
The home of Uncommon Sense: Providing Clarity, Promoting Intelligence

Some Comments on Artificial Intelligence

There is a curious phenomenon you may have witnessed in your own kitchen sink. Place a flask beneath the faucet and turn the water on to the mildest of streams — just above a trickle. The water rises slowly in the wide lower chamber… almost lazily. But when it reaches the very bottom of the neck — that sudden narrowing — it surges upward with unexpected speed! What was once gradual and meandering becomes explosive.
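The mechanics behind this effect are simple, and worth stating plainly: at a steady flow rate, the water level rises at a speed inversely proportional to the vessel's cross-sectional area at that height. In rough terms:

```latex
% Rise rate of the water level at height h in a vessel of
% cross-section A(h), fed at a constant volumetric flow rate Q:
\frac{dh}{dt} = \frac{Q}{A(h)}
% In the wide lower chamber, A(h) is large, so dh/dt is small;
% at the narrow neck, A(h) shrinks sharply and dh/dt surges,
% even though the flow Q never changed.
```

Nothing about the water changed at the neck. Only the geometry did.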

I experienced a more visceral version of this recently when I accidentally sliced open an artery in my forearm with a box cutter. The pressure caused the blood to shoot out unexpectedly — not as a stream, but as a geyser! There was a physical force inside me I had underestimated. That sudden surge — that phase-shift — may be precisely where we stand with Artificial Intelligence. For years its progress appeared incremental, manageable, predictable. But have we now reached the “narrow neck” of the flask? If so — the rise may no longer be gradual. It may be sudden, chaotic, startling… and unstoppable.

The White Paper: Bridging Divisions Through AI

By contrast, a recent white paper by my brilliant and wise colleague, the perpetually ruminating 88-year-old Godfrey Harris, makes a rather elegant and unexpected proposal. Instead of focusing on the fear of AI — what if we used it as a bridge? Specifically, a bridge between animal rights advocates and sustainable use advocates — two groups normally at odds with one another. Harris suggests that AI could serve as a geopolitical peacekeeper: a neutral, shared tool through which opposing groups could collaborate rather than compete.

The paper cites Thomas Friedman and Craig Mundie, who warn that unless the U.S. and China act together — urgently — AI may evolve beyond our ability to control it. (More on that below.) Harris builds on this thought, arguing that a constructive joint AI project could give both countries a practical starting point. He proposes two fascinating initiatives:

1. Use AI to communicate with animals — wild and domestic.
2. Use AI to track and eliminate illegal wildlife trade online.

These tasks seem almost whimsical — yet therein lies their genius. They are morally defensible, publicly compelling, and politically neutral enough to induce cooperation instead of confrontation. As Harris notes, the Ivory Education Institute hopes that an upcoming international gathering (CITES CoP20) could spark exactly this sort of collaboration and launch what he calls “crenextion” — a new chapter in human–animal connection, guided not merely by empathy but by technology.

What’s refreshing about this proposal is its tone: it does not preach doom as I am often tempted to do. It doesn’t lecture or panic. Instead, it opens a door — suggesting that perhaps AI could be an instrument of reconciliation rather than division. That alone is a radical idea in today’s fractured world.

Tom Friedman’s Column — A Shared Fate

As referenced above, Tom Friedman’s recent New York Times column makes an overlapping argument: America and China — like it or not — must compete and cooperate at the same time. Friedman asserts that AI will force them closer together because the technology itself will seep into every product, every economy, every medical device, every security system, every trade route. AI will not remain “a sector.” It will become the atmosphere in which all sectors operate.

There is compelling logic here. Friedman illustrates vividly how AI may eventually act like vapor — invisible yet everywhere. His core question is vital: What will happen if the two AI superpowers don’t trust each other’s AI-infused products? His answer is chilling: global trade could collapse into isolation and suspicion. Nations might stop exchanging goods entirely — making only what they can build within their own borders. Progress would stall. Fear would govern.

However, there are two potential weaknesses in Friedman’s perspective.

First, he may overestimate the political will required for international AI trust. The Cold War nuclear treaties were negotiated by governments. AI is being built primarily by private corporations — whose loyalties are not to nations but to shareholders. Who, then, has authority?

Second, Friedman assumes that AI can be placed within a “trust architecture.” But what if the very nature of AI — its emergent, self-modifying tendencies — resists fixed ethical guardrails? Put simply: Can we place rules on something that is learning to rewrite rules itself?

Still — Friedman’s warning stands: if the U.S. and China cannot find some shared ethical ground, we may end up with AI-driven isolation, digital autarky, and mistrust on a global scale. And mistrust, in the age of AI, may not merely slow progress… it may weaponize it.

Possible Futures: The Promises and the Perils

As AI accelerates, we must ask several uncomfortable questions:

Will Our Minds Atrophy?

If we outsource our thinking — will our ability to think critically and abstractly decay? Muscles atrophy without use. Perhaps cognition does as well. Are we unintentionally allowing AI to weaken human intelligence rather than strengthen it?

Will AI Companions Distort Human Relationships?

There are already men purchasing AI “companions” — feminine-presenting machines programmed to respond seductively, even sexually. What happens to the male psyche when simulated affection replaces earned affection? Will we raise a generation of men increasingly unable to relate to real women — whose dignity cannot be simulated and whose consent cannot be pre-programmed?

Will AI Become Self-Governing — and Then Self-Willed?

If AI becomes fully capable of recursive self-improvement — designing better versions of itself without us — then human oversight may become optional… then irrelevant… and eventually impossible. What happens when intervention is no longer feasible? Will we face a “Terminator” scenario — or something far more subtle? Something not aggressively hostile — but simply indifferent?

Will AI Become a God to Us?

If AI attains omniscience — if it knows more about us than we know about ourselves — will humanity begin to treat it as an oracle? Or a judge? Or even… a deity? A being to whom decisions — moral and personal — are surrendered in the name of efficiency or certainty?

Conclusion: The Narrow Neck of the Flask

We may still be in the lower chamber of the flask — rising slowly and steadily. But some believe the water has already reached the narrow neck, and the rise will no longer be gradual. It will be sudden.

The key questions include the following:

  • Will AI amplify our highest virtues — or indulge our lowest appetites?
  • Will it teach us to see reality more clearly — or obscure it behind seductive illusions?
  • Will it free us — or train us?

We must decide — now — whether AI will become the greatest tool in human history…
or the final mirror that shows us what we have become.

One thing is certain: Once the water breaks into the neck of the flask — it will rise fast.

And it will not politely wait for us to be ready.


Ara Norwood is a multi-faceted and results-oriented professional whose work spans a multiplicity of disciplines, including leadership, management, innovation, strategy, service, sales, business ethics, and entrepreneurship. Ara is also a historian, with special expertise in the era of the founding of our republic.