Playing the Indian Card

Wednesday, October 15, 2025

Will AI Be the End of Us?

As of a couple of years ago, I was not worried about AI. But the fact that Elon Musk is worried makes me worried. Let’s try to think this through.

My fundamental reason not to be worried was, put simply, if some AI system gets out of control, you can always pull the plug. Not necessarily literally, but it seems simple and obvious to build in a kill switch, so that you can cut power if it goes rogue. As AI gets more completely integrated into our lives, this might become more disruptive; but it is still surely fairly straightforward to craft systems that can be shut down incrementally with a minimum of disruption. Built-in redundancy is a standard engineering principle.

Simple-minded of me? What about more subtle matters, like job loss? What about the possibility that humans are made obsolete by AI?

I’ve had that discussion since the advent of desktop computers in the early 1980s. And it has really been going on since the days of Ned Ludd; since the days of Plato. At every new technological advance, people express the same worry. At the advent of the machine age in the 19th century, everyone feared there would be mass unemployment, and that wealth would concentrate at the top. It did not happen. People invested the improved productivity in owning more stuff. General wealth expanded, and wealth became more equally distributed.

This happened again with the advent of computers. And with the advent of the World Wide Web.

So why not with AI?

There is an argument that AI is different: AI replaces the last human bit, the brain overseeing the operation of the machine. When self-driving cars have fewer accidents than mortal drivers, isn’t that the omega point? The machines can run themselves.

What they still need is a purpose. They need a human to tell them the destination. Without humans, there is no place to go.

This suggests that the greatest need in the future will be for philosophers, theologians, creatives, entrepreneurs, and visionaries. AI will take over STEM and the professions. But it seems to me a computer cannot address purpose or meaning, any more than science and technology can.

Scott Adams thinks that AI has an absolute limit, and will always need human supervision, because it “hallucinates.” In other words, by its nature, artificial “intelligence” cannot make judgements about reality. Only humans can do that; human consciousness is a sort of divine spark.

How many jobs will there be for philosophers, creatives, and entrepreneurs? Quite likely, a lot. When the AI and the machines can so easily turn ideas into products, when every individual already owns a printing press, a broadcast network, an AI accountant, an AI lawyer, a 3-D printer, we may again simply move to more employment, greater wealth, and greater equality.
