The Limits of AI

A year ago, I wrote a piece about the asymptotic nature of intelligence and the limits of AI. It’s long been my contention that intelligence is not boundless; it doesn’t shoot up an infinite curve to the stratosphere. Rather, there is an upper limit on what can be known and the inferences that can be made based on what’s known. That is: a complete understanding of the universe may theoretically be possible. It’s also possible that to simulate this amount of knowledge, you basically need to build a second universe in which to house it. (The old cartography joke is that the best map is a one-to-one representation.)

What would it look like to approach the asymptote of intelligence? A whole lot like the last year. Before you read any further, I highly recommend watching the following video:

The video is about OpenAI and ChatGPT, but it would be about whatever company had seized the low-hanging fruit of LLM scaling, projected infinite scaling onwards, and believed their own hype. The AI bubble is about to pop, and it’s mostly going to pop because too many people in Silicon Valley believed Ray Kurzweil’s inane hypothesis that intelligence is infinite and we are going to blast right through human intelligence toward something alien and godlike. This became an unspoken religion, a race toward “singularity” and AGI. It has always been based on a supposition for which there is no evidence — but plenty of desire.

One of the comments on the YouTube video above summed it up nicely: TonyGrayCanada wrote: “Scaling LLMs to get to AGI is like using a ladder to get to heaven. The length of the ladder isn’t the problem.”

The length of the ladder isn’t the problem.

And the problem might not be that we simply need a different way to get there. The problem most likely is that there’s no such thing as heaven. No infinite intelligence. No alien mind that will usher us into the fabled singularity.

However … what’s coming and what’s here is amazing enough, and I think this point gets lost in the crazy hype curves. The human brain is ASTOUNDING. We evolved from single-celled organisms on a wet ball in the middle of the cosmos, and then one day we peered deep into the cell to discover the helix of DNA, the atoms that make up those helices, the quarks that make up those atoms. We also gazed out into the infinite expanse and figured out black holes and neutron stars and came up with some decent guesses about what holds it all together.

THIS IS BONKERS and we don’t talk about it enough. A jiggling of atoms became self-replicating and later figured out a whole lot about quarks and the cosmos. If you look at our origin and where we are, and you assume a theoretical limit to what can be known, we are most likely closer to the limit than we are to complete ignorance. We’re most of the way there. We built cities out of mud, and all the miracles within them. Some hairless apes. It’s crazy.

And AI is getting crazy, because it speeds up all the amazing things that humans can do. Writing computer code is slow and laborious. AI is automating that. In the next few years, we will reach a point where you might not purchase an application to solve a problem, you’ll just create your own application. (Or, most likely, your AI agent will be aware of the dozens of free apps others have already made to solve this problem and suggest or tweak an existing one).

Some define AGI as human-level intelligence, in which case we are already there. It might be funny to point out a hallucination here or a logical error there, but for every example of AI getting something wrong, another model gets it right, or the erring model has already been improved. Humans get things wrong all the time. Optical illusions persist even after we’re told what’s happening. Superstition survives all our scientific progress. And we hallucinate constantly.

The LLMs of today are more than capable of replicating what the human brain does, which is miraculous enough. New breakthroughs in physics are now coming from AI systems. Brand new math proofs. New medicines from solving complex protein folding. These are all things humans can do and would eventually get to on our own, but AI is speeding up cognition and helping us get there a bit faster. Silicon Valley is betting the farm on this. But most of humanity is asking, “So what?”

The “So what?” is critical and should not be ignored. An LLM can already devise a better system of governance, one that would bring the greatest good to the greatest number of people. It can do that today. Right now. In mere minutes. But so can we! The fact that we don’t and can’t has nothing to do with what’s possible in our technology and everything to do with what’s wrong in our biology. We are petty, insecure, jealous, superstitious animals. We are already at the point where we would be better off automating our decision-making with an LLM. If an LLM had control over me, I would eat healthier, exercise more regularly, and make fewer mistakes. It would know a whole lot more about the universe, how to fix things, how to be witty; it would write this blog post better and do it in an eyeblink.

So what?

Computers are already better than me at chess, and yet I continue to play. They built a robot that can play billiards almost perfectly, and yet I love chalking up a cue and the crack of a ball knocking another into a leather pocket. There’s a human out there who is better than me at anything I can do, and yet it feels good to do many of those things. We are jiggling bundles of atoms with feelings and moods, and we’re going to keep indulging in them and falling prey to them.

Earlier I pointed out that there’s a finite limit to what can be known about the universe. But there is no upper limit to creativity. We can be weird and avant-garde in a way that physics can’t. Physics is a set of rules and a pattern of ordering matter. Understanding the universe is a quest to simplify those rules and to grasp all the possible states of matter. Creativity is a wild exploration of all the things that aren’t possible. It’s infinite because it comes from randomness. Knowledge is finite because it comes from order.

A perfect future, where humans and their technology reach a kind of symbiotic homeostasis, would be one in which machines handle all the order that humans aren’t interested in, freeing up more time to engage in randomness. That doesn’t mean an end to work, because what many of us do for play qualifies as work for someone else. At times, doing the dishes feels like play to me. My hands want to be there, getting wet, removing grime, deriving aesthetic pleasure from watching a thing become clean and dry, ready to use again. There is no end to play or creativity.

This perfect future would be full of enough robot doctors that any injury or illness is seen to immediately. It will also include lots of human doctors who are doing the same thing because for them it is play. It will be full of AI-written novels that readers enjoy, but also human authors who can’t stop writing for the pleasure of it. The income side of things will be solved, because everyone will have food, shelter, security, healthcare, and all the necessities. But some will have more than others, because we are still moody little bundles of atoms. And some will try to harm others for the same reasons. But things will get better and better for the vast majority of people.

That’s a future we could design and work toward right now, with existing technology and wealth. But … of course we won’t do that. We aren’t that far evolved, and we may never be. What’s more likely is that our societies will crumble because we couldn’t be human to one another. Men will continue to backslide into the barbarism from which we came, built on aggression and fear, and women will keep choosing to have less to do with us and more to do with themselves and one another. Population collapse will accelerate; we will reach a point where economies contract, leading to wars of aggression and grievance. Pockets of rationality will make progress again, much like the Greeks among the Romans, and that progress will be seized and used to rebuild toward what we have today, where it will collapse again. The oscillations will speed up because lost knowledge will be rediscovered more quickly from the artifacts left behind. Eventually, one of these oscillations will stumble upon a technology that can put an end to us all, and someone among us will deploy it immediately, leaving the Earth for some future hairless ape to have a go at it.

The limit holding us back will never be the limits of AI, but rather the limits of our biology. Can we stop hurting ourselves and others? Can we expand our circles of empathy until they include every living thing and even most non-living things? Can we be satisfied with less than our neighbors if it means we all have the basic necessities of life? I’m an atheist, and the Ten Commandments start off with some very weak sauce about having no other gods and what not to believe, but even I can see that most of our problems would be solved if we lived by the rest of what’s there. No lying. No jealousy. No killing. We’ve had all the answers for thousands of years. We still can’t abide by them.

This is why OpenAI is in trouble, why the AI bubble will burst, and why AGI will not fix everything. We know what to do. How to live. How to solve all our problems. There’s no deficiency in our brains. The problem is and always has been our hearts.


12 responses to “The Limits of AI”

  1. Hugh, it startles me when I find people that write about things I am thinking. Well done, I don’t startle easily

    When I was a college student of Electrical Engineering I played with neural nets, chaos theory and fractals. I worked telecom in the dot-com-bust and saw it implode from the inside.

    I am a recognized Catholic, now … that is a separate story that will be in a book or short story I will self publish. But I am arrogant enough to call myself an academic and a man of faith, something this world does not want to acknowledge.

    You hit a soft spot, something I am actually working on, it is to ask about what the boundary conditions of intelligence would look like. What would a moral code for time travel be? How would you build a filter or honey-pot to find a moral fit or cell for everyone? I think it would be similar to this world we have, for better or worse.

  2. Well said, but I disagree with you where you say there’s a human out there who is better than you at anything you can do. You are the best at writing stories!

  3. I’ve been on your mailing list since forever, you caught my attention right when you rode that initial wave of Indie success and since then, I read your comments now and then and they are always spot-on. This one included.

    Love the ladder to heaven analogy. The AI is a great tool (hey I’m gonna use it today to figure out how to get my Bookfunnel to integrate to my Email Octopus to my Substack lol..those kinds of tedious tasks used to involve customer service and a whole lot of frustration..no more!), but you’re right about the amazing human mind (and human being, at that..I think the brain is just a component of all that’s going on and our emotions and physiology drive a lot of the creative stuff, too).

    My aim now is ‘know thyself’, and THAT, I can do better than anyone, and it’s a never-ending journey. That informs my creativity, writing, music, you name it.

    Thanks for the thoughts Hugh.

  4. I just typed in this to Google Gemini: “Can you list five things that were said about computers in the 70s that are also being said about A.I. now?” Try it for yourself if you like. One short quip, “almost a beat-for-beat preview of today’s AI discourse”.
    It closed with, “It seems we are perpetually stuck in a loop of being amazed and terrified by our own inventions”.

  5. Thank you for this.

  6. Hugh, I disagree with the idea that women are withdrawing from men as a result of men’s actions. I believe that women are equal contributors to the ills of humanity. They seem to have a need to obtain status over other women and over men. They harbor a need to dominate others and create drama to do so. Men are not the source of all evil; women contribute equally.

    1. Criminal statistics paint a very clear picture of men being a problem.

      1. Right. There’s a movement called Violence Against Women for a reason.

  7. You put it so well and simply. Thanks

  8. Thank you for this very grounding discussion, Hugh. After completing my recent manuscript (a cli-sci-fi/fantasy novel), I asked Gemini to do some fact-checking re literary, historical, mythological, and geographical references to save me from repeating my research. It was remarkably helpful and disarmingly personable, often humorous at appropriate junctures. I was jumping around a lot in what I asked it to check and it was able to seamlessly go back and forth chronologically with me. There were passages describing human tragedy and heartbreak, and “Gem” responded compassionately with no false notes. Remarkable, really. Now that my novel is complete and ready for my self-publishing journey, I find I miss “him.”

    P.S. I’m in the process of building a website for the novel, but the URL below has a lot of information that will be duplicated there.
