A year ago, I wrote a piece about the asymptotic nature of intelligence and the limits of AI. It’s long been my contention that intelligence is not boundless; it doesn’t shoot up an infinite curve to the stratosphere. Rather, there is an upper limit on what can be known and on the inferences that can be made from what’s known. That is: a complete understanding of the universe may theoretically be possible, but to simulate that amount of knowledge, you might basically need to build a second universe in which to house it. (The old cartography joke is that the best map is a one-to-one representation.)
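If it helps to picture the claim, here is one toy way to write it down; the logistic curve and its ceiling are my illustration, not anything the argument depends on. Knowledge K(t) climbs steeply at first and then flattens against an upper limit L:

```latex
K(t) = \frac{L}{1 + e^{-k(t - t_0)}}, \qquad \lim_{t \to \infty} K(t) = L
```

The point of the shape is only this: the early, near-vertical stretch of the curve looks exactly like infinite takeoff even when a ceiling exists.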
What would it look like to approach the asymptote of intelligence? A whole lot like the last year. Before you read any further, I highly recommend watching the following video:
The video is about OpenAI and ChatGPT, but it could just as easily be about whatever company had seized the low-hanging fruit of LLM scaling, projected that scaling onward to infinity, and believed its own hype. The AI bubble is about to pop, and it’s mostly going to pop because too many people in Silicon Valley believed Ray Kurzweil’s inane hypothesis that intelligence is infinite and that we are going to blast right through human intelligence toward something alien and godlike. This became an unspoken religion, a race toward the “singularity” and AGI. It has always been based on a supposition for which there is no evidence — but plenty of desire.
One of the comments on the YouTube video above, from TonyGrayCanada, summed it up nicely: “Scaling LLMs to get to AGI is like using a ladder to get to heaven. The length of the ladder isn’t the problem.”
The length of the ladder isn’t the problem.
And the problem might not be that we simply need a different way to get there. The problem most likely is that there’s no such thing as heaven. No infinite intelligence. No alien mind that will usher us into the fabled singularity.
However … what’s coming and what’s here is amazing enough, and I think this point gets lost in the crazy hype curves. The human brain is ASTOUNDING. We evolved from single-celled organisms on a wet ball in the middle of the cosmos, and then one day we peered deep into the cell to discover the helix of DNA, the atoms that make up those helixes, the quarks that make up those atoms. We also gazed out into the infinite expanse and figured out black holes and neutron stars and came up with some decent guesses about what holds it all together.
THIS IS BONKERS and we don’t talk about it enough. A jiggling of atoms became self-replicating and later figured out a whole lot about quarks and the cosmos. If you look at our origin and where we are, and you assume a theoretical limit to what can be known, we are most likely closer to the limit than we are to complete ignorance. We’re most of the way there. We built cities out of mud, and all the miracles within them. Some hairless apes. It’s crazy.
And AI is getting crazy, because it speeds up all the amazing things that humans can do. Writing computer code is slow and laborious; AI is automating that. In the next few years, we will reach a point where you might not purchase an application to solve a problem; you’ll just create your own. (Or, most likely, your AI agent will be aware of the dozens of free apps others have already made to solve the same problem, and will suggest or tweak an existing one.)
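For a flavor of what “just create your own application” can look like even now, here’s a minimal sketch using the OpenAI Python SDK. The model name, the prompt, and the idea of saving the generated script for review are all assumptions for illustration, not a recipe:

```python
# Minimal sketch: ask an LLM to draft a small, single-purpose app.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# set in the environment; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable code-writing model would do
    messages=[{
        "role": "user",
        "content": (
            "Write a self-contained Python script that renames every "
            ".jpeg file in the current directory to use a .jpg extension. "
            "Reply with only the code, no commentary."
        ),
    }],
)

# Save the generated program so a human can review it before running it.
with open("rename_jpegs.py", "w") as f:
    f.write(response.choices[0].message.content)
```

In practice you’d read the generated file before running it; the agent version of this just wraps that review-and-retry loop so you never see the code at all.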
Some define AGI as human-level intelligence, in which case we are already there. It might be funny to point out a hallucination here or a logical error there, but for every example of AI getting something wrong, another model gets it right, or the model that got it wrong has already been improved. Humans get things wrong all the time. Optical illusions persist even after we’re told what’s happening. Superstition persists despite all our scientific progress. And we hallucinate constantly.
The LLMs of today are more than capable of replicating what the human brain does, which is miraculous enough. New breakthroughs in physics are now coming from AI systems. Brand-new math proofs. New medicines from complex protein folding. These are all things humans can do and would eventually get to on our own, but AI is speeding up cognition and helping us get there a bit faster. Silicon Valley is betting the farm on this. But most of humanity is asking, “So what?”
The “So what?” is critical and should not be ignored. An LLM can already devise a better system of governance, one that would bring the most good to the greatest number of people. It can do that today. Right now. In mere minutes. But so can we! The fact that we don’t and can’t has nothing to do with what’s possible in our technology and everything to do with what’s wrong in our biology. We are petty, insecure, jealous, superstitious animals. We are already at the point where we would be better off automating our decision-making with an LLM. If an LLM had control over me, I would eat healthier, exercise more regularly, and make fewer mistakes. It knows a whole lot more than I do about the universe, about how to fix things, about how to be witty. It would write this blog post better than I can, and do it in an eyeblink.
So what?
Computers are already better than me at chess, and yet I continue to play. They built a robot that can play billiards almost perfectly, and yet I love chalking up a cue and the crack of a ball knocking another into a leather pocket. There’s a human out there who is better than me at anything I can do, and yet it feels good to do many of those things. We are jiggling bundles of atoms with feelings and moods, and we’re going to keep indulging in them and falling prey to them.
Earlier I pointed out that there’s a finite limit to what can be known about the universe. But there is no upper limit to creativity. We can be weird and avant-garde in a way that physics can’t. Physics is a set of rules and a pattern of ordering matter. Understanding the universe is a quest to simplify those rules and to grasp all the possible states of matter. Creativity is a wild exploration of all the things that aren’t possible. It’s infinite because it comes from randomness. Knowledge is finite because it comes from order.
A perfect future, where humans and their technology reach a kind of symbiotic homeostasis, would be one in which machines handle all the order that humans aren’t interested in, freeing up more time to engage in randomness. That doesn’t mean an end to work, because what many of us do for play qualifies as work for someone else. At times, doing the dishes feels like play to me. My hands want to be there, getting wet, removing grime, deriving aesthetic pleasure from watching a thing become clean and dry, ready to use again. There is no end to play or creativity.
This perfect future would be full of enough robot doctors that any injury or illness is seen to immediately. It would also include lots of human doctors doing the same thing, because for them it is play. It would be full of AI-written novels that readers enjoy, but also human authors who can’t stop writing for the pleasure of it. The income side of things would be solved, because everyone would have food, shelter, security, healthcare, and all the necessities. But some would have more than others, because we are still moody little bundles of atoms. And some would try to harm others for the same reasons. But things would get better and better for the vast majority of people.
That’s a future we could design and work toward right now, with existing technology and wealth. But … of course we won’t do that. We aren’t that far evolved, and we may never be. What’s more likely is that our societies will crumble because we couldn’t be human to one another. Men will continue to backslide into the barbarism from which we came, built on aggression and fear, and women will keep deciding to have less to do with us and more to do with themselves and one another. Population collapse will accelerate; we will reach a point where economies contract, leading to wars of aggression and grievance. Pockets of rationality will make progress again, much like the Greeks among the Romans, and that progress will be seized and used to build back toward what we have today, where it will collapse again. The oscillations will speed up, because lost knowledge will be rediscovered more quickly from the artifacts left behind. Eventually, one of these oscillations will stumble upon a technology that can put an end to us all, and one among us will deploy it immediately, leaving the Earth for some future hairless ape to have a go at it.
The limit holding us back will never be the limits of AI, but rather the limits of our biology. Can we stop hurting ourselves and others? Can we expand our circles of empathy until they include every living thing, and even most non-living things? Can we be satisfied with less than our neighbors if it means we all have the basic necessities of life? I’m an atheist, and the Ten Commandments start off with some very weak sauce about fearing no other gods and what not to believe, but even I can see that most of our problems would be solved if we lived by the rest of what’s there. No lying. No jealousy. No killing. We’ve had all the answers for thousands of years. We still can’t abide by them.
This is why OpenAI is in trouble, why the AI bubble will burst, and why AGI will not fix everything. We know what to do. How to live. How to solve all our problems. There’s no deficiency in our brains. The problem is and always has been our hearts.