Many of the fears about what AI might do in the future come from curves like this:
[Figure: a curve shooting up toward infinity]
And curves like this:
[Figure: a second curve, also pointing almost straight up]
But there’s an assumption baked into these curves that doesn’t follow from any observation: that knowledge and speed have no upper bound. The arrow points almost straight up, as if there were a vertical asymptote somewhere along the X axis. As if a becalmed future cannot exist.
But it’s far more likely that the asymptote is horizontal, a ceiling at some value on the Y axis. It’s far more likely that there’s a limit to things-we-can-know and to speeds-with-which-we-can-compute.
The graph of future computation probably looks more like:
[Figure: an s-shaped curve that rises and then flattens at an upper bound]
This is a sigmoid curve, or a classic s-curve (of which there are many varieties). It has both a lower and an upper bound. Of all the things we could assume about the future of AI, it seems more reasonable to me that the space of knowable things is finite and the speed of compute is finite. There’s a finite amount of silicon in the universe. There’s a finite amount of energy pouring out of the sun. There’s a finite amount of mass to fuse and fission for energy. There’s a finite amount of heat we can pour into the atmosphere without boiling ourselves alive.
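To make the contrast concrete, here’s a minimal sketch of the two shapes (my illustration, not something from the post; the rate, ceiling, and midpoint values are arbitrary assumptions, not claims about real AI progress):

```python
# Illustrative sketch: exponential growth vs. a logistic s-curve.
# All parameter values below are made up for demonstration.
import math

def exponential(t, rate=0.5):
    # Grows without bound: no horizontal asymptote anywhere.
    return math.exp(rate * t)

def logistic(t, ceiling=100.0, rate=0.5, midpoint=10.0):
    # The classic s-curve: hugs its lower bound (0) early on, then
    # flattens toward `ceiling`, the upper asymptote on the Y axis.
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 25, 4):
    print(f"t={t:2d}  exponential={exponential(t):>10.1f}  logistic={logistic(t):>6.1f}")
```

No matter how you tune it, the exponential eventually blows past any value you name, while the logistic levels off at its ceiling. The argument here is that the ceiling is real; the open question is where it sits.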
We can thus argue that the AI takeoff curve has an upper bound. Which means we can dispense with the curve that points to infinity and beyond and talk more rationally about where the upper asymptote lies. Because (and here’s my real hot take) I won’t be surprised if we discover that humans have already gobbled up MOST of the low-hanging fruit of things-to-know.
We may have already accomplished, intellectually, 80% of everything there is to accomplish. One might argue that it’s only 20%. Or as low as 1%. But I believe we will find that our most advanced AIs are only slightly more capable than we are, rather than the thousands or millions of times more capable one may assume from the curves that point toward infinity.
When ChatGPT, powered by GPT-3.5, was released one year ago this month, it achieved a level of usefulness far greater than anything before it. If we think of training tokens as neurons, we’d finally wired up enough of them in a useful enough way that an emergent property arose from the ether. Much as human intelligence and consciousness are emergent properties of our brains reaching a certain size and neuronal density.
As we train much larger models with more advanced techniques, these LLMs will get even smarter and more capable. Perhaps twice as smart. But when progress stalls and folks seem confused by diminishing returns, I think a sigmoid curve is a better model to keep in mind than a curve that points to infinity. We may have already taken up most of the space of what’s possible.
What will be left, then (and what ChatGPT already excels at) is speed. That will be the most useful feature of AI in the future. I’d argue it’s already the most useful thing about AI. I can use an image generator to get art immediately rather than waiting hours or days for a human to create it. I could ask ChatGPT to churn out this article in under a minute, rather than the days of thinking and hours of writing it will take me. Right now, the result of either is slightly worse than the output of a human. But the results arrive so much faster that the trade-off is worth it.
Pretty soon, we will enter an era in which AI is slightly better than us at almost every endeavor. I don’t understand the assumption that it will go from there to infinitely better. Or even ten times better. The fears we have about AI in the future are based on a strange assumption that has no foundation here in the present.