Limits of Knowledge

Many of the fears about what AI might do in the future come from curves like this:

[figure: an exponential curve, its arrow bending toward vertical]

And curves like this:

[figure: another runaway curve pointing toward infinity]

But there’s an assumption baked into these curves that doesn’t follow from any observation, and that’s the assumption that knowledge and speed have no upper bound. The arrow points almost straight up, as if the curve has a vertical asymptote at some point along the time axis. As if a becalmed future cannot exist.

But it’s far more likely that the asymptote is horizontal, a ceiling on the Y axis. It’s far more likely that there’s a limit to things-we-can-know and speeds-with-which-we-can-compute.

The graph of future computation probably looks more like this:

[figure: a sigmoid curve, bounded below and above]

This is a sigmoid curve, or classic s-curve (of which there are many varieties). It has a lower bound and an upper bound. Of all the things we could assume about the future of AI, it seems more reasonable to me that the space of knowable things is finite and the speed of compute is finite. There’s a finite amount of silicon in the universe. There’s a finite amount of energy pouring out of the sun. There’s a finite amount of mass to fuse and fission for energy. There’s a finite amount of heat we can pour into the atmosphere without boiling ourselves alive.
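For reference, the logistic function is one standard form of sigmoid (the notation here is illustrative, not from the post):

$$ f(t) = \frac{L}{1 + e^{-k(t - t_0)}} $$

Here L is the upper asymptote (the ceiling on capability), k the growth rate, and t_0 the midpoint. For early values of t the curve is nearly indistinguishable from an exponential, which is exactly why early takeoff data alone can’t tell you whether you’re on a runaway curve or an s-curve.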

We could thus argue that the AI takeoff curve has an upper bound. Which means we can dispense with the curve that points to infinity and beyond and talk more rationally about where the upper asymptote lies. Because (and here’s my real hot take) I won’t be surprised if we discover that humans have already gobbled up MOST of the low-hanging fruit of things-to-know.

We may have already accomplished, intellectually, 80% of everything there is to accomplish. One might argue that it’s only 20%. Or as low as 1%. But I believe we will find that our most advanced AIs are only slightly more capable than we are, rather than the thousands or millions of times more capable one may assume from the curves that point toward infinity.

When ChatGPT launched on GPT-3.5 one year ago this month, it achieved a level of usefulness far greater than anything before it. If we think of training tokens as neurons, we’d finally wired up enough of them in a useful enough way that an emergent property arose from the ether. Much as human intelligence and consciousness are emergent properties of our brains reaching a certain size and neuronal density.

As we train much larger models with more advanced techniques, these LLMs will get even smarter and more capable. Perhaps twice as smart. But when progress seems to stall and folks get confused about diminishing returns, I think a sigmoid curve might be a better model to keep in mind than a curve pointing to infinity. We may have already taken up most of the space of what’s possible.

What will be left, then (and what ChatGPT already excels at) is speed. That will be the useful feature of AI in the future. I’d argue it’s already the most useful thing about AI. I can use an image generator to get art immediately rather than waiting hours or days for a human to create it. I could ask ChatGPT to churn out this article in under a minute, rather than the days of thinking and hours of writing it will take me. Right now, the result of either is slightly worse than the output of a human. But it arrives so much faster that the trade-off is worth it.

Pretty soon, we will enter an era in which AI is slightly better than us at almost every endeavor. I don’t understand the assumption that it will go from there to infinitely better. Or even ten times better. The fears we have about AI in the future are based on a strange assumption that has no foundation here in the present.


6 responses to “Limits of Knowledge”

  1. Hugh,
Obviously time will reveal the true answer. But my experience is that any new technology’s adoption spurs a myriad of associated inventions. When the personal computer and smartphone were created, no one imagined they would open the door to thousands of apps that continue to flood the market. The uses for AI are just beginning to be imagined. Your line graph would better illustrate the future if it looked like a multi-branched tree. In any case, thank you for all you are doing, and I wish you a wonderful Season 2 of Silo. Can’t wait.

And do you apply the same hypothesis to the expansion of the universe? Hubble was indecisive on this point. No one knows the precise rate or consistency of the expansion. Personally, I think it may keep getting faster and faster, as, theoretically, its energy is infinite and self-perpetuating.

It seems odd to assume that humans are near the limits of intelligence. AI systems can already beat us at Go and maths, and in some senses at art. Intelligence is expensive biologically: why would we be much smarter than the minimum threshold needed for technological civilization? Arguments about the amount of matter in the universe or available energy also don’t actually provide strong bounds on eventual AI capabilities, because those numbers are stupidly big.

    Also, large amounts of speed and parallelism with slightly greater-than-human intelligence would still be very powerful. Consider a civilization of human geniuses thinking a hundred times faster than ours.

    1. Great points.

4. PS. The same may be said for a computer writing its own code.

  5. The fear is that jobs will be lost, and poverty will be worse than it already is.

    We live under capitalism. The few control the many.

    Machines will give the few excuses to give the many less than they already get.

    That’s basically it.
