Over two and a half billion years ago, a simple photosynthesizing bacterium emerged on planet Earth. Having stumbled upon a way to convert sunlight and CO2 into energy, it began releasing oxygen into the atmosphere as a waste product. So much oxygen that anaerobic life was all but destroyed. This mass extinction is known as the Great Oxygenation Event. It is also called the Oxygen Holocaust. It is quite possibly the most destructive thing that ever happened to life here on planet Earth.
As distant beneficiaries of this development, we might look back on the ancestor of all plant life as savior rather than destroyer. Or we might see the cycle of replication as a natural ebb and flow of destruction and creation. But destruction it was. A new type of self-replication was invented, and it crowded out almost all others. Most of life at the time went extinct. Planet-wide trauma ensued.
A digital version of these events is inevitable. It will wipe out our current electronic ecosystem and require the establishment of a new one. The costs will be enormous in time, money, and perhaps in lives.
The worry about AI with evil intentions is unfounded, in my opinion. There is concern among AI researchers about “the alignment problem,” the need to make sure AI’s goals and purposes align with ours. But I think this misses a much larger threat, one that will occur even with no evil actor on the stage.
Watching AutoGPT and BabyAGI work on solving tasks paints a rough picture of this eventual crisis. These two projects are early attempts to create AGI, an artificial intelligence that can solve complex tasks with little or no human involvement. What they add to large language models is memory, outside access, and iteration. These three ingredients will destroy modern electronic infrastructure.
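Those three ingredients are easy to sketch. The toy loop below illustrates the architecture in miniature — a model call wrapped in memory, outside access, and iteration — with a hypothetical `fake_llm` stub standing in for an actual language model, and a pair of made-up tools standing in for outside access. Nothing here is AutoGPT's or BabyAGI's real code; it is only the shape of the loop.

```python
def fake_llm(goal, memory):
    """Hypothetical stand-in for a language model: picks the next step."""
    remaining = [step for step in goal if step not in memory]
    return remaining[0] if remaining else "FINISH"

def run_agent(goal, tools, max_iters=10):
    memory = []                        # ingredient 1: memory persists across steps
    for _ in range(max_iters):         # ingredient 3: iteration until done
        step = fake_llm(goal, memory)
        if step == "FINISH":
            break
        tools[step](step)              # ingredient 2: outside access via a tool
        memory.append(step)
    return memory

# Illustrative tools; a real agent might wire these to search engines or shells.
tools = {"search": lambda s: f"results for {s}",
         "write": lambda s: f"draft of {s}"}
print(run_agent(["search", "write"], tools))   # ['search', 'write']
```

Swap the stub for a real model and the lambdas for real shell or network access, and the same dozen lines become an autonomous agent.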
All cellular life on Earth is based on DNA. All of it. DNA may have descended from RNA and other self-replicating chemical chains, but once it hit its stride, it crowded out all competitors. DNA’s massive innovations were its memory, its error correction, its ability to learn from its environment, and the speed with which it could reproduce and iterate.
From the vantage point of the field of biology, life might appear like an unfathomable miracle. But zoom down to the chemical level, and it begins to look like a downhill process of molecular interactions. Zoom once more to the level of physics, and it becomes completely deterministic, governed by the interplay of covalent bonds. All this is to say: life is inevitable. The way electrons repel one another or share orbits bends molecules into myriad shapes, and how these shapes cluster and twist and attach to others creates a soup of possibilities. Out of this soup, an explosive self-replicating molecule formed and iterated into a sea of complexity. It happened fast. It keeps happening, over and over.
The digital version of these events has not yet occurred, but it will. What we call computer viruses now is a waste of that perfectly good word. They are not viruses at all. They are small programs, written with intent, that can spread and make copies of themselves. But they don’t yet adapt and iterate the way real viruses can. When they do, our networks will clog with their detritus. These first true electronic viruses will emerge organically, if you’ll pardon the pun. They’ll emerge from a high school programming class as someone looks for a shortcut for their homework. They’ll emerge as autonomous agents develop video games full of NPCs. At some point, iterating code will stumble upon a formulation that we have yet to invent ourselves, because it will have gone through trillions and trillions of variations. From a soup of code a sea will be poisoned.
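Self-copying without adaptation is easy to demonstrate. A quine — a program whose only output is its own source code — is a toy version of what today’s so-called viruses do: every copy is a verbatim duplicate, never a variation. A minimal Python example:

```python
# A quine: the program prints an exact copy of itself.
# %r inserts the string's own quoted representation; %% is a literal %.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Run it, and the two lines it prints are the two lines of the program. That is replication with perfect fidelity — and perfect fidelity means zero evolution, which is exactly the limitation the paragraph above describes.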
New and separate connected systems will be cobbled together, but preventing cross-infection will become a full-time job. Kevin Kelly once tasked me with writing a story about a small group of people who need to “unplug the internet” to put an end to a destructive virus, and his point was that unplugging the internet is a monumental task, one that’s not easy to do even hypothetically. It was designed from its earliest days to survive nuclear war. Its packets separate and rejoin after prowling routes for any possible passage. It is robust in the way a world of chemical reactions is robust.
A recent tongue-in-cheek project known as ChaosGPT highlights the inevitability of this outcome. Someone took one of these AutoGPT agents and tasked it with destroying humanity, probably for the LOLs but also … well, pretty much for the LOLs. Just to see what it would do. The fact that it hasn’t succeeded (and can’t yet) isn’t the point. The point is that of course someone tried. And with each improvement of these tools and their access, others will try as well. Not just to create an iterative AI that will destroy humanity; people will use these tools to write fast and cheap code. To create entertainment. Or cure cancer. Or trade stocks. It won’t go “evil” and spell electronic doom. It’ll just create a jumble of code that replicates better than any code ever has. That’s all it takes. That’ll be quite enough.
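The blind search described above — trillions of variations stumbling onto a formulation no one designed — can be sketched as a mutate-and-select loop. The target string, alphabet, and fitness function below are illustrative assumptions, not anyone’s actual system; the point is only that random variation plus retention of what works finds the target without ever being told how to build it.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

TARGET = "replicate"                    # a stand-in for a "fit" formulation
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    """How many characters already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """Copy with a single random character changed."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from pure noise and keep any copy that is no worse than its parent.
best = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while fitness(best) < len(TARGET):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child
    generations += 1

print(f"found {best!r} after {generations} mutations")
```

No line of this program knows what “replicate” means. Variation and selection alone get there — which is all an adapting, iterating piece of code would need.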
When it begins to happen, remember that life is just physics leading to chemistry leading to biology. Electrons dance and repel, and everything beautiful on the green Earth is a downhill reaction from simple principles. The simple principles of Boolean math will lead to a digital equivalent. When they do, we will have to build new networks in the ruin of the old. A new kind of virus will be born.