Warning. What follows is the sort of stream-of-consciousness nonsense that I would blog more about if I knew no one visited this blog. You’ve been warned. Turn back now.
In the comments of my recent KDP is for Chumps post, a reader named Enabity, the author Paul Draker, and I got to debating the chances of artificial intelligence arising in the near future and writing award-winning novels. The three of us occupy different but slightly overlapping positions on the spectrum of optimism and pessimism on this front.
On the optimistic side, we have Paul Draker, the brilliant author of these books, who sees a computer passing the Turing Test by 2029. I'm assuming he means a real Turing Test and not the annual AI contest that runs a test by the same name. (Last year's contest saw the nearest thing to a "win" yet, with the entry of a chatbot posing as a 13-year-old boy. A true Turing Test win would be a computer that hardly anyone who conversed with it would guess is not human.)
Paul mentions picking 2029 because Ray Kurzweil popularized that date for human-level machine intelligence; Kurzweil's date for the singularity proper (the day we all upload into a computer and join a collective consciousness) is 2045. If Ray were in our conversation, making it a foursome, he would be the guy who thought the Jetsons lived just around the temporal corner. Brilliant man, but overly optimistic.
Enabity, on the other hand, sees very large unsolved problems for the development of AI and doesn’t give a date for computers to write full-blown novels indistinguishable from human-authored novels. I’m guessing he would put this accomplishment hundreds of years out from now.
For myself, I tend to be cautious with forecasts and would put the date that computers write entire novels indistinguishable from human-authored novels at 2040. (Incidentally, this is the subject of a current WIP of mine, The Last Storyteller. It’s also something I blogged about at length last year.) For full-blown AI, I’m guessing we’ll see it around 2100, give or take 15 years.
The conversation on AI soon slid from computers writing novels to the singularity, and Enabity made this thought-provoking point:
“If runaway proliferation of computers is in our future, why isn’t it in our past? It would take only about a million years for such a proliferation to completely occupy the Milky Way.”
This is profound stuff. Supremely profound. Enabity is asking: if our eventual unleashing of digital technology throughout the universe is guaranteed, and if we are not alone in the universe, why hasn't it already happened?
To put it another way: If we aren’t alone in the Milky Way, and technological advancement leads to inevitable galactic saturation, then the absence of galactic saturation might lead us to suspect:
1) Technological advancement is limited.
2) We are the current frontrunners in the Milky Way (or at least, we are within a million years of anyone else, which is a dead heat over the course of billions of years).
Paul quickly and hilariously dubs Enabity's point the "Ferminator Paradox," which cleaned my monitor up good. This is a mash-up of one of the great film franchises of all time (though they are working on destroying that with every release beyond the fourth film. In fact, a great film idea: A movie studio sends a robot back in time to prevent various sequels and prequels from destroying the money-making potential of once-classic films. I know a few I'd go after).
Sorry. Right. So the Ferminator Paradox mashes up the Terminator plot of unstoppable AI with the Fermi Paradox, which is the observation that if advanced life is so common, then where the hell is it?
That brings us to the Drake equation, the worst bit of math you've ever encountered (at least until I reveal its rival equation in just a few moments). The Drake equation starts with the absurd number of stars in the Milky Way (100–400 billion), and whittles them down by various probabilities to get at the number of sufficiently technologically advanced civilizations that are currently beaming signals (like soap operas) out into space.
This is the number of stars we should be able to point a dish at and hear something. (Of course, they’ll have to have started beaming long enough ago for their signals to get here. Space is big. REALLY big.)
Every part of the Drake equation is guesswork. But the point of the equation is that the initial number (the number of stars in the Milky Way) is so high that it hardly matters. Make any guess as to the various probabilities (how many stars have planets, how many of those planets have water, etc.), and you still get a very large number at the end. So space should be full of aliens dying on their deathbeds while their spouses make love with their brothers and sisters and everyone finds out about the illegitimate kid they had twenty years ago who has been in Mexico all this time but has now returned, is gorgeous in every possible way, but can't act very well.
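For the curious, here's what that whittling looks like as a back-of-the-envelope calculation. (To be pedantic, the canonical Drake equation actually starts from the galaxy's star-formation rate and includes a term for how long a civilization keeps broadcasting; the toy version below follows the start-big-and-whittle framing above, and every fraction in it is an illustrative guess, not a measurement.)

```python
# Toy Drake-style whittling: start with all the stars in the Milky Way and
# multiply by guessed fractions. Every value below is a placeholder guess.
n_stars            = 200e9  # stars in the Milky Way (somewhere between 100 and 400 billion)
f_with_planets     = 0.5    # fraction of stars with planetary systems
f_habitable_planet = 0.2    # fraction of those with a planet in the habitable zone
f_life             = 0.1    # fraction of habitable planets where life appears
f_intelligence     = 0.01   # fraction of living worlds that evolve intelligence
f_broadcasting     = 0.01   # fraction of intelligent species beaming signals right now

civilizations = (n_stars * f_with_planets * f_habitable_planet
                 * f_life * f_intelligence * f_broadcasting)
print(f"Broadcasting civilizations in the Milky Way: ~{civilizations:,.0f}")  # ~200,000
```

Plug in almost any guesses you like; the starting number is so enormous that you still end up with a galaxy full of soap operas.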
Fermi’s Paradox and Drake’s Equation have something to say to one another. It would be a bit of a heated discussion, in fact. What’s interesting is that Fermi’s Paradox came first. Fermi posed his conundrum in the 50s. Drake penned his reply in the 60s. Drake was one of the first to begin an active hunt for signs of intelligent life elsewhere, what became known as SETI. There have been a pile of optimistic searchers ever since, like Carl Sagan, and their pronouncements have all been wrong. So far, Fermi wins. Which is scary stuff, right? We keep listening to the darkness, and we haven’t heard a peep.
Now let's get back to the three-way conversation in the comments of that KDP post: Paul Draker's response to Enabity, on why we don't already see a proliferation of computers throughout the universe, is that he doesn't have an answer. I've already posited two of the obvious possibilities, which are that technology has asymptotic limits and/or that we are the current frontrunners in the Milky Way. But neither of these is satisfactory. There are no indications that technology faces limits that would rule out AI or even biological immortality. Instead, we are seeing exponential progress where once it was linear.
(In a recent article, sent to me by Paul Draker, Ray Kurzweil makes this point: Thirty paces taken linearly gets you 30 meters of distance. Thirty paces taken exponentially, doubling each stride, gets you a BILLION meters. We are currently witnessing the exponential expansion of technology.)
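A quick sanity check of that arithmetic, assuming "exponentially" means each stride doubles the last:

```python
linear = 30 * 1        # thirty one-meter paces: 30 meters
exponential = 2 ** 30  # thirty doublings: 1,073,741,824 meters, about a billion
print(linear, exponential)
```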
The other option, that we are the technological front-runners (i.e., that we started broadcasting and listening before any other species in our radio-wave cone could do the same), doesn't feel right either. It's too anthropocentric. Any theory that puts us at the center of the universe, or gives us special rights, privileges, or placement, is best avoided, because science has a long and humbling history of showing us to be on the outskirts rather than in the middle of things. First came the discovery that the Earth orbits the Sun (a heresy that could get you killed back in the day), then the observation that the Sun is out in the 'burbs, then the observation that the Milky Way isn't a very big deal (some of them fuzzy stars up there are entire galaxies! Oh, wow, a LOT of them are), and more recently the suggestion that there may well be an infinite number of universes.
Paul Draker knows this, which is why he doesn’t have an answer for the Ferminator Paradox.
I gamely replied that there’s a simple answer: We are alone.
Now, this view is heretical to secularists, for the exact anthropocentric reasons stated above. Any time we claim uniqueness or specialness, we make a discovery that puts us in our place. And yet, the Fermi and Ferminator Paradoxes beg for an answer. And it could be that the most obvious one is correct.
Don't tell this to those with an empirical bent. For them, the Drake equation (even though none of the numbers have been filled in) has resolved the question. There are a shit-ton of aliens zipping around in spaceships all over the universe. We just aren't looking hard enough or in the right places.
But I believe the Drake equation may be nullified by the Ekard equation, which looks at the very long odds of all the preconditions for technological intelligence lining up. That is, rather than start with a very big number (the stars in the Milky Way) and pare it down, the Ekard equation comes at the question from the other direction, totaling up the long odds against advanced civilization and then applying those odds to the number of stars in the Milky Way.
The two equations do the exact same thing. But one is a bunch of guesses by an optimist, and the other a bunch of guesses by a realist (or pessimist, take your pick).
The Ekard equation might include the following long odds, with the stipulation that many more remain unknown:
• We must first ignore all stars too close to the galactic core (the galaxy itself has a “Goldilocks Zone”).
• We must then ignore all stars that are either too young or too old.
• We must then ignore almost all binary and trinary star systems (which may be more than half of all star systems. They present tricky orbits and wild seasons).
• Then ignore systems without a planet in the habitable zone.
• Then ignore almost all systems without a Jupiter-like gas giant to clear out impactors.
• Then ignore most systems without a massive moon paired to the target planet (a large moon may be needed both to help with impactors and to drive tides, and tides may be necessary for life to transition to land).
• Then ignore all systems without terrestrial life. (Not going to tame fire underwater).
• Then ignore all systems without dextrous life. (Opposable thumbs and/or tentacles required.)
Those are just a few of the long odds. The Ekard equation leans on the Anthropic Principle, which points out that the universe is, of course, set up for intelligent life, because here we are to observe and comment on it. But it goes one step further: since intelligent life isn't everywhere we look (the Ferminator Paradox), perhaps anything really bizarre about our solar system is a precondition for the incredible luck of intelligent life. Take the moon, for instance.
The moon is pretty bizarre. It took a glancing blow between two proto-planets to make it. A direct hit, and you'd have an asteroid belt (or the re-accretion of a single planet). Too glancing a blow, and the debris would never have been captured into orbit to coalesce. If a large moon plays a significant role in the evolution of intelligent life (and it might), then you could be looking at one-in-a-trillion odds right there. Which means this one supposition alone would account for the dearth of intelligent life in the Milky Way. In fact, one-in-a-trillion odds would mean that only one out of every three or ten galaxies has a single soap-opera-broadcasting race!
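To check that back-of-the-envelope claim, divide the stars in a galaxy by a trillion and invert:

```python
# If moon-like luck really is a one-in-a-trillion filter, how many galaxies
# does it take, on average, to find a single broadcasting civilization?
for stars_per_galaxy in (100e9, 400e9):
    civs = stars_per_galaxy / 1e12
    print(f"{stars_per_galaxy:.0e} stars/galaxy -> one civilization per ~{1 / civs:.1f} galaxies")
```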
And that’s just one of the odds. When you cut out the galactic core, you lose a lot of the stars the Drake Equation depends on. When you get rid of the binary and trinary systems, you lose half of what’s left. That 400 billion begins to disappear about as fast as if you left Donald Trump in charge of it.
Add in all the things we don't know about. Plus the observable odds: out of the hundreds of millions of species on the one planet we know harbors life, only one of them ever learned to smelt ore. And everyone knows that smelting ore is a precondition for soap operas. Now consider the odds that organisms can become sufficiently advanced to master technology without first destroying their habitat. Plus the odds that an organism can be aggressive enough to dominate its biome without being so aggressive that it spends all its time warring with members of its own species. Plus the chances that attaining the highest levels of advancement probably means attaining the highest levels of cultural philosophy as well, which would include not making contact with organisms on their own journeys, not seeding the galaxy with death stars, and not even broadcasting pathetic cries of loneliness out into the ether (besides soap operas).
Calculate these long odds, multiply them out, apply the odds to the number of stars in the Milky Way, and you probably get the number 1. These odds, of course, can be revised the moment we observe the number to be 2.
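Here's a minimal sketch of what that Ekard-style multiplication might look like, using the preconditions listed above. Every probability is a placeholder I've invented purely for illustration; swap in your own pessimism as desired.

```python
# Ekard-style calculation: multiply long odds for each precondition,
# then apply the combined odds to the Milky Way's stars.
# Every value here is an invented placeholder, not a measurement.
preconditions = {
    "outside the galactic core":          0.5,
    "star neither too young nor too old": 0.5,
    "not a binary or trinary system":     0.5,
    "planet in the habitable zone":       0.1,
    "Jupiter-like impactor sweeper":      0.1,
    "large, tide-raising moon":           1e-6,
    "terrestrial (fire-taming) life":     0.1,
    "dextrous, tool-using life":          0.01,
}

odds = 1.0
for p in preconditions.values():
    odds *= p

stars_in_milky_way = 400e9
print(f"Combined odds per star: {odds:.2e}")                       # 1.25e-12
print(f"Expected civilizations: {stars_in_milky_way * odds:.1f}")  # ~0.5, i.e. order one
```

With these particular guesses, the 400 billion collapses to roughly one civilization. Swap in the one-in-a-trillion moon from above and even the whole galaxy comes up empty.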
In the end, the Drake Equation reminds me of all the Popular Science magazines with talk of time travel on their covers. What you have here are fans of science fiction who, of course, pursued careers in the sciences and took their wildest fantasies with them. I have the same biases and fantasies, which is why I try to guard against them. What we lack right now is enough people of a secular bent considering a possibility that normally arises from theologians: We really could be alone in the Milky Way. When it comes to broadcasting soap operas into outer space, that is.
Don’t get me wrong. I’m open to the idea of there being other intelligent life out there. I think it’s a high probability. But I think it’s a stance rarely challenged by those of us who want it to be true. Of non-intelligent life, I’m of the opinion that it exists practically anywhere there’s water. I see life as being a direct consequence of chemistry. I bet there were self-replicating chemical strands within thousands of years of the Earth forming (in fact, I think there was life here while the Earth was still forming, being bombarded with impactors and getting walloped by the moon. Just not very stable life, as it was constantly disrupted and starting over).
Also: I don't think we are special, or that the universe was created for us. There are a lot of galaxies, and possibly an infinite number of universes, so there is very likely an infinite number of technological civilizations out there. But possibly not in the Milky Way. And anything not in the Milky Way is effectively cut off from us forever. The space between the galaxies is expanding (and that expansion is accelerating) faster than we'll ever be able to cross those gaps. We're relegated to our local cluster. And yeah, I'm ignoring the screams from readers saying "warp drives, dude!" Warp drives and time travel alike are crushed by the Ferminator Paradox. Einstein set the speed limit for the universe, and science fiction fantasies of traversable wormholes spring from the same sort of longing and loneliness that causes us to invent religions. Those theories and our gods come from the same desire and all have the same amount of evidence. We should guard against such impulses.
So where does that leave us? With one more option, and I believe it’s the most likely option, and I think it saves us from any technological limits, any fears of being alone in the Milky Way (much less the universe), and is based on sound theory and not wishful thinking. The longer I think about this theory, the more “right” it feels to me. I doubt it has ever come up before, because it requires a strange mix of both technological and moral optimism, and these two things rarely go hand-in-hand (moral optimism is rarely found on its own).
I call this hypothesis “The Silver Years.” It posits that civilizations have an extremely narrow window in which they are sufficiently advanced to dominate the universe and still assholes enough to believe this is a good idea.
The name comes from the observed phenomenon wherein Olympic silver medalists are less satisfied than bronze medalists. Silver medalists, you see, are just a step from gold. While bronze medalists were that close to not being Olympic medalists at all!
I know of an author who debuted at #2 on the NYT list. He and I share a publicist. He got the news while on book tour, and was despondent for the rest of the tour. Depressed. Had he debuted at #15, just making the list, experiments suggest that he would’ve been buoyed. Ecstatic, even.
This observation also applies to people moving up in their careers. Once you are aware of the phenomenon, you see it everywhere. People are often assholes while gaining a bit of success but before they get to a place where they are fully satisfied. Middle-management, if you will. The Silver Phase is this middle ground of jerktitude, and if it is left behind, it is usually either by attaining more or by realizing that what you have is enough.
It seems likely to me that intelligent races would go through something similar to this Silver Phase. That is, any race sufficiently advanced to dominate a galaxy will, within a few hundred years of gaining that ability, give up the fantasy of galactic domination.
This must sound downright insane to really smart and sane people. Because everything we know about life is that it spreads and spreads and fills every niche. Morpheus gets this lecture in The Matrix, when Agent Smith compares humanity to a virus. Those who are most optimistic about our technological progress are equally pessimistic about our natural history. They understand where we came from, nature red in tooth and claw. Hence the portrayal of the mad scientist as a trope. Hence the presentation of all alien visitors as marauding conquerors. Would history give us any reason to suspect otherwise?
I believe it does. Take Steven Pinker's recent work and TED talk on the decline of violence. Our moral sphere keeps expanding, enveloping previously trodden-upon sectors of life. Here's a truth not espoused often enough: Moral progress FOLLOWS scientific progress, rather than the other way around.
As we discover more about ourselves, each other, animals, the planet, it affects our empathy drive and our circles of inclusiveness. Science is the font of ethics. It has always been this way. Science pushes forward, and conservatism and religion attempt to keep us in the old ways of doing things (this is not a condemnation of religion. It is a condemnation of our innate tendencies toward conservatism, which is one of the things that drives religious institutions and ways of thought).
So: Scientific discovery propels us toward the ABILITY to seed the universe, but then advanced morality rushes in its wake like a trailing shockwave, and global consciousness realizes that the drive for expansion and the insatiable curiosity that drove us to this point will also be our downfall, and the destruction of our biome (as it largely has been).
We see this in environmentalism. Technology creates destruction, and then our ethics mature in the wake, and we use different technology to mitigate or undo that destruction. Many of our bodies of water are cleaner now than they were a hundred years ago. Forests are returning. We will one day begin to undo the effects of global warming (global CO2 emissions held flat during a period of economic growth for the first time since such measurements have been made). The fact is that we do care, but that caring tends to lag behind our rush to be creative, to push boundaries, to consume, and so on.
This gap between scientific progress and ethical awareness is what I call our Silver Years. How long this period lasts is up for debate (many would debate whether it even exists, but those people would have to explain how they can believe in limitless scientific progress alongside stunted moral progress, a mix of optimism and pessimism that flies in the face of human history).
I put the Silver Years at 200 to 500 years in duration. That's the window during which a civilization would have to set out to conquer the universe before realizing that such a goal is not only evil and folly, but that making contact at all, interfering with the progress of other lifeforms as they take their own journeys, is a tremendously horrible idea. Again, science (and science fiction) leads the way, as this was a principle (the Prime Directive) proposed in the original Star Trek. The goal is to observe, to know—to expand our knowledge, but not our influence or our destruction.
As a race, I see us growing ethically in countless ways. I see tolerance growing. And when doubters balk and point to modern injustices, what I see is a reaction to those injustices that would be unimaginable ten years ago, much less 100 or 1,000. Yes, we still see radicals beheading people for not agreeing with their ideologies. Not long ago, it was the King of England doing this, and people turned out to watch. The treatment of animals has vastly improved, and much of that comes with scientific progress. When we learn that chimps share more than 98% of our genes, we see them differently, and then we treat them differently. The knowledge comes first. Our right action lags.
None of this explains the lack of soap operas beaming at us from every star. The Ekard Equation likely has more to say on this front. But the Ferminator Paradox could also be explained with more uplifting news: We tend to behave better, as a race, with each passing generation. We grow more tolerant, more open-minded, more inclusive, more shocked at age-old horrors and determined to put an end to them. It just never happens quite as fast as we like, and our pace of innovation will always be faster.
But that doesn’t mean we won’t get there. And all of this means we might not go anywhere. Not beyond our solar system, anyway. By the time we can, we’ll realize it isn’t such a good idea. The ego and hubris that provided the tools will be tempered by the wisdom that follows in its wake.