The Ekard Equation and the Silver Years

Warning. What follows is the sort of stream-of-consciousness nonsense that I would blog more about if I knew no one visited this blog. You’ve been warned. Turn back now.

In the comments of my recent KDP is for Chumps post, a reader named Enabity, the author Paul Draker, and I got to debating the chances of artificial intelligence arising in the near future and writing award-winning novels. The three of us occupy different but slightly overlapping levels of optimism and pessimism on this front.

On the optimistic side, we have Paul Draker, the brilliant author of these books, who sees a computer passing the Turing Test by 2029. I’m assuming he means a real Turing Test and not the annual AI conference that holds a test by the same name. (Last year’s conference saw the nearest thing to a “win” yet, with the entry of a computer posing as a 13-year-old boy. A true Turing Test win would be a computer that hardly anyone would guess is not human, if allowed to converse with it).

Paul mentions picking 2029 because Ray Kurzweil popularized this date for the coming singularity (the day we all upload into a computer and join a collective consciousness). If Ray were in our conversation, making it a foursome, he would be the guy who thought the Jetsons lived just around the temporal corner. Brilliant man, but overly optimistic.

Enabity, on the other hand, sees very large unsolved problems for the development of AI and doesn't give a date for computers to write full-blown novels indistinguishable from human-authored novels. I'm guessing he would put this accomplishment hundreds of years from now.

For myself, I tend to be cautious with forecasts and would put the date that computers write entire novels indistinguishable from human-authored novels at 2040. (Incidentally, this is the subject of a current WIP of mine, The Last Storyteller. It’s also something I blogged about at length last year.) For full-blown AI, I’m guessing we’ll see it around 2100, give or take 15 years.

The conversation on AI soon slid from computers writing novels to the singularity, and Enabity made this thought-provoking point:

“If runaway proliferation of computers is in our future, why isn’t it in our past? It would take only about a million years for such a proliferation to completely occupy the Milky Way.”

This is profound stuff. Supremely profound. Enabity is saying that if our eventual unleashing of digital technology throughout the universe is guaranteed, and we are not alone in the universe, then why hasn't it already happened?

To put it another way: If we aren’t alone in the Milky Way, and technological advancement leads to inevitable galactic saturation, then the absence of galactic saturation might lead us to suspect:

1) Technological advancement is limited.
2) We are the current frontrunners in the Milky Way (or at least, we are within a million years of anyone else, which is a dead heat over the course of billions of years).

Paul quickly and hilariously dubs Enabity's point the "Ferminator Paradox," which cleaned my monitor up good. This is a mash-up of one of the great film franchises of all time (though they are working on destroying that with every release beyond the fourth film). In fact, a great film idea: a movie studio sends a robot back in time to prevent various sequels and prequels from destroying the money-making potential of once-classic films. I know a few I'd go after.

Sorry. Right. So the Ferminator Paradox mashes up the Terminator plot of unstoppable AI with the Fermi Paradox, which is the observation that if advanced life is so common, then where the hell is it?

That brings us to the Drake equation, the worst bit of math you've ever encountered (at least until I reveal its rival equation in just a few moments). The Drake equation starts with the absurd number of stars in the Milky Way (100 to 400 billion) and whittles them down by various probabilities to get at the number of sufficiently advanced civilizations that are currently beaming signals (like soap operas) out into space.

This is the number of stars we should be able to point a dish at and hear something. (Of course, they’ll have to have started beaming long enough ago for their signals to get here. Space is big. REALLY big.)

Every part of the Drake equation is guesswork. But the point of the equation is that the initial number (the number of stars in the Milky Way) is so high that it hardly matters. Make any guess as to the various probabilities (how many stars have planets, how many of those planets have water, etc.), and you still get a very large number at the end. So space should be full of aliens dying on their deathbeds while their spouses make love with their brothers and sisters and everyone finds out about the illegitimate kid they had twenty years ago who has been in Mexico all this time but has now returned, is gorgeous in every possible way, but can't act very well.
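
To make the whittling concrete, here's a minimal sketch in Python. Every factor value below is a number I've invented for illustration (and this is the stars-times-fractions rearrangement rather than the canonical rate-times-lifetime form), but that's rather the point: pick almost any plausible-sounding guesses and the answer stays large.

```python
# A minimal sketch of Drake-style whittling. Every factor below is an
# invented guess; swap in your own and the answer tends to stay big.
stars_in_milky_way = 200e9   # somewhere between 100 and 400 billion

n_broadcasting = (stars_in_milky_way
                  * 0.5      # fraction of stars with planets
                  * 0.2      # ...with a planet in the habitable zone
                  * 0.5      # ...on which life actually arises
                  * 0.1      # ...where life becomes intelligent
                  * 0.1      # ...where intelligence builds radios
                  * 1e-4)    # fraction of the star's life spent broadcasting

print(f"{n_broadcasting:,.0f} soap-opera civilizations")  # 10,000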

Fermi’s Paradox and Drake’s Equation have something to say to one another. It would be a bit of a heated discussion, in fact. What’s interesting is that Fermi’s Paradox came first. Fermi posed his conundrum in the 50s. Drake penned his reply in the 60s. Drake was one of the first to begin an active hunt for signs of intelligent life elsewhere, what became known as SETI. There have been a pile of optimistic searchers ever since, like Carl Sagan, and their pronouncements have all been wrong. So far, Fermi wins. Which is scary stuff, right? We keep listening to the darkness, and we haven’t heard a peep.

Now let's get back to the three-way conversation in the comments of that KDP post: Paul Draker's response to Enabity, on why we don't already see a proliferation of computers throughout the universe, is that he doesn't have an answer. I've already posited two of the obvious possibilities, which are that technology has asymptotic limits and/or we are the current frontrunners in the Milky Way. But neither of these is satisfactory. There are no indications that technology has limits that would prevent AI or even biological immortality. Instead, we are seeing exponential progress where once it was linear.

(In a recent article, sent to me by Paul Draker, Ray Kurzweil makes this point: thirty paces taken linearly gets you 30 meters of distance. Thirty paces taken exponentially, doubling your stride with each step, gets you a BILLION meters. We are currently witnessing exponential expansion of technology.)
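
The arithmetic is easy to sanity-check:

```python
linear = 30 * 1          # one meter per pace: 30 meters
exponential = 2 ** 30    # doubling the stride each pace: 1,073,741,824 meters
```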

The other option, that we are the technological front-runners (i.e., that we started broadcasting and listening before any other species in our radio-wave cone could do the same), doesn't feel right either. It's too anthropocentric. Any theory that puts us at the center of the universe, or gives us special rights, privileges, or placement, is best avoided, because science has a long and humbling history of showing us to be on the outskirts rather than the middle of things. First came the discovery that the Earth orbits the Sun (a heresy that could get you killed back in the day), then the observation that the Sun is out in the 'burbs, then the observation that the Milky Way isn't a very big deal (some of those fuzzy stars up there are entire galaxies! Oh, wow, a LOT of them are), and most recently the suggestion that an infinite number of universes is likely.

Paul Draker knows this, which is why he doesn’t have an answer for the Ferminator Paradox.

I gamely replied that there’s a simple answer: We are alone.

Now, this view is heretical to secularists, for the exact anthropocentric reasons stated above. Any time we claim uniqueness or specialness, we make a discovery that puts us in our place. And yet, the Fermi and Ferminator Paradoxes beg for an answer. And it could be that the most obvious one is correct.

Don't tell this to those with an empirical bent. For them, the Drake equation (even though none of the numbers have been filled in) has resolved the question. There are a shit-ton of aliens zipping around in spaceships all over the universe. We just aren't looking hard enough or in the right places.

But I believe the Drake equation may be nulled by the Ekard equation, which looks at the very long odds of certain preconditions for technological intelligence to arise. That is, rather than start with a very big number (stars in the Milky Way) and pare it down, the Ekard equation arrives at an answer from the other direction, by totaling up the long odds against advanced civilization and then applying them to the number of stars in the Milky Way.

The two equations do the exact same thing. But one is a bunch of guesses by an optimist, and the other a bunch of guesses by a realist (a pessimist, if you prefer).

The Ekard equation might include the following long odds, while stipulating that many more are unknown:

• We must first ignore all stars too close to the galactic core (the galaxy itself has a “Goldilocks Zone”).
• We must then ignore all stars that are either too young or too old.
• We must then ignore almost all binary and trinary star systems (which may be more than half of all star systems; they present tricky orbits and wild seasons).
• Then ignore systems without a planet in the habitable zone.
• Then ignore almost all systems without a massive outer planet, like Jupiter, to clear out impactors.
• Then ignore most systems without a massive moon paired to the target planet (which may be necessary both for deflecting impactors and for driving tides; tides may be necessary for life to transition to land).
• Then ignore all systems without terrestrial life. (Not going to tame fire underwater).
• Then ignore all systems without dextrous life. (Opposable thumbs and/or tentacles required.)

That's just a few of the long odds. The Ekard equation operates somewhat from the Anthropic Principle, which points out that the universe is of course set up for intelligent life, because here we are to observe and comment on it. But it goes one step further by assuming that since intelligent life isn't everywhere we look (the Ferminator Paradox), perhaps anything really bizarre about our solar system is a precursor to the incredible luck of intelligent life.

The moon, for instance, is pretty bizarre. It took a glancing blow between two proto-planets to make it. A direct hit, and you'd have an asteroid belt (or the re-accretion of a single planet). Too glancing a blow, and you wouldn't have gravitational capture. If a large moon plays a significant role in the evolution of intelligent life (and it might), then you could be looking at one-in-a-trillion odds right there. Which means this one supposition alone would account for the dearth of intelligent life in the Milky Way. In fact, one-in-a-trillion odds would mean that only one out of every three or ten galaxies has a single soap-opera-broadcasting race!

And that’s just one of the odds. When you cut out the galactic core, you lose a lot of the stars the Drake Equation depends on. When you get rid of the binary and trinary systems, you lose half of what’s left. That 400 billion begins to disappear about as fast as if you left Donald Trump in charge of it.

Add in all the things we don't know about. Plus the observable fact that out of hundreds of millions of species on the one planet we know harbors life, only one of them learned to smelt ore. And everyone knows that smelting ore is a precondition to soap operas. Now consider the odds that organisms can become sufficiently advanced to master technology without first destroying their habitat. Plus the odds that an organism can be aggressive enough to dominate its biome without being so aggressive that it spends all its time warring with members of its own species. Plus the chance that attaining the highest levels of technological advancement also means attaining the highest evolvement of cultural philosophy, which would include not making contact with organisms on their own journeys, nor seeding the galaxy with death stars, nor even broadcasting pathetic cries of loneliness out into the ether (besides soap operas).

Calculate these long odds, multiply them out, apply the odds to the number of stars in the Milky Way, and you probably get the number 1. These odds, of course, can be revised the moment we observe the number to be 2.
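
Here's what that tally might look like as a sketch. Every probability below is a number I've invented purely to show the shape of the argument; only the multiplication matters:

```python
# A sketch of the Ekard equation. Each long-odds factor below is an
# invented placeholder, not a measurement; the point is that enough
# long odds multiplied together melt 400 billion stars down to ~1.
stars = 400e9

long_odds = {
    "outside the galactic core":          0.2,
    "star neither too young nor too old": 0.5,
    "not a binary/trinary system":        0.4,
    "planet in the habitable zone":       0.1,
    "Jupiter-like impact shield":         0.1,
    "massive moon from a glancing blow":  1e-6,
    "life makes it onto land":            0.1,
    "dextrous, fire-taming species":      0.01,
}

p = 1.0
for condition, odds in long_odds.items():
    p *= odds

print(f"expected broadcasting civilizations: {stars * p:.2f}")
# With these invented numbers: 0.16, i.e., roughly one soap-opera
# broadcaster per half-dozen Milky Way-sized galaxies.
```

Note that the moon factor alone does most of the damage here, which is why that one supposition carries so much weight in the paragraphs above.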

In the end, the Drake Equation reminds me of all the Popular Science magazines with talk of time travel on their covers. What you have here are fans of science fiction who of course pursued careers in the sciences and took their wildest fantasies with them. I have the same biases and fantasies, which is why I try to guard against them. What we lack right now is enough people of secular bent considering the possibility that normally arises from theologians: we really could be alone in the Milky Way. When it comes to broadcasting soap operas into outer space, that is.

Don’t get me wrong. I’m open to the idea of there being other intelligent life out there. I think it’s a high probability. But I think it’s a stance rarely challenged by those of us who want it to be true. Of non-intelligent life, I’m of the opinion that it exists practically anywhere there’s water. I see life as being a direct consequence of chemistry. I bet there were self-replicating chemical strands within thousands of years of the Earth forming (in fact, I think there was life here while the Earth was still forming, being bombarded with impactors and getting walloped by the moon. Just not very stable life, as it was constantly disrupted and starting over).

Also: I don't think we are special, or that the universe was created for us. There are a lot of galaxies, and possibly an infinite number of universes, so there is very likely an infinite number of technological civilizations out there. But possibly not in the Milky Way. And anything not in the Milky Way is effectively cut off from us forever. The space between galaxies is expanding (and that expansion is accelerating) faster than we'll ever be able to cross those gaps. We're relegated to our local cluster. And yeah, I'm ignoring the screams from readers saying "warp drives, dude!" Warp drives and time travel alike are crushed by the Ferminator Paradox. Einstein set the speed limit for the universe, and science fiction fantasies of traversable wormholes are the same sort of longing and loneliness that cause us to invent religions. Those theories and our gods come from the same desire, and all have the same amount of evidence. We should guard against such impulses.

So where does that leave us? With one more option, and I believe it’s the most likely option, and I think it saves us from any technological limits, any fears of being alone in the Milky Way (much less the universe), and is based on sound theory and not wishful thinking. The longer I think about this theory, the more “right” it feels to me. I doubt it has ever come up before, because it requires a strange mix of both technological and moral optimism, and these two things rarely go hand-in-hand (moral optimism is rarely found on its own).

I call this hypothesis “The Silver Years.” It posits that civilizations have an extremely narrow window in which they are sufficiently advanced to dominate the universe and still assholes enough to believe this is a good idea.

The name comes from the observed phenomenon wherein Olympic silver medalists are less satisfied than bronze medalists. Silver medalists, you see, are just a step from gold. While bronze medalists were that close to not being Olympic medalists at all!

I know of an author who debuted at #2 on the NYT list. He and I share a publicist. He got the news while on book tour, and was despondent for the rest of the tour. Depressed. Had he debuted at #15, just making the list, experiments suggest that he would’ve been buoyed. Ecstatic, even.

This observation also applies to people moving up in their careers. Once you are aware of the phenomenon, you see it everywhere. People are often assholes while gaining a bit of success but before they get to a place where they are fully satisfied. Middle management, if you will. The Silver phase is this middle ground of jerktitude, and if it is left behind, it is usually either by attaining more or by realizing that what you have is enough.

It seems likely to me that intelligent races would go through something similar to this Silver phase. That is, any race sufficiently advanced to dominate a galaxy will, within a few hundred years, give up that fantasy of galactic domination.

This must sound downright insane to really smart and sane people. Because everything we know about life is that it spreads and spreads and fills every niche. Morpheus gets this lecture in The Matrix, when Agent Smith compares humanity to a virus. Those who are most optimistic about our technological progress are equally pessimistic about our natural history. They understand where we came from, nature red in tooth and claw. Hence the portrayal of the mad scientist as a trope. Hence the presentation of all alien visitors as marauding conquerors. Would history give us any reason to suspect otherwise?

I believe it does. Take Steven Pinker's recent work and TED talk on the decline of violence. Our moral sphere keeps expanding, enveloping previously trodden-upon sectors of life. Here's a truth not espoused often enough: moral progress FOLLOWS scientific progress, rather than the other way around.

As we discover more about ourselves, each other, animals, the planet, it affects our empathy drive and our circles of inclusiveness. Science is the font of ethics. It has always been this way. Science pushes forward, and conservatism and religion attempt to keep us in the old ways of doing things (this is not a condemnation of religion. It is a condemnation of our innate tendencies toward conservatism, which is one of the things that drives religious institutions and ways of thought).

So: Scientific discovery propels us toward the ABILITY to seed the universe, but then advanced morality rushes in its wake like a trailing shockwave, and global consciousness realizes that the drive for expansion and the insatiable curiosity that drove us to this point will also be our downfall, and the destruction of our biome (as it largely has been).

We see this in environmentalism. Technology creates destruction, and then our ethics mature in the wake, and we use different technology to mitigate or undo that destruction. Many of our bodies of water are cleaner now than they were a hundred years ago. Forests are returning. We will one day begin to undo the effects of global warming (global CO2 emissions held steady during an economic recovery for the first time since such measurements have been made). The fact is that we do care, but that caring tends to lag behind our rush to be creative, to push boundaries, to consume, and so on.

This gap between scientific progress and ethical awareness is what I call our Silver Years. How long this period lasts is up for debate (many would debate whether it even exists, but these people would have to explain how they can believe in limitless scientific progress and stunted moral progress, a mix of optimism and pessimism that flies in the face of human history).

I put the duration of the Silver Years at 200 to 500 years. That's the window during which a civilization would have to set out to conquer the universe before realizing that such a goal is not only evil and folly, but that making contact at all, interfering with the progress of other lifeforms as they take their own journeys, is a tremendously horrible idea. Again, science (and science fiction) leads the way, as this was a principle proposed in the original Star Trek: the Prime Directive. The goal is to observe, to know, to expand our knowledge, but not our influence or our destruction.

As a race, I see us growing ethically in countless ways. I see tolerance growing. And when doubters balk and point to modern injustices, what I see is a reaction to those injustices that would be unimaginable ten years ago, much less 100 or 1,000. Yes, we still see radicals beheading people for not agreeing with their ideologies. Not long ago, it was the King of England doing this, and people turned out to watch. The treatment of animals has vastly improved, and much of that comes with scientific progress. When we learn that chimps share more than 98% of our genes, we see them differently, and then we treat them differently. The knowledge comes first. Our right action lags.

None of this explains the lack of soap operas beaming at us from every star. The Ekard Equation likely has more to say on this front. But the Ferminator Paradox could also be explained with more uplifting news: We tend to behave better, as a race, with each passing generation. We grow more tolerant, more open-minded, more inclusive, more shocked at age-old horrors and determined to put an end to them. It just never happens quite as fast as we like, and our pace of innovation will always be faster.

But that doesn’t mean we won’t get there. And all of this means we might not go anywhere. Not beyond our solar system, anyway. By the time we can, we’ll realize it isn’t such a good idea. The ego and hubris that provided the tools will be tempered by the wisdom that follows in its wake.


29 responses to “The Ekard Equation and the Silver Years”

  1. Just wanted to say this has been my favorite post written by you in ages. You should do it again sometime. Thanks, Hugh.

  2. Hugh, that was some pretty fantastic “stream-of-consciousness nonsense.” While I love your posts about the wide world of writing and publishing, I’ll read (and re-read) one of these any time you care to write one.

  3. Hugh! I could talk about this stuff all day long, but just had to drop by and mention that I think you're on the right track but a bit optimistic about humanity's chances of surviving the singularity.

    Unless we somehow build in ways to curtail it, an AI will evolve exponentially faster than biological evolution and quickly realize that humans are superfluous. Once we’re gone, an AI can continue to exist, essentially creating its own “world” on the virtual level and pretty much ignoring any emerging puny humanoid societies…until eventually the AI dies and the cycle repeats.

    I think your Silver Years are actually Silver Windows, where civilizations (people-based and AI-based) come and go in waves, not necessarily intersecting with each other, each one providing a unique window on their unique world/biology/culture.

    Fascinating stuff! See this for another take: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

    Thanks for the fun discussion!
    CJ

    1. If AI is smarter than us, then it will also be wiser than us. And more ethical.

      We fear a “monster-in-the-machine” scenario, but it’ll more likely be a “saint-in-the-contraption.”

      1. Frank Ch. Eigler

        "If AI is smarter than us, then it will also be wiser than us. And more ethical."

        What does that make of the Silo nanobots?

      2. And yet, a lot of self-directed learning proceeds by observation and emulation. An AI can only exceed human potential through self-directed learning. But the very first AI will have role models that are all too human… and those humans will be modeling behavior for it, subconsciously as well as consciously. Computers take what we teach them, and then do it infinitely faster and with infallible precision. Kind of deliciously scary when you think about it, no?

        The stitched-together supercomputer in my novel Pyramid Lake is called “Frankenstein” for a reason… ;)

  4. Interesting post, Hugh. But why do you characterize expanding beyond our solar system as evil? I can understand it would be evil to conquer other sentient races, or maybe even irresponsible to interfere in their own progression. But why would it be evil to colonize the countless other solar systems that could support life?

    Going back to your morality argument: is it moral to let people die if we come up with a solution to prevent death? Because eventually we will have the means to make people effectively immortal. At that point, is it moral to stop people from having more kids? If not, then population will continue to balloon and our solar system won't be sufficient to house the human population. In light of that, it seems immoral not to expand to other solar systems.

    Also, given the singularity argument, many scientists have posited that eventually AI and computer technology will allow us to create perfect simulations of reality. Inside those simulations, life could progress to the point where it creates AIs that create simulations within the simulations (without knowing they are in fact inside a simulation). Assuming these types of iterations continue on indefinitely, the conjecture that follows is that we are most likely living inside one of those simulations (rather than the unique first run through reality).

    So maybe we haven't seen any other intelligent life in our universe because that is the way the simulation was written. Of course, the fact that the simulation would be written that way suggests those were probably the conditions that existed in the original, real universe.

    Okay, so now that my head is about to explode – the real question is – if we are in a simulation then how come I don’t have powers like Neo in the Matrix. Damn programmers.

    1. Look what you’ve done, Daniel, you made me write another blog post!

      https://hughhowey.com/thats-not-us-in-the-future/

    2. …the real question is – if we are in a simulation then how come I don’t have powers like Neo in the Matrix. Damn programmers.

      Sorry, Daniel. Let me fix that for you. *grabs keyboard, starts coding*

  5. So much to discuss! I love it. But in the interest of time, I shall focus on your last bit about space travel. To quote a favorite person of mine (Randall Munroe, author of XKCD…surely you read his comics?) — "The universe is probably littered with the one-planet graves of cultures which made the sensible economic decision that there's no good reason to go into space – each discovered, studied, and remembered by the ones who made the irrational decision."

    We don’t need to conquer or dominate anything, but I sure as shit hope we at least try and get out there. Pursuit of knowledge, adventure, the unknown, yadda yadda. In my mind, there are few things worse than contentment, especially on a planetary scale. We must always stay hungry.

  6. In David Brin’s _Existence_, he has multiple waves of aliens showing up in the solar system. They are sitting in the asteroid belt now, reading the internet, posting on blogs, and waiting for us to develop enough to be worth “officially” contacting.

    Everybody wave, "Hi", to the nice aliens.

  7. […] blog post is a response to a comment, made on a blog post, that was also made in response to a comment, made on this blog […]

  8. Maybe the answer to Enabity is that AI is already here. We just aren't aware; i.e., the eventual unleashing of digital technology throughout the universe already happened.

  9. Highly advanced civilizations go silent due to low-power "small cell" or direct laser transmissions designed to maximize bandwidth. This makes them invisible. We'll probably be electromagnetically invisible in the relatively near future.

    The gist of my position is that we've always been overly optimistic about the timetable for coming technology's milestones. The current expansion we're seeing (in civilization and in computing) is not necessarily characteristic of the future. We've been picking the low-hanging fruit. There will be organizational walls in the way, insurmountable without a completely different approach for the next leg up. Bigger brains require more non-processing white matter to connect everything together. The same is occurring in computing, which is forcing exascale computing toward a memory- and bandwidth-centric rather than processing-centric model. These hierarchical jumps require a reversal of course with no progress, then provide a wonderful leap forward that deludes those inside into thinking it will happen forever.

    What I’ve found in my own experience with computers is that it’s far more desirable to have a smart tool that I can control than one that does something all on its own. More importantly, it’s a lot easier to produce. Human + computer will most probably beat computer well beyond 2029. This doesn’t mean that a computer won’t be able to write something readable from a simplistic template in 2029. It’s very possible that a lot of the smallest niches in the market will be filled by “good enough” targeted to a very narrow audience. We’re already seeing plot being optimized down to 1. present goal 2. complicate goal 3. overcome complication. These are the kinds of books computers will write first, but it will not mean that human writers won’t matter. What might happen is that writing will become abstracted from the words themselves.

    At the end of the day, my position is more about what I think we don’t know than what we know.

    1. Y'know, I thought of an answer to the Ferminator Paradox today. But I arrived at the party late, and I see that enabity has beaten me to the answer already. :)

      I agree completely with enabity's point that truly advanced civilizations (biotic or machine) will exhibit a vastly reduced electromagnetic footprint. But I think it goes beyond that. Consider the trend in today's (still-primitive) computing and communications hardware: increasing exponentially in bandwidth and processing power, but at the same time shrinking in size and consuming less and less energy. A super-advanced machine intelligence will rapidly follow that footprint-reduction trend all the way down to maximum energy efficiency, until its thought-computations are happening at the atomic level, or below. It will literally vanish into the fabric of the universe.

      Looking for high-energy radio broadcasts and hydrazine-fueled rockets, failing to find them, and concluding that the universe is empty is perhaps somewhat blind — sort of like a nineteenth century industrialist sailing past a twenty-first century eco-resort and concluding that there can be no civilized life there, because he sees no visible strip mines, clear-cut forests, or rising plumes of black smog.

      Perhaps that AI takeover of the universe already happened millions of years ago, and machine life is humming away busily all around us, invisible and unseen. There is, after all, the weird question of unexplained “dark energy,” which invisibly makes up the vast majority of mass in the universe. What if… ;)

    2. The gist of my position is that we’ve always been overly optimistic about the timetable for the coming technology’s milestones.

      Enabity, I think when it comes to each new technological advancement, our human brains are wired to overestimate how rapidly it will proliferate into our daily lives in the short term, but vastly underestimate how dramatically that technology will change the world when we look slightly farther out.

      It's probably because we are used to projecting linearly, whereas technology advances exponentially. So it undershoots our expectations at first, and then it wildly overshoots them.

      1. Agreed. We also wait impatiently for the wrong advance and then are blown away by an unforeseen one.

        I have never understood the fascination with flying cars. Never made sense to me. Battling gravity is inefficient compared to momentum and wheels, and mistakes in air are far more disastrous.

        But self-driving cars? Where were the Pop-Sci mags begging for these 50 years ago? And they’re here. And people seem to take that for granted. Remember the first contests with these puppies, just a decade ago? Massive failures. People said it would never happen. And then BOOM.

        Self-driving cars are going to restructure society similar to how cell phones and the internet restructured society. And people won't appreciate this until AFTER it happens. Which will allow them to persist in their belief that change happens slowly, because they remain focused on the wrong sort of change.

        1. Word.

          I live near Google’s main campus, and I see those self-driving cars in the lane next to me every day. The most remarkable thing about them? No one notices them anymore. A decade ago, they were a futuristic DARPA project. But now, they’re boring.

          1. Of course, it'll get unboring quick if you're stuck in traffic behind one of those Google robot cars, and you get angry and start honking at it. And the car door swings open and this guy steps out. :-D

  10. Hugh! You need to find and read The Killing Star! It has one of the best and scariest reasons for galactic silence I’ve heard.

    The short of it is that antimatter drives make relativistic weapons a reality, so as soon as you spot the particular gamma-ray signature of antimatter drives, you have to shoot first to wipe out their planet; otherwise they could send something at .97c towards you (enough mass hitting a planet at near light speed is enough to permanently cripple your civilization). You can't dodge something moving almost as fast as light. And planets that support the infrastructure to make antimatter are easy targets.

    So either civilizations get wiped out after naively testing antimatter engines… or they are smart enough not to develop them. Civilizations hiding in the dark, playing chicken.

  11. Hugh, that was awesome and welcome in my corner of the Galaxy. More whenever it strikes you, please!

  12. BTW, global search/replace the word "logarithmic" with "exponential" everywhere in the blog post & comments — Hugh & I both need more sleep, apparently. :D

    1. Yeah, I do the same thing with “accuracy” and “precision,” even though I aced my physics labs and know the difference.

      My tongue is not connected to various other modules in my brain.

  13. Hugh,

    When it comes to us being alone in the galaxy, I think you’re right but I hope you’re wrong.

    1. I hope I’m wrong as well. Dangerous to let hope guide our understanding, isn’t it?

      1. It does seem to me, though, somewhat hubristic of us to assume that any decent intelligent alien civilization out there would be eager to talk to us. Maybe in the grand scheme of things, we’re not as interesting a species as we imagine we are. Maybe we’re the galactic equivalent of those annoying, noisy neighbors in 2B that everyone else tries to ignore. ;)

          1. More like we are forest-living savages that the city folk won't go near for fear of being eaten, because we don't believe in the oboo joobo magic they do.

  14. These posts have been fascinating, and I’m sorry I came late to this party and thread.

    As amazing and life-changing as AI has been, and will be even more so in the future, I think it's also terrifying, and the way that AI has already changed the way we live in America is starting to freak me out and make me nervous for the human race, especially the underprivileged.

    I'm not even talking about super-intelligent computers that can mimic humans and pass a Turing Test. I'm talking about the fact that when I go to the grocery store, I am urged to ring up my groceries myself in the self-checkout lanes, since there are maybe 2 or 3 actual humans helping customers as opposed to 10-15 blinking and beeping self-checkout lanes. I'm talking about how, when I make a phone call to pay my internet or utility bill, I have to push buttons and talk slowly and clearly to a robot to do so. AI is already making a lot of jobs totally obsolete.

    What’s going to happen to us when AI gets even more intelligent and can do more for us? Do we really want that to happen, just because it CAN happen? That’s the moral question I think we should all be asking when it comes to AI….

  15. Assuming alien A.I. is smarter than us, it surely would not come here, because we would kill it; that is our nature. Imagine if a little green man showed up and told us how life really began: how many people would call it Satan and nail it to a tree? But there are other missed explanations…
    The universe is 14 billion years old, which means there are probably planets a lot older than we are, but the problem is they are just so far away. The Drake equation uses the estimated total number of stars, not just the ones within a hundred light years of us, so the universe might be infested with life, just not around here.
    There is also the theory that once your brain gets big enough, you will not need a body to house your energy; you will find peace. Would advanced peaceful aliens want to leave that and come find us? Buddhist monks find peace in seclusion, not in big cities.
    curious thoughts
