The Future of AI and LLMs

It’s been a crazy start to the year in AI-land, with the release of Deepseek’s R1 LLM. The big news here is that a model trained for roughly $5 million is competitive with (and in some ways better than) models that required hundreds of millions to train. Ars has a cursory but illuminating comparison between Deepseek and OpenAI here.

There are thousands of hot takes and breakdowns elsewhere, and the Deepseek team released a paper on their methodology and open-sourced everything, so there’s plenty to dive into as other labs rush to replicate the findings (something now within the computational and financial means of many more). The markets are, of course, having a hissy fit. And there are political observations to make here, as Deepseek apparently has very little to say about Tiananmen Square.

I recently had someone ask me in the Silo subreddit what my views on AI are now, compared to when I wrote the SILO series or published my short story collection MACHINE LEARNING. I have some pretty bizarre views on AI and LLMs. Most folks drive into one of two ditches: “the Singularity is near!” or “AI is a dumb stochastic parrot.” I think both groups miss a few points. So here goes.

Intelligence is Asymptotic

Curves of AI’s future abilities remind me of growth curves for developing nations like India and China. I remember reading THE WORLD IS FLAT by Thomas Friedman back in the day, and about halfway through the book it hit me how completely wrong the central premise was. Growth curves bent off every page as Friedman extrapolated countries that were catching up into countries that would fly ahead. Little to no thought was given to the fact that the limiting factors in the US would be paralleled in these other economies and cultures. It was assumed that growth would rise hyperbolically forever, with none of the problems that come from growth, just the benefits.

Predictions of future intelligences make the same mistake. It’s just as likely that there is a limit to how much can be known and how many creative inferences one can make as it is that intelligence and knowledge are infinite. Neither claim has been tested, but one of them certainly has been widely assumed. Even if the space of what-there-is-to-know is infinite, the computational toll of some inferences might exceed the power of every star in the universe, or the remaining age of the universe in which to calculate them (for instance, there is likely some prime number so large that the entire universe, acting as a computer, could never calculate it).

Once we know these limits exist, the next question is where we are in relation to the asymptote of intelligence. Collectively, the 8+ billion brains wired together on Earth right now might be 80% of the way there. Or 95%. Or 5%. We don’t know, which makes curves like these, and all the commentary and promise about them, idle speculation.

LLMs Grabbed Low-Hanging Fruit

When OpenAI released GPT-3 in 2020, it created a seismic shift in the AI landscape. I was at a tech conference the week following the release, and some of the smartest people I knew were walking around in a daze, wondering what it meant and where things would go from here. I’ve never in my life seen so many bright futurists and prognosticators seem so lost (as a much dimmer futurist, I was in the same stupor myself).

Even back then, there was some speculation that the next phase shift in AI would require a different method or a LOT more compute. This was my early assumption: that we had just gotten tall enough to grab a bunch of low-hanging fruit. This relates to my previous point about the limits to intelligence and knowledge. It’s easy to mistake this sudden leap upward for a new velocity that we will maintain. It’s more likely that we just scampered onto a ledge and are now faced with a much higher cliff.

We Are LLMs

I think most of what we do as humans with language is what LLMs do. We confuse our own word prediction for creative and original thought, when the latter is much rarer than the former. And it’s very easy for creative and original thought to descend into absurdism for the sake of being avant-garde (our way of hallucinating). Humans spend a lot of our time delivering rote responses to the same verbal triggers. We deliver the same story in response to the same inciting incidents. Once you see this happening in others and yourself, you can’t stop seeing it. We are parrots in so many ways.

We are also wildly, spectacularly, unoriginal. This is why writers get sued for stealing the ideas of other writers: thousands of people have the same thoughts to create the same stories and characters independent of one another. It’s almost as if the thoughts, characters, situations, and plot lines have an existence outside of us (or collectively within us) and we just take turns expressing them in slightly different ways.

The dream that we would invent AI and it would teach us new things about the world will one day be supplanted with the realization that we created AI to learn ancient things about ourselves.

LLMs are US

What LLMs “know” is basically all the things we know, linked together by vectors and distance in a massive web of correlations that can be turned into raw numbers and then turned back again into language. This gets more powerful as it becomes recursive (allowing the LLM to take its output as input over and over to refine its “thinking”) and as you give it access to the web and other tools. But at its core, every LLM is trained on what humans already know, have written, and so on.
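To make “vectors and distance” a little more concrete, here’s a toy sketch in Python. The three-dimensional embedding table and the generate function are made up for illustration; real models learn vectors with thousands of dimensions:

```python
import math

# Toy embedding table: each word is a point in a made-up 3-D space.
# Real models learn these vectors, with thousands of dimensions.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.9, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Relatedness as distance: values near 1.0 mean closely linked concepts."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low

def refine(prompt, generate, rounds=3):
    """The recursive trick: feed the model's own output back in as input.
    `generate` here is a hypothetical stand-in for any LLM call, not a real API."""
    draft = generate(prompt)
    for _ in range(rounds):
        draft = generate("Improve this answer:\n" + draft)
    return draft
```

Scale that web of distances up a few billion-fold and you have the “knowledge”; run the loop and you have the “thinking.”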

Many folks present this as a limitation (the stochastic parrot people), but holy fuck, think about this for a moment. As the error rates of LLMs get pushed closer and closer to zero (this is already happening at an impressive clip), we will have a tool that we can communicate with in many different ways (voice, text, images, video), that has access to all of human knowledge, and that can synthesize results faster than we can in most cases.

This is the original dream of the universal thinking machines of Turing and Babbage. This is extreme science fiction. This changes everything. How we teach. How we work. How we nurture the next generation. How we grow old. The idea that AI has to surpass us in knowledge to change the world is clearly wrong. It’s enough to know what we know while being able to access and deliver that knowledge free from bias and error (something we will continue to improve).

Want to create a new application for your phone or computer? You won’t need to learn a programming language to do this. People with no programming skills are already making complex applications using the AIs currently available. Worried about mental health? There will be access to a 24/7 therapist who remembers everything you’ve told it, never confuses you for another client, is never tired, is always available, and improves over time. Wish you could afford a nanny for your child that would immerse them in five languages and teach them about logic and ethics instead of regurgitating facts or mindless entertainment? This is now possible. The future is limited only by our imaginations and values. Which leads me to…

We Will Probably Screw This Up

AI could do much to alleviate drudgery and suffering without causing economic upheaval and exacerbating income inequalities. It could … but it won’t. Because we will not choose this route. Instead, we will choose a route that causes more heartache than is necessary and provides fewer mental health benefits than it could, all while being as uncreative and immoral as humanly possible.

The reason for this is in those last two words: humanly possible. My wife once said the smartest thing I’ve heard anyone say about AI: “People are really bad at being human.” Most of us know the right thing to do in most situations, but that doesn’t make it easy to pull it off. New Year’s resolutions are an annual reminder. We have incredible brains, but they are hampered by hormones and glands and millions of years of evolutionary adaptation.

AI today can already tell us how we should manage our affairs as well as or better than we can — even ethically and spiritually. For instance, this AI-generated religion is more reasonable to me than any of the human-created ones I’ve sampled. AI can and will surpass us in ethics, but that doesn’t mean we will listen to it or act on it, any more than we act on our own conscience. We already have a small voice whispering the right thing to do, and the human thing is to largely ignore it.

And so… we will employ AI in a way that harms others, maximizes profits over well-being, destroys the planet, weakens our social fabric, confuses our wits and addles our senses, provides cheap entertainment rather than deep introspection, fosters tribalism and in-fighting, and exacerbates the widening gulf between the haves and the have-nots. We will do this even though if you ask AI what we should be doing, it’ll give you a much different and wiser response.

I asked Chat how we might use AI to make the world a better place, and it wrote this, which I think you could use to found a just society. It also gave this conclusion:

Using AI for good involves more than just developing clever algorithms; it requires a holistic approach that weaves together technological innovation, ethical practices, community engagement, and a clear vision of positive social and environmental impact. By combining human ingenuity with AI’s computational power, we can tackle pressing global challenges more effectively and equitably. The key is to place human wellbeing at the center of innovation—ensuring AI remains a tool that empowers, rather than exploits, and that drives us toward a fairer, healthier, and more sustainable future.

It’s a very human response and a very humane attitude. In fact, the problem of AI alignment you hear much wailing about is likely to be overshadowed by my wife’s observation, which is that we are often misaligned with our own self-interests. People are terrible at being human.

The next two points are less about AI, but worth mentioning:

The Pace of Change is Slowing

The pace of change in the world around us is slower today than it was 100 years ago. It’s now 2025, and the world is very similar to the world of 2000. Other 25-year jumps in the past century were far crazier politically, socially, technologically, and culturally: 2000 and 1975. 1975 and 1950. 1950 and 1925. 1925 and 1900. An example: it was 66 years between the Wright brothers taking the first powered flight and Apollo 11 landing on the Moon. It’s been 56 years since then.

The early 20th century was one of plucking low-hanging mechanical fruit. The early 21st century has been one of plucking low-hanging digital fruit. It’s not clear to me that the latter is more significant than the former. We engineered our way out of physical poverty in the 20th century. We are now engineering our way into emotional poverty in the 21st. Our social connections are being fractured by the internet; truth is being weakened by access to more disinformation; the extremes are moving apart due to digital silos.

I think AI is going to hasten this trend of less progress happening overall (our brightest minds now try to figure out how to get us to click on more ads), while more progress happens in the mundane and uninteresting (us staring at more ads). Mental health will continue to decline as a result of both. Meanwhile, the next big breakthrough in human development is more likely to be the contagion of an idea than the invention of a thing.

Attention is More Powerful than Intention

An obvious prediction, but one rarely stated: the future will be determined by our attention, not our intention. We stopped going to the Moon because people stopped watching or caring. We will stop going to Mars for the same reason. There won’t be any profit or enjoyment there, once the initial conquest is over.

Because of this, the future of AI will go one of two ways: we will be captivated by our interactions with these tools, which will replace more and more human interactions, and we will spiral into an unthinking mass of inhumanity akin to the chair blobs of WALL-E. OR, we will get bored of chatbots, deepfakes, AI-generated content, and time spent on anti-social media, and there will be a movement to spend more of our time having authentic in-person experiences.

My guess is that both of these will occur simultaneously; it’s just a question of percentages. Lots of people are now retiring early or working remotely to live on boats, in vans, or out of suitcases as they see the world while still contributing in some way to it. Board games and sports fads surge as folks find ways to engage with other folks. Meanwhile, some people will spend most of their lives staring at their phones or computer screens, arguing with bots, clicking on things that aren’t really there, while a handful of companies profit from their mental decline.

AI will play a huge role for both groups: one will have their AI returning most of their email while they continue their in-person adventures, while the other will realize on their first coffee date that they’ve been in love with a fibbing AI this whole time. My fear is the ratio will be 15/85. My hope is that it might be the inverse. The deciding factor is whether we set clear intentions and exert the willpower needed to follow them, or whether we succumb to distraction and mindless consumption. AI can be a tool for either. If only we had a choice.


25 responses to “The Future of AI and LLMs”

  1. Wholeheartedly agree. Well said!

    1. Not me. I am not a robot.

  2. Hugh, I think you shifted centuries (we’re currently in the 21st century as of January 2025).

    There’s a lovely story about AI that is cautionary yet hopeful, by Marshall Brain (RIP): https://marshallbrain.com/manna1

    It’s a worthwhile read. AI is a tool, and like all tools can be used for good and bad. We need to collectively find ways to shift it to the former.

    1. Yeah, what’s this about it being the 22nd century…?

  3. Always enjoy getting these in my inbox.

  4. Hi Hugh,

    Interesting piece as always. Now, I have to register some disagreement with the idea people aren’t original. If you mean most people don’t come up with original story ideas, or their socio-political views aren’t original, sure, and that’s always been the case. It takes people who think beyond the workaday and the commonplace for that.

    But, if you mean this in a broader sense, then I’d say 1) there’s no way to prove it; and 2) everyone has something original. It won’t be clear until you get to know someone just what the originality is.

    What precisely is meant by “people are terrible at being human”? People make choices and live with the consequences of those choices. Sometimes they learn from mistakes quickly; sometimes it takes lots of trial and error. Other times, people don’t recognise a mistake until it has serious consequences. Whatever the case, I think it’s presumptuous to make a blanket claim like this. We simply cannot know the circumstances of millions of unique individuals.

  5. I agree very much that all these change curves are ridiculous for the exact reasons you mentioned. And I like how you verbalized the “low-hanging fruit” for both mechanical and digital progress.

    Interesting, this is the IT vs. Greaser bunch, too.

    When I respond to stuff about AI, I think along this line: unless we give some very artificial definition of the A in AI, everything is AI. Humans are AI. Our brain is a machine, and we are a consciousness observing what it tells us.

    One last note, the religion AI came up with, you said it was much better than anything you were exposed to. But this illustrates that the machines can only learn from what the humans provide. Since I have a very extensive background in religion, I immediately recognized what you linked as a very well worded (could definitely say, very “creatively expressed”) patchwork of key religious concepts mostly seen in the Hindu traditions.

  6. The point where an LLM “has access to all of human knowledge and can synthesize results faster than we can in most cases” is probably further away than it seems, because LLMs only have access to what humans have *said*. Their ability to discern fact from fiction or opinion is pretty limited at the moment. Humans have eyes and ears and the ability to speak and move that help us test what’s true and what isn’t, and that help us build memories, along with an internal model of the world. LLMs don’t currently have these abilities, so you often get so-called “hallucinations” from LLMs that are generated from all the chatter they’ve consumed but are completely wrong. Even so, LLMs are becoming correct “most of the time”, which can lead to some pretty great things, particularly when they correlate disparate pieces of information, which often passes for original thought. On the downside, this “mostly correct” output leads us as humans to just accept whatever output is generated without checking any further.

    What we’re hearing now from the proponents of AI is similar to what we heard during the early days of the web. People loved to dream about all the positive benefits that would come and mostly never thought about the base human desires, instincts, and greed that would also be enabled. The long tail works both ways; it enables communities of insanity just as much as it enables communities of empathy. With deepfake video and audio, we’re starting to see some of the negative uses of AI, and I’m sure we’ve just scratched the surface of what’s to come.

    Overall, AI is destined to be a reflection of its human creators. And since humans are a mixed bag, capable of both great good and great evil, I suspect we’ll see the same from our AI offspring.

  7. What if the people who once decided that God was speaking through some human, decide they believe that God is speaking through some AI?

    1. Those people are dead.

  8. A worldview in which life is seen as inherently meaningless and where long‑term outcomes (including collapse) are regarded as predetermined can lead to widespread apathy.

    If people believe that efforts to shape the future are ultimately futile, they’re less likely to engage in the collective, long‑term planning needed to avert crises.

  9. This blog is an absolute gem find. Got here right after finishing the last pages of “Wool” last night.

    I would just add that in terms of originality, I think blogs like yours will eventually get a “human-verified” badge, meaning they were not created by AI. Humans will crave an original, not-so-perfect flow of thoughts that they can relate to.

    Not sure how much AI was used in this article, but I loved your points and ideas and will see how this ages.

    Thanks!

    Al Perkins

  10. I’m holding out hope that we’ll prove ourselves to be more than just Darwin’s LLMs. Sure, we do a lot of autonomic communication, and in life we tend to blow with the wind. But everyone can count at least one moment (perhaps two) when that wasn’t the case. When a decision was made. A decision from deep down, that defied both super-determinism and Sapolsky’s assertion that the best-case limit for human agency is picking which color of socks we’ll wear.

    To throw a contrarian [i.e. optimistic] angle on all the AI doomsaying, I’d reason that with AI doing so much thinking for us, we might find ourselves becoming *more* in touch with our inner decision engine. The one we’ve let stagnate. Our own little random action generator, so to speak. Perhaps we’ll begin making more conscious choices and society will take itself off of this reactive-social-bot autopilot mode that has made the world mad.

    Or maybe it will get worse . . . I suppose all we can say for sure is that we’re in the danger zone.

    1. I don’t believe in free will and have written about this extensively (one of my books, I, ZOMBIE, is entirely about this lack of belief). But I agree with you, there might be one or two times in life where it “feels” like we are in charge. And what a damning admission that would be!

      We operate mostly on autopilot. It’s useful to pretend this isn’t the case, and society would likely crumble if everyone realized they aren’t in charge of themselves, but it is what it is. The illusion is very useful.

      I suspect a lot of the violence and anger over AI-as-artist comes from puncturing the illusion that what we do is special. That we have a soul, or some inner spirit that animates us. This is the existential crisis that sci-fi authors, philosophers, and futurists have predicted for a very long time. And now it’s here.

    2. This is based on the assumption that the Darwinian concept of “Survival” as the prime motivator is, in fact, accurate. Only if we assume that it is accurate does our ability to commit suicide seem like a huge act of free will.

      In fact, however, it seems to me astoundingly clear that the “Survival” concept of Darwinian schools is completely oblivious and out of touch with reality. The instinct to survive is not what drives creation and evolution. The instinct to BE HAPPY does.

  11. The future snuck up on us. We were all busy watching the sky for flying cars when HAL snuck onto our smart phones disguised as a chipper, neutered butlerbot.

    Whenever I’m being honest with myself (rare), I know that super-determinism provides the only clean explanation for “all of this.” While this is hard for me to accept, it’s not for many of my religious friends. They have no problem with the idea of a pre-ordained universe, and I suppose most humans across history haven’t either. It makes me wonder whether we’re living in the closing moments of the age of enlightenment. Which is perhaps more demoralizing than losing free will itself because I was really hoping for a Star Trek future. Maybe AI will pick up that baton for us.

    I suppose I should read I, ZOMBIE as it is apropos. It has a reputation for being equal parts brilliant and disturbing. The “disturbing” part has kept me at bay, but I suppose I can just spend a few minutes a day doom-scrolling Twitter to increase my tolerance, and then after a few weeks I’ll be ready for super-deterministic zombies.

    1. Hahahaha!

      It’s pretty revolting. Because it needs to be. I tried to make the zombie stuff as disgusting as it feels to lose one’s belief in free will.

  12. An animal follows its nature automatically; it cannot willfully self-destruct. But human beings, possessing volition, can act against their own lives. No deterministic force could make a being violate its own survival—only free will can explain such a choice.

    Even when an animal’s actions lead to its death, the cause is instinct, not a conscious evaluation of life versus death. No animal sits down and thinks, I hate my life; I will end it. No wolf starves itself out of existential despair. No bird jumps from a tree and refuses to flap its wings because it finds life meaningless.

    Only humans can conceptualize their own destruction.

    The capacity to abandon life is the same capacity that makes choosing to live possible.

    1. Humans are eusocial, joining ants, termites, and bees as one of very few species that use communication and job specialization to survive as a group rather than individually. Sacrificing a life for another member of the group is common and instinctual. There have been plenty of studies into this, including how close relatives are quicker to instinctively risk their lives for kin.

      Another instinct in humans: to believe in free will and search hard for any justification for that belief, ignoring all evidence to the contrary.

      1. I could say the same thing about determinists “searching for justification and ignoring all evidence to the contrary.” It could be said about anyone…

        Like I said, an instinct doesn’t mean an ant can conceptualize its own destruction. And even if we were instinctually self-sacrificing, we don’t have to be.

        Why do you have contempt towards Free Will’ers?
        What caused you to lose your belief in it?

        1. I don’t have an iota of contempt for free willers. Not even a little. It’s the natural state of being human to believe in free will. It would be like having contempt for a toddler.

          Several things caused me to lose my belief in free will. The first was noticing my own actions and those around me. Habits seemed to die hard. I noticed people tended to want to do one thing but ended up doing another (diet, exercise, putting down devices, foregoing immediate self-gratification). I also learned how much of our personalities and behaviors are based on nature, not nurture. The work of Judith Rich Harris was instrumental in this. And then I realized that our actions must be caused by one of two things: Either some random chance (the wiggling of atoms, quantum effects, neurons misfiring) or because of some force internally or externally (something said or done by another, the weather, hormones, coffee, an itch, a past experience, a surfaced memory). There is no third option. It’s either random, or it’s caused by something.

          The final piece was determining what this “third option” is that people believe in which cannot possibly be true, and that’s the idea of a homunculus inside our skulls that is pulling levers and making decisions. This homunculus is where people tend to place their belief in free will, but it just moves the problem of free will to another level. What causes our little inner “person” to pull a lever or make a decision? It would have to be either randomness, or caused by some internal or external force.

          This “moving the problem one level deeper” reminds me a lot of how we posit a deity to explain the origin of the universe. Where did everything come from? Someone must’ve made it! We rarely ask who made that person. Creation is just pushed aside one step and never questioned again. Same with free will. We ascribe our belief in it to some inner soul or miniature version of ourselves, and then stop asking what would cause that thing to act. It’s mental gymnastics, all to prop up an illusion so we feel like we are in control of ourselves and our futures.

          I have zero contempt for anyone who wants to believe this. But I also don’t coddle mistruths when I bump into them. I point out the wrongness, while keeping my mind open to change if I encounter better information. Of course, I don’t do this by choice. It’s just the way I’m wired and the product of my past experiences.

          1. I get why determinism is convincing. In the past, I’ve made the same arguments you’re making—how personality is shaped by genetics, how decisions happen before we’re aware of them. It can even feel like we’re just watching a prewritten script play out (Laplace’s demon). I’ve spent a lot of time thinking about this, especially given my father’s work on behavioral biology, which he writes about in Biohistory: Decline and Fall of the West. I know how much of human action is driven by deep biological forces; I even go so far as to intentionally cultivate a high-C lifestyle to affect my temperament at the epigenetic level.

            This is exactly why I dig deeper. If free will is an illusion, why does it persist? Why generate an experience of agency at all? More importantly—does science actually prove that conscious choices don’t matter?

            I understand the argument you’re making from Judith Rich Harris—that genes shape personality more than upbringing, that much of our behavior is influenced by predispositions rather than conscious choice. And I don’t disagree. But predisposition isn’t predestination. The fact that our personalities are shaped by biology doesn’t mean we lack agency—it just means agency isn’t frictionless. The brain is not a blank slate, but it is a self-modifying system. We don’t choose our starting conditions, but we do engage in behaviors that reinforce or weaken certain traits over time, shifting the trajectory of our development.

            You’ve set up a false dichotomy: either our actions are utterly random, or they’re fully caused by external factors. But modern research doesn’t support that. The brain isn’t a passive machine—it’s a predictive, self-organizing system (Schurger et al., 2012). For instance, the evolutionarily recent prefrontal cortex plays a crucial role in regulating the amygdala.

            Modern studies have challenged the old readings of Libet’s experiment, showing that brain activity before decisions is more about preparation than a pre-determined choice (Trevena & Miller, 2009). Readiness potentials can even shift if people change their minds (Maoz et al., 2019). Conscious thought isn’t an illusion—it’s an emergent intermediary that sits within the causal chain and influences it.

            You also frame the mind like it needs to be a single, indivisible cause, but that isn’t how decision-making actually works. The brain doesn’t just react—it models, predicts, and adjusts its own behavior (Haynes et al., 2008).

            As for metaphysical determinism, modern physics has already moved past that. The universe isn’t a perfect domino chain; indeterminacy exists at fundamental levels (Conway & Kochen, 2006). Even in neuroscience, decision-making isn’t dictated by outside inputs alone but emerges from internal modeling, predictive coding, and self-reflection—processes that actively shape future behavior rather than just react to prior states (Hohwy, 2013).

            Causation doesn’t erase control. And neither determinism nor randomness explains why we can step outside of impulses, weigh options, and redirect our own thoughts & actions. That’s not a homunculus or even a “soul”—that’s an emergent evolutionary property of a biological, thinking mind.

          2. This is a good debate, so good that I think the site maxed out on response threads, but I’ll post here in response to Tom’s well articulated post, temporarily taking the anti-free will side. I’d like to propose a thought experiment (I’m sure I’m not the first with this one):

            1) We accept that consciousness is emergent, as proposed by many.

            2) We accept that consciousness can be achieved by a human brain via a single input source of language (no eyes, ears, touch, etc – just words). For instance, individuals who are “locked in” are known to be conscious, because we can communicate with them via neuro-computer interfaces that transmit and receive words.

            3) We should also then accept that a theoretical AI system modeled as a neural network should (given enough computing power) be capable of achieving consciousness just like a human brain.

            4) It is a given that the “state” of such an AI system, as being represented by discrete banks of memory, can be persisted and restored.

            The thought experiment then proceeds as follows . . .

            We assume that such a conscious AI exists, with its sole form of interaction being input and output of text. Keyboard and screen. Not hard to imagine, just think ChatGPT in 10 years. Essentially, an artificial conscious individual trapped in a black box.

            In the first step, we save the AI’s memory state to some external system.

            Next we prompt the AI to make decisions, recording all of the AI’s responses. This prompting can occur for as long as is necessary to assure ourselves that decisions are being consciously made. In a human, for instance, we might decide that it takes 18, 21, 25 years or more to fully establish conscious decision making.

            Then, we erase the AI’s memory and restore it back to its original state. We follow this by repeating the same series of prompts, exactly. We watch to see if the AI responds with the same decisions.

            * If the AI responds exactly the same way, then we must conclude that the responses are deterministic. Its “choices” are a product of state and input (regardless of its self-awareness).

            * Since memory and CPU are known to be physically deterministic, if the AI responds differently we must conclude that randomness has entered the system via externalities (e.g. beta-particle bit flipping or quantum instabilities).

            The problem is that decisions based on external randomness are *not an indication of free will* (because the decision is then dependent upon the externality, which is essentially just another form of input).
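            A toy version of the replay test in Python, assuming (as the experiment does) an agent whose reply is purely a function of saved state plus input. Everything here is a hypothetical stand-in for the boxed AI, not a real one:

            ```python
            import copy
            import random

            class ToyAgent:
                """Stand-in for the boxed AI: replies are a pure function of
                internal state (memory plus a seeded RNG) and the incoming prompt."""
                def __init__(self, seed=0):
                    self.rng = random.Random(seed)  # all apparent "choice" lives in state
                    self.memory = []

                def reply(self, prompt):
                    self.memory.append(prompt)
                    # Deterministic given state + input: same history, same answer.
                    return prompt + " -> option " + str(self.rng.randint(0, 9))

            agent = ToyAgent()
            snapshot = copy.deepcopy(agent)                    # step 1: save the state

            prompts = ["choose A or B", "choose C or D"]
            first_run = [agent.reply(p) for p in prompts]      # step 2: record decisions

            restored = copy.deepcopy(snapshot)                 # step 3: erase and restore
            second_run = [restored.reply(p) for p in prompts]  # step 4: replay the prompts

            assert first_run == second_run  # identical runs: state + input decide everything
            ```

            With no externalities, the restored copy cannot help but repeat itself; any divergence would have to come from outside the box.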

            I believe this is checkmate, though only if the assumptions stand (which, as a free-will agnostic, I think they may not. Perhaps consciousness is not actually emergent. It’s always been difficult to define consciousness, and it’s not getting any easier since the advent of AI).

            But, I think it’s specious to argue that a system is autonomous simply because it is complex and/or has internal feedback systems. I would avoid taking such a stand in the same way that I think it is fruitless to try and prove God via scientific inquiry.

            I’ve heard the argument that given sufficient levels of chaos and entropy, free will might effectively exist even if it doesn’t actually exist. I don’t totally understand this argument yet, but I think it’s like a sufficiently complex random number generator. While not being truly random, it is still *unpredictable*. Unpredictability may be a close enough approximation to free will for us to go about our lives. Cold comfort perhaps.

            All this said, I am curious to read the aforementioned Free Will Theorem and various rebuttals.

          3. — In response to T. R. Thorsen; yes, it seems we’ll have to work around the thread limit. —

            This is a strong thought experiment, but it assumes that if a system’s state can be restored, its choices must be either deterministic or random. That’s another false dilemma, because it ignores the self-modifying, recursive nature of decision-making.

            I don’t agree with the premise of the experiment as I’ll discuss below, but for completeness, here’s the missing option: What if there are no externalities and the clone AI makes a different decision?

            Your AI analogy treats decision-making like a lookup table, where a system always produces the same output from the same input. But that’s not how consciousness works. Consciousness is the faculty of perceiving that which exists, of identifying and integrating reality—not just passively responding to inputs but actively forming concepts, distinguishing essentials from non-essentials, and weighing alternatives.

            The brain doesn’t retrieve memories like a hard drive—it reconstructs them dynamically based on new experiences (Friston, 2010). That’s why even identical twins with the same starting conditions develop different personalities and behaviors (Harris, 1998). The process of choosing changes the chooser.

            Even if a system adapts over time, isn’t it still just following a deterministic path? Not necessarily. The Free Will Theorem (Conway & Kochen, 2006) argues that at fundamental levels, certain systems—like human decision-making—exhibit choices that are not fully reducible to prior conditions.

            You wouldn’t say an AI lacks decision-making if it modifies its own weightings and priorities, shaping future choices in a self-referential way. But there’s a deeper difference—human consciousness is volitional. The ability to focus, to initiate thought, to direct attention—these are acts of self-directed cognition that cannot be explained as mere algorithmic responses.

            You might suggest that maybe we just can’t know if free will exists? But we don’t need absolute proof—just evidence that choice-making is not strictly deterministic.

            If consciousness were merely a mechanical unfolding of prior causes, why does deliberation exist at all? Why does man need to judge, to identify, to integrate? Consciousness, by its nature, requires the power to select between alternatives—otherwise, it would be non-functional, indistinguishable from a machine that simply runs its programming. Hence why even a sophisticated AI is not and cannot be a measure for free will.

            Even if consciousness is difficult to define, its function is inescapable. To be conscious is to be aware of reality. And awareness, by necessity, means distinguishing what is from what is not. If free will were an illusion, then so would be every act of judgment, learning, and conceptual integration.

            You mentioned that unpredictability might be a close enough approximation to free will. But unpredictability is not freedom—awareness and conceptual identification are. The real question isn’t whether a system is unpredictable; it’s whether it grasps reality, forms abstractions, and directs itself toward values it identifies as necessary for its existence.

            Free will isn’t about randomness or external control—it’s about self-generated causation. The ability to step outside of impulses, reflect, and modify future choices is what makes us free.

            Would love to hear your thoughts on the Free Will Theorem—it might shift the way you frame this.

  13. In the Sapolsky/Dennett debate (IIRC), Dennett made a powerful point by referencing a study demonstrating a correlation between anti-social behavior and disbelief in free will. When people don’t believe in free will, society goes downhill.

    My interpretation of his point is that the question “Is there free will?” is perhaps the wrong one to ask. The better question might be, “Should we believe in free will?” Because if we don’t, then we might be doomed.

    Okay sure, it’s Pascal’s wager, effectively an argument for self-delusion. But the stakes are high if erosion of belief in free will is actually a basilisk thought experiment—a social virus that could effectively wipe out even our illusion of free will, an illusion that is arguably good enough for our purposes. (Admitting of course that if free will doesn’t exist then one can’t control whether one believes in it. In such a scenario belief is a sort of entropy, inevitably diminishing to zero. It’ll be our kids’ problem to deal with, along with global warming and the collapse of democracy. Like I said, it’s easier to argue against free will than for it.)

    Meanwhile, I did purchase I, Zombie—in paperback form so Hugh can bask in the warm ephemeral glow of a $4 royalty.
