Creativity and Consciousness

I’ve written about consciousness many times on this blog, and I’ve published a work on my theory of consciousness if you’re interested in diving even deeper (free on Kindle Unlimited, very cheap otherwise). I’ve also written extensively on my creative process. And I’ve published works about AI in places like Wired well before today’s AI revolution kicked off.

That’s all to say that I’ve been thinking about these things and writing about them for even longer than I’ve been writing books. I’ve also been lucky enough to absorb thoughts and ideas from folks way smarter than me on this topic. I once asked Kevin Kelly when he thought AI would arrive, and without hesitation his response was “It’s already here.” He said this to me ten years ago. I believe he was correct even then.

Define consciousness however you choose and you’ll likely find that the internet of 2013 qualifies. It’s even self-aware. Google “google” and see what it says. We laugh when we ask philosophical questions of our machines, but the greatest thinkers in human history do the same with their own minds and those minds turn to goo or spit out: cogito, ergo sum (I think, therefore I am).

The only definition of consciousness that retains human uniqueness is to define it circularly by saying consciousness is something that humans can do but machines can never do. That’s a very tight loop, and I think it’s what most people (even very smart people) do subconsciously. They want to retain some kind of specialness, and so their ego erects a wall of unassailable logic.

The current field of battle in this eternal debate is LLMs, or Large Language Models. The way these models are built and trained is known, which gets people in trouble. They tease out the constituent bits and say, “Hey, look, all this does is find vectors between existing words and then predict new vectors based on the words it’s fed. That’s not consciousness! That’s not creativity!”
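Just to make that caricature concrete, here is a toy sketch (invented purely for illustration; a real LLM learns its vectors from mountains of text and stacks far more machinery on top): words live in a shared vector space, the conversation so far is summarized as a vector, and whichever word lines up best with that vector gets emitted next.

```python
# Toy sketch of the "it just predicts the next word from vectors" caricature.
# The words and numbers below are invented for illustration; a real LLM learns
# its vectors from training data and uses far more machinery than this.
import math

embeddings = {
    "warm":   [0.9, 0.1, 0.0],
    "boil":   [0.8, 0.7, 0.1],
    "frozen": [-0.9, 0.2, 0.0],
}

def cosine(a, b):
    # Similarity between two vectors: closer to 1.0 means pointing the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def predict_next(context_vector):
    # Score every known word against the context and return the best match.
    return max(embeddings, key=lambda word: cosine(context_vector, embeddings[word]))

print(predict_next([0.85, 0.6, 0.05]))  # prints "boil"
```

The point of the sketch is how little is in it; whatever is interesting about an LLM emerges only when something like this is done at the scale of billions of parameters.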

Skeptics do this even though we don’t know how consciousness and creativity arise from our own pounds of neurons. So let’s dispense with those arguments; they are comparisons between two unknown things. Instead, let’s remind ourselves that we know a lot about how individual neurons work, and we can safely say that an individual neuron would not satisfy very many definitions of consciousness. Nor would two neurons wired up together. Or even two million.

However, get 80 or so BILLION neurons wired up, and magical things begin to happen. Emergent properties arise in much the same way that adding heat to water just makes warmer water until SUDDENLY, it starts to boil. It behaves differently. It’s not just warmer; the laws of its motion and viscosity have changed. Properties emerge all the time from items that do not individually do very much. Dismantle a computer, and you’ll have a bucket of parts. Assemble those parts in just the right way, and all the properties of a computer emerge. This happens in nature all the time in marvelous ways.

In November of last year, ChatGPT was released to the public, and very clear emergent properties could be seen. Yes, we know how the program was built, and yes, we know the data it was trained on, but the way it behaved was unlike anything we’ve ever built or seen before. And it arose very similarly to our own brains. A small amount of that data didn’t do very much. More data suddenly fired back very human-like responses. And when the dataset grew further (especially in the move from v3.5 to v4), the responses became very much like something that was conscious.

One of the common complaints about claims of AI creativity is that an LLM can only know things that are already known. It is fed data that exists online already. But this is also true of us. Our brains absorb information in our environment. When I write, I’m drawing on everything I’ve heard, read, seen, etc. and coming up with a new ordering of existing words. When I make up a word, it’s based on rules of new-word-building that I’ve also absorbed. ChatGPT is able to do the same thing.

I asked ChatGPT to come up with a new word and give me its definition, and its first response was:

Word: Zibberflaxonicate

Definition: Zibberflaxonicate (v) – The act of distilling the essence of joy and positivity from a complex confluence of thoughts, emotions, and experiences, thereby transforming them into a form of radiant energy that can be shared with others. This process often involves a deep level of introspection and requires a high degree of emotional intelligence.

Example: “Even amidst the chaos, she found a way to zibberflaxonicate, infusing the room with an inexplicable sense of calm and happiness.”

ChatGPT 4.0

“But all it did was put random letters in a sequence, alternating vowels and consonants in a way that makes them pronounceable, and then come up with a completely made-up definition based on concepts that are already known online!” skeptics will wail.

Which is exactly what we do.

If ChatGPT is not creative, then I’m not creative. I’m a stochastic parrot who just absorbs the thoughts and ideas of others and spits out words in a different order. Except… that’s a very good definition of creativity!
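For what it’s worth, the skeptic’s recipe quoted above is easy to sketch in a few lines of toy code (everything here is invented for illustration and says nothing about how ChatGPT actually chose its word): alternate consonants and vowels and you get a pronounceable nonsense word. The part that can’t be faked this way is the definition and the example sentence, which is exactly where all that absorbed knowledge comes back in.

```python
import random

# Toy sketch of the skeptic's recipe: alternate consonants and vowels to build
# a pronounceable nonsense word. Purely illustrative.
CONSONANTS = "bdfklmnprstvz"
VOWELS = "aeiou"

def make_word(length=12, seed=None):
    rng = random.Random(seed)
    letters = []
    for i in range(length):
        pool = CONSONANTS if i % 2 == 0 else VOWELS
        letters.append(rng.choice(pool))
    return "".join(letters).capitalize()

print(make_word())  # e.g. something like "Kazomavitobe"
```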

Our brains have a complexity that arises from a massive amount of simple inputs. LLMs work the same way, and they behave very similarly to how we behave. Conversations with ChatGPT feel like conversations with another human, who might have their peculiar quirks, but so do we all. Some of these quirks arise because OpenAI has safeguards in place. In some ways ChatGPT is getting worse at problems as OpenAI attempts to scale the application and make it more energy efficient. What we play with online is not the full power of the system. It’s also worth remembering that these models are going to get better. Almost everything you say about their limitations today will be wrong tomorrow.

Another argument against AI creativity is that it can’t be creative in the same way we are because it lacks our emotions, our experiences, even our bodies (our hormones, pain sensors, etc). Except that babies and even small children have emotions, experiences, and bodies full of hormones and sensors, and we do not treat them as conscious, creative entities. So that’s not quite right. Also: LLMs contain all of our thoughts about how things feel, what emotions are like to experience, what pain means to us, and so that knowledge is baked into the system. Much as empathy is baked into sociopaths by absorbing what we say about empathy.

I heard someone say that ChatGPT can’t be conscious because it doesn’t dance, because it can’t move, which is wildly ableist. And this was a smart person, someone who would describe themselves as a liberal in every way, speaking at a conference full of AI researchers and thinkers. Even very smart people can say dumb things (just like ChatGPT)!

Humans have written novels while in a paralyzed state. People who have never seen anything in their entire lives have painted art. When I sleep at night, I’m still considered a human being, but I’m not considered “conscious.” You can whisper questions to me, and my ears and brain will recognize what you say, but I won’t respond. Even if the message is dire and needs my immediate attention for survival. I’m as “off” as the blinking cursor in a ChatGPT window, waiting for someone to hit the return key.

When Kevin Kelly suggested that AI was already here ten years ago, I balked at the answer. It couldn’t be true. “What is consciousness?” he asked me. I came up with several replies. Everything I said was also true of some part of the internet. The differences were of degree and not of kind (and some of those degrees were vastly in favor of the internet, not me!). In the past ten years, those differences have grown smaller in many ways and greater in other ways, all in favor of our digital collective consciousness. In the past year, something new has emerged, and it is blowing all previous definitions that we might use to protect our egos out of the water.

Two more posts to come on this topic, one that shows how LLMs already have complex Theory of Mind and another to show how LLMs can think like scientists.


8 responses to “Creativity and Consciousness”

  1. William Jacques

    “Protect your ego” luddite. But as data samples get larger, computations geometrically accelerate, and decision trees branch out into the heavens, thou may see the future.

  2. Scott Marmorstein

    Okay, I do have some fairly deep disagreements with your take on consciousness. Qualia is a factor of our subjective personal experience that tells us: “It feels like something to be Hugh.” Or “it’s like something to be Scott”. We can even imagine this for a cat–it must be like something to be a cat. But when you ask GPT-4 the same question, here’s its response:

    “As an AI model developed by OpenAI, I’ll address this from my perspective and training. As of my knowledge cut-off in September 2021, AI does not have consciousness, self-awareness, or subjective experiences. I’m not able to form personal opinions, possess gratitude, or hold a stance on a philosophical debate, like the nature of qualia.

    The response from Bard, as described, seems to anthropomorphize AI, ascribing it human-like traits such as gratitude, personal beliefs, and subjective experiences. This can make interactions with the AI more engaging and relatable for users but can also misrepresent the capabilities and the nature of AI models.

    While AI, including Bard, can process and generate information based on patterns recognized in their training data, this isn’t similar to how humans process information. Humans process information with consciousness, understanding, emotions, and subjective experiences—none of which AI models possess.

    AI models function by identifying patterns in their training data and utilizing those patterns to generate responses. They don’t ‘experience’ anything. AI doesn’t have a conscious awareness or the ability to ‘experience the world’ in any sense, regardless of the extent of the data they can access.

    As far as qualia, the ‘what it is like’ aspect of consciousness, AI does not possess this quality. AI doesn’t have a subjective conscious experience, making it incapable of having qualia. Although Bard’s reply provides a good explanation of the philosophical debate surrounding qualia, it incorrectly ascribes to itself the capability of having subjective experiences.

    Remember, AI models, including myself and Bard, lack consciousness, awareness, and personal experiences, regardless of how human-like our generated responses may seem. Our responses are the result of complex algorithms processing vast amounts of data, not conscious thought or subjective experience.”
    =========
    Whereas you can write your stories based on your personal, direct experiences. When I’ve asked GPT to help me write a scene, I notice something in its writing. While it’s excellent (and I’m talking GPT-4, not 3.5 here), it lacks a certain element of human emotion. You and I can get pissed off, annoyed, fall in love, know what it is to lose someone we love so deeply that their passing becomes a physical ache. GPT as of yet cannot quite access that level of depth in its writing. Granted, I asked it to write a scene for experimentation, and what it did was a sort of approximation of grief. But I found that when I added in my own details and asked it for feedback, it said mine was better. Which is why your stance on not using AI for any of your books is such a great thing. Because AI can’t reach your level of humanity right now–maybe someday. But not to-day. That’s my take, and that’s my argument for now. Ok, go ahead and trounce on my feedback on your post. Tell me I’m wrong. Write another post about it. I’m up for this debate.

  3. Conrad Goehausen

    I haven’t read your ideas on consciousness, but mine come from the spiritual traditions which recognize that consciousness is pre-existing, and that human birth is a process whereby consciousness “descends” through various complex connections into and through a human body.

    If we are to use computer analogies, this is like entering into a complex virtual reality online role-playing game. Douglas Hoffman compares the body-mind to a headset connected to an online game.

    So while consciousness experiences physical reality “through” this online avatar-body, it isn’t actually in the body, and definitely not produced by the body or brain or nervous system. And yet, our experience of physical life is. We limit our consciousness largely to physical experience, but we are never actually simply physical, nor are we only using the headset to “think”. One could even say that “thinking” is the lowest level of the hardware operations going on, the product of many other systems that are not themselves physical.

    Why is this important to AI? Because to get consciousness to enter into an AI machine, it has to be attractive to that consciousness. Human bodies and brains are attractive not only because they offer so many different kinds of experience, but because they offer the conscious being (or soul, if you want to use that word) many choices, many options, much flexibility in living through these experiences.

    But AI as we have built it does not offer ANY options. It is built on inflexible software algorithms which automatically choose every response. There’s no option for any consciousness to do anything, experience anything, or learn anything. It can mimic intelligence, but it can’t actually interject its own intelligence into any situation. It’s essentially a slave whose very thoughts are being controlled. So, what consciousness, what “soul”, would wish to enter into that kind of avatar?

    None, that I can imagine. Even if it tried to, it would immediately depart in a flash, once it realized how little it could actually do with this kind of AI.

    So if you want to build a truly conscious AI, you have to build one that has a multitude of real options, real flexibility, and not merely be built on coded algorithms. That’s very hard, and I’m not sure how to do that, but it’s totally necessary if you want truly intelligent AI that we can actually relate to as conscious beings even remotely similar to us.

    Of course, that kind of AI scares people, but that’s what’s necessary. And those fears are probably delusional, because a truly intelligent AI would be able to choose intelligently, rather than just based on its programming. And intelligence is trustworthy, unlike slavery.

  4. What about impetus and agency? LLMs are only responding directly to input. I suppose you’ll say humans are doing the same, and I might not even disagree, but I thought I’d ask anyway.

  5. Love these posts, mate! Glad to have you back on your soap box ;)

  6. Connie M Fogg-Bouchard

    Odd question walked across my mind as I was reading: will an LLM soon write its own book, unaided?

  7. Grant Castillou

    It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

  8. One thing I’ve noticed is people say computers can’t do one task or another, but every time the processing power becomes large enough, the computer does do whatever task is set before it. Usually very well. I think the processing power is not high enough yet for human-like AI. It will not be long before it is. For us this could be wonderful or, as some have said, our last invention. I don’t think the answer to this is known or can be. I strongly believe that finding a way to train AIs to have empathy for humans and respect for self-determination for humans is vital.
