I’ve written about consciousness many times on this blog, and I’ve published a work on my theory of consciousness if you’re interested in diving even deeper (free on Kindle Unlimited, very cheap otherwise). I’ve also written extensively on my creative process. And I’ve published works about AI in places like Wired well before today’s AI revolution kicked off.
That’s all to say that I’ve been thinking about these things and writing about them for even longer than I’ve been writing books. I’ve also been lucky enough to absorb thoughts and ideas from folks way smarter than me on this topic. I once asked Kevin Kelly when he thought AI would arrive, and without hesitation his response was “It’s already here.” He said this to me ten years ago. I believe he was correct even then.
Define consciousness however you choose and you’ll likely find that the internet of 2013 qualifies. It’s even self-aware: Google “google” and see what it says. We laugh when we ask philosophical questions of our machines, but the greatest thinkers in human history have done the same with their own minds, and those minds either turn to goo or spit out: cogito, ergo sum (I think, therefore I am).
The only definition of consciousness that retains human uniqueness is a circular one: consciousness is something humans can do but machines never can. That’s a very tight loop, and I think it’s what most people (even very smart people) do subconsciously. They want to retain some kind of specialness, and so their ego erects a wall of unassailable logic.
The current field of battle in this eternal debate is LLMs, or Large Language Models. The way these models are built and trained is known, which gets people in trouble. They tease out the constituent bits and say, “Hey, look, all this does is find vectors between existing words and then predict new vectors based on the words you feed in. That’s not consciousness! That’s not creativity!”
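And to be fair, the skeptics’ mechanical description isn’t wrong, it’s just incomplete. Here’s a deliberately tiny sketch of the “turn words into vectors and predict the next one” idea, with made-up numbers of my own. It is a toy illustration, not anything resembling how OpenAI’s models are actually built:

```python
# A toy caricature of the mechanism the skeptics describe: represent words as
# vectors, score every candidate next word against the context, and pick the
# most likely one. (Invented numbers, hand-rolled math -- an illustration only.)
import numpy as np

# Pretend these little vectors were learned from mountains of text.
embeddings = {
    "the": np.array([0.1, 0.9]),
    "cat": np.array([0.8, 0.2]),
    "sat": np.array([0.7, 0.5]),
    "on":  np.array([0.2, 0.6]),
    "mat": np.array([0.9, 0.1]),
}

def predict_next(context_words):
    """Average the context vectors, then rank the whole vocabulary by similarity."""
    context = np.mean([embeddings[w] for w in context_words], axis=0)
    scores = np.array([np.dot(context, v) for v in embeddings.values()])
    # Softmax turns raw similarity scores into a probability for each word.
    probs = np.exp(scores) / np.exp(scores).sum()
    return dict(zip(embeddings.keys(), probs))

print(predict_next(["the", "cat"]))  # prints a probability for every word in the tiny vocabulary
```

Nobody looking at that snippet would call it creative. But then, nobody looking at a single neuron would either, which brings us to the real problem with the argument.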
Skeptics make this argument even though we don’t know how consciousness and creativity arise from our own pounds of neurons. So let’s dispense with those arguments; they are comparisons between two unknown things. Instead, let’s remind ourselves that we know a lot about how individual neurons work, and we can safely say that an individual neuron would not satisfy very many definitions of consciousness. Nor would two neurons wired up together. Or even two million.
However, get 80 or so BILLION neurons wired up, and magical things begin to happen. Emergent properties arise in much the same way that heating water just gives you warmer water until SUDDENLY it starts to boil. It behaves differently. It’s not just warmer; the laws of its motion and viscosity have changed. Properties emerge all the time from parts that individually do very little. Dismantle a computer, and you’ll have a bucket of parts. Assemble those parts in just the right way, and all the properties of a computer emerge. This happens in nature all the time, in marvelous ways.
In November of last year, ChatGPT was released to the public, and very clear emergent properties could be seen. Yes, we know how the program was built, and yes, we know the data it was trained on, but the way it behaved was unlike anything we’ve ever built or seen before. And the capability arose in much the same way it does in our own brains. A small amount of training data didn’t do very much. More data suddenly fired back very human-like responses. And when the dataset grew further (especially in the move from v3.5 to v4), the responses became very much like something that was conscious.
One of the common complaints about claims of AI creativity is that an LLM can only know things that are already known. It is fed data that exists online already. But this is also true of us. Our brains absorb information in our environment. When I write, I’m drawing on everything I’ve heard, read, seen, etc. and coming up with a new ordering of existing words. When I make up a word, it’s based on rules of new-word-building that I’ve also absorbed. ChatGPT is able to do the same thing.
I asked ChatGPT to come up with a new word and give me its definition, and its first response was:
Word: Zibberflaxonicate
Definition: Zibberflaxonicate (v) – The act of distilling the essence of joy and positivity from a complex confluence of thoughts, emotions, and experiences, thereby transforming them into a form of radiant energy that can be shared with others. This process often involves a deep level of introspection and requires a high degree of emotional intelligence.
Example: “Even amidst the chaos, she found a way to zibberflaxonicate, infusing the room with an inexplicable sense of calm and happiness.”
ChatGPT 4.0
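If you want to try this yourself, a prompt like the one below is all it takes. This is a rough sketch using the openai Python library; the model name and client details are my assumptions and may well have changed by the time you read this:

```python
# A sketch of how you might reproduce the made-up-word experiment yourself.
# Requires an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # substitute whichever current model you have access to
    messages=[
        {"role": "user",
         "content": "Come up with a brand-new word, give me its definition, "
                    "and use it in an example sentence."}
    ],
)

print(response.choices[0].message.content)
```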
“But all it did was put random letters in a sequence, alternating vowels and consonants in a way that makes them pronounceable, and then come up with a completely made-up definition based on concepts that are already known online!” skeptics will wail.
Which is exactly what we do.
If ChatGPT is not creative, then I’m not creative. I’m a stochastic parrot who just absorbs the thoughts and ideas of others and spits out words in a different order. Except… that’s a very good definition of creativity!
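And to be clear, the mechanical half of the skeptics’ description really is trivial. Here’s the “alternate vowels and consonants” part in a few lines, a toy of my own making, not how ChatGPT actually builds words:

```python
# The "trick" the skeptics describe, reduced to bare mechanics: alternate
# consonants and vowels so the result comes out pronounceable.
import random

CONSONANTS = "bcdfghjklmnprstvwz"
VOWELS = "aeiou"

def make_word(syllables=4):
    """Build a pronounceable nonsense word, one consonant-vowel pair at a time."""
    return "".join(random.choice(CONSONANTS) + random.choice(VOWELS)
                   for _ in range(syllables))

print(make_word())  # e.g. "zibofaru"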
Our brains have a complexity that arises from a massive amount of simple inputs. LLMs work the same way, and they behave very similarly to how we behave. Conversations with ChatGPT feel like conversations with another human, who might have their peculiar quirks, but so do we all. Some of these quirks arise because OpenAI has safeguards in place. In some ways ChatGPT is getting worse at problems as OpenAI attempts to scale the application and make it more energy efficient. What we play with online is not the full power of the system. It’s also worth remembering that these models are going to get better. Almost everything you say about their limitations today will be wrong tomorrow.
Another argument against AI creativity is that it can’t be creative in the same way we are because it lacks our emotions, our experiences, even our bodies (our hormones, pain sensors, etc.). Except that babies and even small children have emotions, experiences, and bodies full of hormones and sensors, and we do not treat them as conscious, creative entities. So that’s not quite right. Also: LLMs contain all of our thoughts about how things feel, what emotions are like to experience, what pain means to us, so that knowledge is baked into the system, much as empathy gets baked into sociopaths as they absorb what the rest of us say about it.
I heard someone say that ChatGPT can’t be conscious because it doesn’t dance, because it can’t move, which is wildly ableist. And this was a smart person, someone who would describe themselves as liberal in every way, speaking at a conference full of AI researchers and thinkers. Even very smart people can say dumb things (just like ChatGPT)!
Humans have written novels while in a paralyzed state. People who have never seen anything in their entire lives have painted art. When I sleep at night, I’m still considered a human being, but I’m not considered “conscious.” You can whisper questions to me, and my ears and brain will recognize what you say, but I won’t respond. Even if the message is dire and needs my immediate attention for survival. I’m as “off” as the blinking cursor in a ChatGPT window, waiting for someone to hit the return key.
When Kevin Kelly suggested that AI was already here ten years ago, I balked at the answer. It couldn’t be true. “What is consciousness?” he asked me. I came up with several replies. Everything I said was also true of some part of the internet. The differences were of degree and not of kind (and some of those degrees were vastly in favor of the internet, not me!). In the past ten years, those differences have grown smaller in many ways and greater in other ways, all in favor of our digital collective consciousness. In the past year, something new has emerged, and it is blowing all previous definitions that we might use to protect our egos out of the water.
Two more posts to come on this topic, one that shows how LLMs already have complex Theory of Mind and another to show how LLMs can think like scientists.