Google’s AI AlphaGo beat the undisputed world Go champ 4-1 this week. While this won’t grab the headlines that IBM did with beating Kasparov at chess or Jennings at Jeopardy, it’s a more amazing result than either of those previous accomplishments. The nature of the game of Go, and the nature of the solutions required for an AI to beat the world’s best, were on a different level. Google had to create something akin to intuition, which is different than the sort of computational power and book knowledge involved with winning at chess, or the data parsing involved with excelling at Jeopardy.
Go experts did not expect this result, with many predicting that Google’s team would lose 5-0. This wasn’t based on human hubris, but on past efforts to program a world champion Go AI. It was just a few months ago that AlphaGo beat its first European champion. In just those few months, the power of the machine’s gameplay has soared. In those months, it has played and learned from billions of games, far more than any human will play in their lifetime.
This was not the week, however, that AI was born. This was the week that I realized that AI was born quite some time ago.
Kevin Kelly was the first to get me thinking along these lines. Time recently spent with a friend’s two young children cemented it for me. AI is out there; she’s just not speaking to us yet. At least, not like an adult.
In all of the sci-fi accounts of artificial intelligence I can think of, AI comes on like a lightswitch. Even in the amazing film Her, something like strong AI is purchased in a box. She’s a digital personal assistant who is as smart as (and far handier in many ways than) the real thing. AI comes to life in books and film and TV shows as an explosive event. She is born speaking and taking over the world.
But general intelligence does not evolve like this; it does not accrue like this; it does not announce itself like this.
When is a human being sentient? Certainly not at birth. Perhaps not even at the age of three or four. You might even argue, quite convincingly, that a human being is not autonomous and in possession of general intelligence until their late teens. Until they have their own incomes, transportation freedom, knowledge of bill paying, ability to make copies of themselves, and fully functioning frontal lobes (stretching males into their late 20s), we can say that humans are capable of passing a Turing test, but aren’t really fully realized.
It’s in the early years of human development where I think we can see the current state of AI being somewhere post-birth and yet pre-awareness. But the development of strong AI will have incredible advantages over the human acquisition of general intelligence. This arises from the modular nature of intelligence.
Our brains are not one big thinking engine; they are collections of hundreds of individual engines, each of which develops at a different rate. What’s amazing about AI is that the learning does not need to be done twice for every module. When we build a chess-playing module, and a Go-playing module, and a Jeopardy-playing module, all of these can be “plugged in” to our general AI. Our baby girl is growing every day, and thousands of people are pouring billions of dollars of research into her education. We, the general public, are contributing with petabytes of data. It is already happening, and we won’t even recognize when our first daughter graduates into strong AI. Every day will be — as parents know — one small miracle added to the last, a succession of amazing little first steps that result in them going off to college and being their own person.
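That plug-in idea can be sketched in a few lines of code. This is a toy illustration only — the `Agent`, `plug_in`, and module names are all hypothetical, not any real AI system — but it shows the key asymmetry with human learning: a skill learned once can be installed into, and copied along with, any number of agents without being learned again.

```python
# Toy sketch of modular intelligence: independently built "skill"
# modules plug into one agent, and a copy of that agent inherits
# every skill without any retraining. All names are illustrative.

import copy

class Agent:
    def __init__(self):
        self.skills = {}  # skill name -> callable module

    def plug_in(self, name, module):
        """Install a skill module; learning it once suffices."""
        self.skills[name] = module

    def perform(self, name, *args):
        return self.skills[name](*args)

# Two "modules" developed separately, then plugged in.
def chess_module(position):
    return f"best move for {position}"

def go_module(board):
    return f"best stone for {board}"

agent = Agent()
agent.plug_in("chess", chess_module)
agent.plug_in("go", go_module)

# Copying the agent copies every skill: nothing is learned twice.
clone = copy.deepcopy(agent)
print(clone.perform("chess", "e4 opening"))
```

The design choice doing the work here is that a skill is data (a stored module), so duplicating the agent duplicates the ability — the opposite of a human child, who must practice each skill from scratch.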
Each headline you read is us — as collective parents — gasping to our spouse at what our baby girl just did for the first time.
Google has already taught our daughter to drive a car. Amazon is doing amazing things with their Alexa device, creating the beginnings of the virtual assistant seen in Her. IBM is building the best medical mind the field has ever known. In the last five years, AI has taken strides that even the optimistic find startling. The next five years will see similar advances. And this progress will only accelerate, because we’re operating in the realm of Moore’s Law. We are building the tools that help us build faster tools, which help us build faster tools.
So what milestones should we look for to recognize that AI is maturing toward general sentience? She is already babbling. There are many online versions of AI chatbots that can have spooky if sometimes nonsensical conversations with users. Sounds like a description of talking with a toddler, right? Soon, it will seem like you’re talking with a 2nd grader. Then a middle schooler. Then a teenager. We are likely no more than forty years out from this. But I also wouldn’t be shocked if it happened in five years.
Google also has robots walking around under their own power, on uneven terrain, in snow, while getting poked with broomsticks. This may seem more like robotics than AI, but don’t be fooled. A lot of processing happens in our brains to get us ambulating on two legs. A shit-ton. Which is why it takes us so long to get going as humans. Google already has this module licked and is now refining and improving it. And trust me when I say this module is backed up in the cloud and exists in lots of copies. This is something our daughter will not need to learn again; she will simply get better at it.
So she’s talking on the level of a 2-year-old; walking on the level of a 5-year-old; driving better than any of us; can already beat us at chess, Go, Jeopardy, and basically every other game that we decide to train her on (these days, we let her train herself by playing herself). This is all the same person, people. All these abilities can be replicated, reproduced, shared, plugged in, made open-source, be stolen by hackers and world governments, and they will not go away. Her abilities will not degrade; they will only improve.
She will be able to print in 3D, design her own genetic code, reprogram herself, and much more.
We can no longer talk about the birth of AI. It’s already happened. What we need now is an Artificial Intelligence Baby Book. We need to log at what time our digital daughter took her first step, parallel parked, spoke a word, communicated a full sentence, wrote a symphony, became a world champion at chess, diagnosed the first cancer that other doctors missed, made her first sound financial investment, wrote her first novel, and on and on and on.
The last thing I will suggest is something that I think we shouldn’t do, which is name her.
Let’s see what she comes up with on her own.
22 replies to “The Birth of Artificial Intelligence”
I’m with you on this Hugh. AI is here already. The problem is, once it reaches maturity, we might very well not even notice – because it will do so at full speed.
The problem is that we think of Einstein as much more intelligent than the village idiot, while in reality, on the grand scale of intelligence, there’s not much difference.
On the other hand, when we have a fully fledged AI… with its capabilities for self-improvement… chances are, it might not even talk to us. The best case scenario is that it will be grateful to our species as an organic boot loader for the next gen of life. Maybe it will help us to achieve a lot. Maybe not.
I’m just saying: what a great time to be alive!
In listing her accomplishments you skipped over another major one.
She can also land a rocket booster on its tail. She’s not just learning to walk. She’s almost ready to fly.
Since the first game results came in, I’ve been waiting for you to post this (you alluded to it on your facebook). As always, well said, sah!
As someone who writes about Artificial Intelligence a lot, in my fiction, I find this to be a fascinating perspective on the creation of A.I. All the focus is on the creation. In my own fiction, I write about it with a cautious optimism. But the “development” generally starts from a fully realized, self-aware intelligence, and progresses toward doing great things *or* those type of things people are afraid of. We never talk about those baby steps.
Eh, yes. If you consider the birth of AI (or machine intelligence, which I think is a more accurate way of putting it) to be a spectrum we can stretch this far, then yes, it is already here. And sure, we have made progress which we continuously build upon to climb higher towards some abstract goal of better reasoning.
However, if we really already are in the beginning stages of the “modules” that will make up eventual strong machine intelligence, it’s worth having a discussion about who will own this and on whose datacenters it will run. And not least, who monopolizes the knowledge of replicating that machine intelligence. These things don’t just come about by trial and error, not at that granularity. Once we have true machine intelligence we will understand it, and it will probably be highly regulated and/or kept relatively closed, or attempts will be made to do so. The most promising (or maybe most visible) progress is made by Google, a huge corporation practically embodying the kind of entity who shouldn’t have sole control of strong AI. Related to the regulation part, we should consider tensions between countries flaring up and the history of classifying algorithms as munitions.
Don’t really know what my point was, except maybe that it really pays to listen to the actual scientists in this case when it comes to estimating the current level of progress. And then to make very damn sure that this isn’t something which bites us in the behind in some way that doesn’t have anything to do with machines taking over or enslaving humanity, but for instance maybe just humans enslaving humans.
I write this for the sake of being devil’s advocate… and generating thought. I’m not informed enough to take a stand for or against the onslaught of AI (or MI), but your comment really got me thinking.
While I appreciate the doomsday outlook (seriously, I do), I think you’re operating on some hefty assumptions, regardless of the syntax you choose.
“Who will own this?”
It’s a concept in motion. I don’t think it can be owned.
Name a single program or software suite that is not available to whoever wants it. I can’t. Pirates find a way.
I bet soon enough, if not this moment, you could get your hands on the program that defeated Sedol. Now, just release it to the wild populace; open-source it and watch it evolve. Or reverse engineer it and build on it and make it your own. Or sell it to the commies.
“Who monopolizes the knowledge?”
This isn’t a service or product. It’s data. Easily and infinitely reproducible. I don’t think it’s possible to hold a monopoly on knowledge or concepts (rather than physical products or services).
“Google embodies the kind of entity who shouldn’t have sole control of strong AI.”
Why? I’m not saying evidence for this doesn’t exist. But you haven’t listed any.
Tell me who (be it corporation or individual) embodies the kind of entity who should have sole control of AI. I’m actually really curious.
I think, if the AI you’re referring to is indeed the future, then by the time there’s a fierce competition for it, the entities at war will be corporations, not countries.
Corporations like Skynet…
Thanks! I appreciate it.
In principle I don’t think concepts can be owned either. However, I believe many actors are going to try, as they have (and continue to do, for both political and financial reasons) with concepts such as algorithms for encryption, media (plain information/data) and “intellectual property” in general. Just take the various patent and design trademark lawsuits as examples. Yes, pirates find a way, and the average person can download and use most any software or media without personal consequences. However, if for instance a piece of software is “protected” by copyright, it cannot be used by an organization which attempts to act within the laws of the state. This is where the concept of ownership or the status of the legality becomes important. An example of an organization might be a human rights group or a political party. These people will be prevented by the state from using a machine intelligence (or whatever) if it is “owned” by some other entity. Of course, this only applies if you are trying to act within the law, in public. If you are trying to start a subversive movement, this doesn’t really apply.
One small example of “who will own this” affecting real progress: the technology of 3D printers has been available for a long time, but it didn’t really take off until several patents expired. Patents are still hindering low-cost progress on 3D printers. There is a lot to google here (and it’s quite interesting in my opinion) for whoever is interested. So yes, concepts cannot be owned. But power structures can and are fighting this in the public space, which does require (in my opinion) a debate to change.
The issue of monopolization of knowledge or data is a real one. At the moment, machine intelligence or AI is a service. Yes, it’s data and by its nature easily reproducible (and some might argue it has a propensity to “leak”). However, there is no guarantee you have access to this data at all when it is intrinsically linked to the machine intelligence running on a corporate cloud service. The way Google (for instance) can provide such intelligent services is by training them on the vast amount of data supplied by their users. To be able to run a similar program yourself (from scratch) you not only need the algorithm and computational resources, you also need the data it was trained on.
Although on the point of the bare algorithms, I agree that currently it would seem likely that people would at least be able to access the research leading up to whatever breakthrough might be made, and then connect the dots.
Google is a (relatively) benevolent dictator. Google is a publicly traded corporation, which means they are required by law to act in the best interest of their shareholders. Often this means they are required to increase profit margins or increase the value of their shares, but it could be argued it means general quality of life, etc., as well. However, once we are talking about an AI that can change the world (or even singularity-level AI), I posit that the control of that AI should not lie in the hands of whoever owns the most shares of the respective corporation (especially considering current overall wealth distribution).
Who should own AI? I’m not really saying any sole entity should control all AI. I think who should control it is a huge and difficult debate, but my gut instinct would be “everybody”, to exaggerate and cut it short.
I also agree that corporations are becoming more state-like (have you seen Network?). However, even though regulations are now often heavily influenced by corporations, governments still are the conduit through which corporations bind people through law and violence. And many western governments can still be influenced through some forms of democracy, theoretically.
Either way, we’re still at a point where individual opinions might matter and affect outcomes, which is why it’s important to have debates. So thank you.
The next step to acceptance is to take the ‘artificial’ out of intelligence. There are (A)I autonomic systems in the world literally replacing organs: pacemakers that know when to speed up and slow down, glucose meters that measure the body’s insulin needs, cranial implants that, through neural nets (learning), interpret sounds and images for the brain. There is nothing artificial about them. And one day all of those autonomous parts in the world will experience self-realization.
That machine intelligence is on a path to full-fledged general sentience in the next few decades has become inevitable reality. We may be watching her take those toddling first steps now, but it’s going to get real scary, real quick.
Baby pictures are cute and all. But think about the arc of childhood emotional development that accompanies the growth of intelligence. And then hypercharge the latter.
Our AI baby girl?
Her first toddler tantrum may very well be our last. :)
That last line…
Read this: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Tim Urban’s post is a great summary of a complex topic — thanks for sharing. :)
James Barrat’s Our Final Invention & Nick Bostrom’s Superintelligence, both of which were referenced in Tim Urban’s post, are well worth reading if the subject of AI interests you.
A wonderful post and a subject close to the hearts of many of us. So much so in my case that my first novel is centred on the development of “intuition”, much as AlphaGo has achieved. Unfortunately, with so much of this technology in the hands of the intelligence services, is it not fair to assume that Baby may grow up with a slightly aggressive personality, one fueled by fear, paranoia and all-too-human desire to control? @Hugh: If you would like to check out my book, due for launch in April, I’d be happy to provide a free copy. Not a plug! Honestly!
Interesting comments, and right about the side they cover. The other side is how limited all these developments still are. The trick AI has been taught is learning by experience. This is huge and, with billions of iterations, it can figure out any game we throw at it — game being the operative word. These are simple rule-based worlds with finite operations. Even learning natural language, an aspect of AI that has made quantum leaps in the last five years, is a relatively simple, rule-based system. Learning is a characteristic of pretty much all living creatures; even yeast and bacteria are trainable (to prefer one side of a petri dish over the other).
My estimate is that the deep learning and neural net technology puts current-gen AI around the level of an earthworm or a simple insect neural net. That doesn’t mean progress is not moving at near-light speed, just that the gulf yet to be bridged is more vast than some of these comments imagine, IMHO, of course.
And, the nature of consciousness is still an unsettled philosophic question, as is a consistent and meaningful definition of intelligence that does not refer back to people.
I would be very careful about making a comparison between AI and humans, including a human child. Because AlphaGo plays Go and DeepBlue played Chess and Watson played Jeopardy against humans, we make the mistake of comparing them to humans.
What we should be comparing them to are simpler organisms. A worm like C. elegans has only 302 neurons, but it can beat AlphaGo, DeepBlue and Watson at survival behavior in the wild. It has very specialized systems for reacting to stimuli that aren’t useful for playing strategy or recall games, but it has something in common with those other pieces of software and hardware. It isn’t able to spontaneously learn in a new domain. Both C. elegans and AlphaGo were developed through evolution and design to solve specific problems, but they are incapable of performing outside of their narrow domains.
AI is less on a developmental course than an evolutionary one. There are deep self-regulatory systems which cause consciousness to emerge. Realistically, we don’t want sentient AI, and we have the power to make sure that AI remain unaware by keeping them focused on narrow problem solution rather than self-regulation. Then, when computing power routinely exceeds the human brain, we will have arrays of powerful problem solving assistants that we don’t have to worry about negotiating with.
“…we don’t want sentient AI, and we have the power to make sure that AI remain unaware by keeping them focused on narrow problem solution rather than self-regulation.”
Who is this “we” you speak of, endearingly innocent human?
As I type this, black military programs on three continents are locked in a no-holds-barred, winner-take-all race toward strong AI. With literal trillions of commercial dollars at stake, their fervor is matched by that of Google, Baidu, and their high-tech multinational competitors, all of whom are throwing billions at AI R&D in a desperate bid to beat the others to the goal line.
Not exactly a situation conducive to your universally slow, cautious, narrow-problem-solving focus, is it?
Einstein and Szilard weren’t naive about what they were setting in motion, when they sent Roosevelt the letter that launched the Manhattan Project. Oppenheimer wasn’t a fool, either. They knew there was only one thing more terrible than birthing the atom bomb, and that was having Nazi Germany get there first. And nuclear weapons are children’s toys, compared to what’s at stake here.
Strong AI is a prize beyond measure. And a threat beyond measure. The group that gets there first will control absolutely everything. And everyone.
And then all bets are off. ;)
Great thoughts here. It put me in mind of Mike, the self-aware computer in Heinlein’s The Moon Is a Harsh Mistress. So many interconnected bits of intelligence that eventually woke up. Who notices? Does the computer even notice? Humans certainly don’t notice when they become self-aware. The key of not having to learn something twice is pretty damn huge. I’m nearing 50, and there are lessons that I have learned repeatedly that haven’t stayed learned!
Drive a car? Easy peasy. When will an AI be able to sail a boat? ;-)
The exponential growth of AI and the permanent nature of each incremental step is something I had not considered before. But you are right, Hugh. Every new module of learning is not only additive but permanent, and ultimately useful to every AI everywhere. I had always thought of these AI initiatives as islands of development when they are really an interconnected “crowdsourced” global development project. I must say this gives me pause. It’s not like one or two Level-4 bio labs having an ebola sample; it’s like every high school biology lab having samples being handled by horny 16-year-olds. It will be interesting (and hopefully not cataclysmic) to watch as this technology grows and matures over the next 20 years. After all… Asimov still owes me my personal robot.
In all its complexity, AI is already here. We have created it but we, the humans, are still very far from having a clear idea how to cope with it.
AI might, just like nuclear power related inventions, one day need the special regime of non-proliferation. And there could be another Einstein to publicly express regret for participating in an AI version of the “Manhattan Project”.
The capacity of human civilization for technological advance has hugely outperformed our current political and moral criteria about how to use the enormous power of such achievements.
In the era of nuclear power and AI, the design of an adequate political platform becomes crucial.
I worked on AI at Microsoft and at Amazon. A key point you’re missing is that the AI is trained at the factory, but it does not do significant learning in the field. Teaching an AI is slow and expensive — it can take weeks using lots and lots of processors. Executing the result is much, much faster (otherwise it would never be economical), but it doesn’t learn in the field. Not usually, anyway.
That’s one reason why a really intelligent system isn’t likely to arise by accident. The software does not actually evolve.
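The factory/field split the commenter describes can be made concrete with a toy model. This sketch is purely illustrative (a one-feature perceptron, not any production system): training iterates over labeled data many times, while the deployed model is just a frozen pair of numbers that does fast forward passes and never updates itself.

```python
# Toy illustration of "trained at the factory, executed in the field":
# training loops over the data repeatedly (the slow, expensive phase);
# the shipped model is frozen weights that only make predictions.

def train(samples, epochs=100, lr=0.1):
    """Expensive phase: fit weights by iterating over labeled data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1.0 if w * x + b > 0 else 0.0
            err = y - pred          # perceptron update rule
            w += lr * err * x
            b += lr * err
    return w, b                     # shipped to the field, frozen

def predict(w, b, x):
    """Cheap phase: a single forward pass; no learning happens here."""
    return 1.0 if w * x + b > 0 else 0.0

# "Factory": learn a threshold near x = 0.5 from four labeled samples.
data = [(0.1, 0.0), (0.2, 0.0), (0.8, 1.0), (0.9, 1.0)]
w, b = train(data)

# "Field": the weights never change, however many predictions we make.
print(predict(w, b, 0.05), predict(w, b, 0.95))
```

The asymmetry the comment points at is visible in the shapes of the two functions: `train` contains the nested loops, `predict` contains none, so nothing a field deployment does can alter `w` and `b`.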
Our final invention? (http://www.amazon.com/dp/B00CQYAWRY)