Humans Need Not Apply

Computers will write novels one day.

Most of the people I mention this to tell me I’m crazy. It doesn’t matter that computers are already writing newspaper articles and stock analyses. It doesn’t matter that computers are already conversing with humans who are convinced there’s a person on the other end of the line. Or that computers can beat us at chess (once thought to be more art than mechanics) and at Jeopardy (once thought to be a puzzle no machine could ever crack).

Those who don’t believe lean on the fact that some past predictions have not panned out. The flying car is a popular distraction. But this is as bad an error as the opposite mistake, which is to assume that every wild idea is an eventuality, given enough time. What makes more sense is to look at trends, see what is taking place in laboratories today, and make reasonable estimates.

This video (shared by a commenter on a previous post) does a fair job of this. You should watch the entire piece; it’s brilliant:

In a previous comment, I shared how I thought this might progress. It won’t happen all at once, and it won’t happen in my lifetime, but it will happen.

First, computers will learn elements of storytelling just as they’ve already learned elements of language. Hundreds of novels will be extensively “marked up” by researchers. Parts of sentences will be tagged, but so will elements of plot and structure. Programs will absorb all of these marked-up novels and learn general principles. This is already being done with movie scripts, where machines can gauge audience reaction by looking at the placement of certain beats throughout the work.
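
To make the “marking up” concrete, here is a toy sketch (the annotation fields and beat names are invented for illustration, not any real research format) of what one tagged scene might look like, and how a program could pull a simple structural tendency out of thousands of them:

```python
from collections import defaultdict

# A hypothetical annotation for one scene of a marked-up novel.
# Field names and beat labels are invented for illustration only.
scene = {
    "position": 0.12,             # how far into the book (0.0 to 1.0)
    "beat": "inciting_incident",  # plot element tagged by a researcher
    "pov": "protagonist",
    "sentences": [
        {"text": "The letter arrived at dawn.", "function": "setup"},
        {"text": "She burned it without reading it.", "function": "turn"},
    ],
}

def average_beat_positions(scenes):
    """Given many tagged scenes, estimate where each beat tends to fall."""
    positions = defaultdict(list)
    for s in scenes:
        positions[s["beat"]].append(s["position"])
    return {beat: sum(p) / len(p) for beat, p in positions.items()}

print(average_beat_positions([scene]))  # {'inciting_incident': 0.12}
```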

The earliest examples of computer-written novels will be slight alterations of existing material. That is, a human author will write a “seed book,” and a computer will modify elements of it to match each reader. Geography and place names will match where the reader lives: the same novel will unfold in every reader’s hometown, with street names and descriptions that match what people see in real life. The protagonist’s gender can be switched with the press of a button, with every pronoun tweaked to match. Relationships can be made same-sex to give LGBTQ readers more diverse reading. Names can match the reader’s ethnicity or preference.
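
A “seed book” swap like that is almost trivial to sketch. Here is a toy version in Python; the placeholder names and the reader profile are invented for illustration, and real pronoun swapping would be harder, since verbs and possessives have to agree:

```python
import re

# A toy "seed book" passage with placeholders left by the human author.
SEED = ("{NAME} hurried down {STREET}, the lights of {TOWN} "
        "blurring in the rain. {SUBJ} never looked back.")

SUBJECT_PRONOUNS = {"she": "She", "he": "He", "they": "They"}

def personalize(text, reader):
    """Fill in geography, names, and pronouns for a single reader."""
    values = {
        "NAME": reader["protagonist"],
        "STREET": reader["street"],
        "TOWN": reader["hometown"],
        "SUBJ": SUBJECT_PRONOUNS[reader["pronoun"]],
    }
    return re.sub(r"\{(\w+)\}", lambda m: values[m.group(1)], text)

print(personalize(SEED, {
    "protagonist": "Maya",
    "pronoun": "she",
    "street": "King Street",
    "hometown": "Charleston",
}))
```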

This application is already within the scope of today’s technology. We might assume it hasn’t been implemented due to lack of demand, but I think the first publisher to try this will see that big data and the ability to customize the product will result in a higher level of satisfaction and engagement. Which will lead to more repeat customers and more sales.

The first place this will be seen is in the translation market. The advances happening here are amazing and destined to compound. Translators and foreign publishers will be creamed by this revolution, but readers and writers all over the world will benefit. Every book will be available in every language, and computers will play a large role in “writing” these editions. The same arguments about nuance and the meaning of words are precisely why no one thought a computer could win at Jeopardy. Keep in mind that the computer revolution is only a few decades old. We’re talking about what will happen 50 to 100 years from now.

In my lifetime, we’ll see grammar-checking software put copyeditors out of work. With a few clicks, every novel will be error-free. Eventually, these programs will look beyond punctuation mistakes and typos; they’ll catch continuity errors and historical inaccuracies as well. You’ll enter the dates covered by your historical fiction, and any mention of technology that doesn’t fit the period will be highlighted. For a sense of how quickly this is progressing, look at how coding software can flag bugs as you type.
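
A crude version of that historical check is easy to imagine. The sketch below (the word list and dates are placeholders, not a real reference database) simply flags technology mentioned before it plausibly existed:

```python
# A toy anachronism checker: flag technology mentioned before it existed.
# The word list and dates are placeholders, not a real reference database.
FIRST_APPEARED = {
    "telegraph": 1837,
    "telephone": 1876,
    "radio": 1895,
    "television": 1927,
}

def flag_anachronisms(text, story_year):
    """Return any listed technology that postdates the story's setting."""
    hits = []
    for raw in text.lower().split():
        word = raw.strip(".,;:!?\"'")
        if word in FIRST_APPEARED and FIRST_APPEARED[word] > story_year:
            hits.append(word)
    return hits

manuscript = "He reached for the telephone, then thought better of it."
print(flag_anachronisms(manuscript, story_year=1850))  # ['telephone']
```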

Eventually, writing computers will advance until they’re able to pen individual scenes. There are already infant AIs that can converse with humans (a quasi-win went to an AI this year in the annual Turing Test), so dialog will get better. Then someone will program a computer to write an infinite array of bar fights, and that one plug-in will join thousands of others. Books will be written under the guidance of humans, who create the outlines of plot while computers fill in the details (a maturation of what James Patterson does with his ghostwriters today). Eventually, even the ability to create plot will be automated.
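
The “bar fight plug-in” could start as nothing more than templates and random choices, a far cry from real scene-writing but enough to show the idea. A toy sketch, with all the phrasing invented here:

```python
import random

# A toy "bar fight" plug-in: pure templates and random choices, nothing
# like a real scene-writing AI, but it shows how a plug-in might start.
OPENERS = [
    "A glass shattered near the jukebox.",
    "Someone's chair scraped back too fast.",
]
ESCALATIONS = [
    "{a} shoved {b} into the pool table.",
    "{b} swung first and missed.",
]
RESOLUTIONS = [
    "The bartender ended the argument with a nod toward the door.",
    "Both men were in the gutter before the song finished.",
]

def bar_fight(a, b, seed=None):
    rng = random.Random(seed)
    return " ".join([
        rng.choice(OPENERS),
        rng.choice(ESCALATIONS).format(a=a, b=b),
        rng.choice(RESOLUTIONS),
    ])

print(bar_fight("Dale", "the stranger", seed=7))
```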

If you doubt that computers can have a role in art, look at how digital painting has evolved. It started with digitizing analog art, which was then filtered or modified to look different. Now photographs are manipulated to look like original art, which some artists do for profit, passing the entire affair off as hand-drawn. Computer-created music and fine art are already being made and enjoyed. The same will be true of literature one day.

At first, people will balk. Then they’ll find out many of the books they’ve already read and enjoyed were written by a computer, and the author was an actor hired to sign books and talk about “motive” and “theme.” There will be a small market for “hand-written” books, just as there is an Etsy for those who prefer things made without the intervention of a machine. In ten years, we’ll see people chatting in the front seats of cars that are driving themselves. We will live with robots more and more (mine just vacuumed the house). And these things won’t seem quite so insane.

82 responses to “Humans Need Not Apply”

  1. There was a novel in the “Danny Dunn” juvenile SF series in which Danny and his friends used a computer to generate school essay papers for them…by typing in the entire text of their textbooks and letting the computer summarize them. (Then the professors whose computer it was had it create an original music composition for them.)

    While the book stretched credibility in some ways (right, like high school kids would find typing entire books into the computer less work than actually writing a paper on their own), and the computer was more Clarkian magic than solid science (it missed the boat on things like OCR scanners, of course), it was nonetheless kind of prescient.

  2. Hugh, you are 100% correct. However, we may find that the moment when computers are generating this content for us is far too brief.

    As soon as these story AIs get around to the problem of improving how they do their jobs, it’s unfortunately likely that these expert systems will ‘foom’ out of control. And maximizing publishers’ returns on stories probably isn’t the core value we want a self-reflective, self-improving AI to have.

    I sent this out on Twitter, but I’ll drop it here as well: a fantastic short tale, posted as a blog entry, about the likely incorrect assumption that we would have time to react once we create something that ‘thinks’ at silicon clock speeds.

    http://lesswrong.com/lw/qk/that_alien_message/

  3. Can a computer write a story? Sure. Can a computer capture all the emotional subtleties that make a story good? Not likely.

    1. This is an argument from incredulity. Life has only gone through the type of change we’re about to see twice before: the first chemical replicator that led to RNA and DNA, and the arrival of human intelligence in Cro-Magnons.

      It’s likely.

      1. I’m with you. Connecting transistors to one another and building up their number and complexity is an entirely new sort of thing. We’ve only been at it for a single generation. In 100 years, computers will be doing things that seem impossible today. Not just unlikely, but outright impossible. They’ll be doing the unlikely things in 20 years.

        I think the people who believe this probably view the human body as an essentially mechanical object. Complex as hell, but mechanical. Those who think computers can never be artistic (despite the music example in the above video) view the human body as imbued with some sort of mystical animating spirit. That’s probably where we diverge. Dualists vs. monists.

        1. I know we are mechanical, no ‘spirits’ within. Maybe it’s because I have a few more years around here than you, or because I didn’t see my first computer until the age of 18, but promises are always broken. It’s not just the flying car; it’s the automated home, the self-driving car. Both of these technologies have been around for a decade, but just because they exist doesn’t mean they will ever become used…much.
          Maybe this idea of AI touches the religious and nonreligious differently? I don’t believe in anything higher than man, so maybe I have trouble thinking my computer will outclass us someday. Others believe we live in a world with a creator who can do absolutely anything, so it is easier for them to believe in absolutely anything? I don’t know; it’s an idea I need to spend more time on. One would expect an atheist like me to be the first one saying AI will rule the world some day, but I don’t see it. Computers are made by us, so I have a hard time thinking we can make something better than ourselves… I guess.

          1. An appeal to age as a measure of authority?

            Let’s just agree to disagree. No harm in that.

          2. I never claimed authority, but I have watched the YouTube video, a video with no links to any proof of any kind. Please let me read these articles that robots have written; then I will have something to go on. It is very easy to simply say robots are already writing sports stories, but unless someone points to the proof, I will choose to take that statement with a grain of salt.

          3. Hi John. We are more likely to create something unlike us that has a completely different set of objectives when trying to quantify existence.

            We basically know what it is to be human and try to seed computers with the same knowledge and motivations as us. I dare say that being an AI is different to being a Human. Not necessarily better or worse. All we can really do is give them the tool kit that we have developed over time to question things and let them figure it out on their own.

            Other than energy consumption, they will have different wants and needs to us. I suspect that they would not be great with art and novelty in a human context simply because it’s not interesting. Conversely we might see their art as strange and unappealing, if we can even see it.

            That said, many weird and wonderful things are yet to come. But there may be some rough patches on the way.

        2. My comment was for John.

          Here’s one link. There are quite a few of these programs out there. A Google search will turn up a lot of reading material. Of course, a computer is directing that search.

          http://www.nytimes.com/2010/11/28/business/28digi.html

          1. It is interesting, no doubt, but it writes as I did five years ago, so since computers get twice as good every 18 months, I am safe until next summer, lol.

        3. Actually, it is a question of semantics. What do you mean by “artificial”? A human mind, constructed in silicon, is no more “artificially” intelligent than you are or I am. Emergent intelligence is not “artificial,” no matter its physical basis.

          The examples cited are in no sense “intelligent”; they are merely automated. There is a fundamental disconnect. Yes, it is possible to automate writing a book. That is not the same thing as creating a book.

          Also, your view of future advances in computing power is rather inflated. Moore’s Law is not a blank check.

          1. That was also my point. It is easy to say a human brain has ‘such and such’ million transistors and that therefore, when a computer has the same number, it will be as intelligent as we are, but that has no basis in reality. We just don’t know; a computer might need a thousand times that number of transistors before a spark of self-awareness comes to it, and that might be thousands of years away yet. Also, intelligence might be biological, in which case no matter how big a computer is, it will never be more than a slave mind.
            It is like a rocket: we can think that if it keeps getting faster it will eventually move through time, but we can’t know the reality. There might be a point where the rocket just can’t go any faster, or it blows up. It is possible that even a computer a million times smarter than us will hit a brick wall, and the idea of AI is a point it will never reach.

      2. Lol, ‘argument from incredulity’? When you are using an argument from ignorance? Computers are not alive, so it doesn’t matter how many times life has gone through any kind of change; that has nothing to do with your Macintosh.
        Apples and oranges.

    2. I agree with Sara.
      There are 7 billion people on Earth; about 5 billion can write, but maybe 1% can be writers. To assume that because a computer can be taught to think it can suddenly write has no basis in logic. Colleges are filled with people with doctorate degrees in language who can’t write a book anyone wants to read. You could have all the knowledge there is to have, and it still wouldn’t mean you can write.

      1. Computers already write. Further improvements will be of degree more than kind.

        Everything we are is based on biology, which is based on chemistry, which is based on physics. Simulating this is mostly a matter of processing power. Real AI will happen. When it does, there isn’t anything we can do that AI won’t be able to do as well. I’m predicting someone will see real AI in 100 or so years. If it takes 1,000 years, it’ll happen.

        1. Hugh, I’d love to hear your reaction to the link I posted above. The problem with the timescales you are propis

          1. Phone glitch.

            The problem with the timescales you’re proposing is that it could be a matter of minutes, hours, or a handful of days that we would have to react. Neurons transmit so slowly that we wouldn’t even see it coming. So maybe 50 years until AI, but 50 years and two days is when the discussion about the AI is moot, as it’s already rocketed past us.

          2. Yeah, I’m reading a book right now called SUPERINTELLIGENCE, and it’s about this very problem. Great book.

          3. Consciousness is not just a product of processing power. It is a function of the right information coming together contained within a minimum threshold of computing power with enough modeling hierarchy to conceive of sufficiently complex representations of objects. If intelligent machines don’t have the information required to regard themselves as entities, they will not become conscious.

            We get our sense of self from the connection between our minds and bodies. It’s more complicated than that, but that is the gist.

            The point is that computational power isn’t the only component of sophisticated behavior. That computational ability needs to be organized in a deep hierarchy that is plugged into the right information.

        2. Well, if we scale up to 1,000 years, then okay, lol, but my little brain doesn’t think that far ahead. But will every computer AI be the same? Because Einstein couldn’t have written Wool, so intelligence has little to do with artistic creation. If there will be a million computers with a million different personalities, then maybe one of them will write something I will want to read….

          1. Something that can improve its own intelligence cannot be compared to even the most intelligent human, who can’t rewire his own source for very long. Imagine billions of Einsteins along with billions of Hugh Howeys running at speeds billions of times faster. If it really wanted to find out what JohnMonk would have written, it would just simulate billions of copies of you. Let’s just hope that its simulations aren’t so good that they are sentient, and if they are… you’re more likely to be one of the Sims than the real JohnMonk.

            The future will be interesting.

        3. “Everything we are is based on biology, which is based on chemistry, which is based on physics.”

          And you know this how?

        4. This line: “Simulating this is mostly a matter of processing power.”

          Actually, this statement is false. The problem is not processing power at all. The problem is rules. What are the rules? How are the rules made? Who makes the rules about how rules are made? When do the rules about making rules change, who changes them and why? Who makes the rules about changing the rules that make the rules about deciding whether to change the rules or not? And so on.

          As long as there are rules, we are in the realm of automation. When we step away from the rules, into free will (don’t get hung up on absolutes here), we are in the realm of intelligence. There is no middle ground, and no path from one to the other.

          So “real AI” is an oxymoron. And there are a lot of implications to that oxymoron.

  4. Sorry, still don’t buy it. Computers beat us at chess not because they are thinking, but because they have the knowledge of all the games played by humans before, so they simply make the move that proved best in the past. Same with Jeopardy, it is a knowledge game, not a thinking game.
    I have also seen the AIs that ‘sound’ human if you ask prescribed questions; it is more of a parlor trick than a sign of understanding. Things like Siri can’t even understand speech yet, even after 20 years of voice recognition, and you can’t make speech until you understand it first.
    A dog can be taught to answer to its name and do tricks, it doesn’t mean the dog understands you or will ever answer you back.

    1. You say: ” Computers beat us at chess not because they are thinking, but because they have the knowledge of all the games played by humans before, so they simply make the move that proved best in the past.”

      This is factually incorrect, and your knowledge and understanding of AI is clearly deficient. Deep Blue and its ilk don’t think like humans do, but they certainly don’t merely rehash old moves from past chess games. There are more chess combinations in the first 20 moves of a game than there are atoms in the known universe; every game is different, and as such, your thinking is clearly wrong.

  5. You think that’s air you’re breathing? Neo will save us!

  6. So…

    Perhaps the computers will keep us around due to needing readers…

    Or, computers will also read and criticise, and it will become a self-referential computer black hole that vanishes into itself, leaving us stumbling around wondering what the heck just happened…

  7. I can just imagine what the future will be like:

    “ASHBURN, VA. — Out here in the woods, at the end of not one but two dirt roads, in a data center equipped with a picture of Ada Lovelace, a high-speed connection and a copy of Kidder’s “Soul of a New Machine,” Galactico’s dream of dominating the publishing world has run into some trouble.

    The DATABOT 348C, who summers in this coastal hamlet, is a best-selling writer — or was, until Galactico decided to discourage readers from buying books from his publisher, Computo, as a way of pressuring it into giving Galactico a better deal on e-books. So it wrote an open letter to its readers asking them to contact DATABOT 128D, Galactico’s chief executive, demanding that Galactico stop using writing software as a hostage in its negotiations.

    The letter, composed in the data center, spread through the literary software community. As of earlier this week 909 computers had signed on, including household names like DATABOT 568E and DATABOT 461T. It is scheduled to run as a full-page ad in The New York Times this Sunday.”

    1. Hopefully when computers are involved… we won’t have any more WhaleMath (TM). :-P

  8. I agree with you, Hugh.

    AI can beat us at chess because it knows the outcome of every game based on every move and position its opponent could play. AI can win Jeopardy because it has access to databases filled with random trivia.

    Why is it such a leap to think that an AI with access to a database filled with millions of books (like that of any retailer selling ebooks) couldn’t pick up story rhythms and cues? Surely it can understand nouns, verbs, sentence structure, dialogue, story arcs, etc. I’m not saying the first few (thousand?) books would be any good, just like how the first cars, rockets, and computers weren’t very good. But there’s no reason to think that with retooling and programming the books couldn’t get better.

    I think they would be most successful at “choose your own adventure” types of books. I can see a huge market for readers who want to read a specific type of book with certain types of characters and plot twists, but can’t be bothered to hunt around looking for one that’s already been written. To take it a step further, I can imagine a reader taking a public domain book, say Wuthering Heights, and telling the software that they don’t want any of the characters to die and that they all should live happily ever after. Hey presto! The reader gets the version of Wuthering Heights they always wanted to read.

    I’m not saying this is right or wrong, ethical or not. I just think there would be a huge market for services like this, and money talks. I think this will be in our future, sooner rather than later.

    1. I can think of really crazy applications, like e-readers that track your eye movement to see what parts you are re-reading, check your pulse to see how you are reacting, look for dilation (and also ambient light to account for that) and so on.

      This sounds nuts, I’m sure. Imagine telling Henry Ford that one day cars would drive themselves, scan the road ahead looking for speed limit signs and obeying them, while scanning the driver’s eyes to make sure they’re staying awake, while the climate control samples the interior air and adjusts to maintain a set temperature, with zones for each occupant.

      Flying cars were never a good idea. A mishap equals a near-certain death, rather than a changed tire on the side of the road. Not to mention the damage to property below. Focusing on that rather than the miracle of the modern car leads to a stunning blunder on the part of the cynics: The car of today — the Tesla S, say — is far more fascinating and astounding than a flying car, which was achievable 50 years ago but hasn’t materialized due to obvious idiocies associated with that pipe dream.

      In 100 years, we’ll have computers that can write novels, and people will take that for granted and say: “Where’s our moon base?” As if that was ever a good idea.

      1. Just to keep the argument going, lol: you call the flying car an idiotic idea (and I agree).
        Can it not be argued that AI is an idiotic idea as well? If a human like me knows the planet would be better off with half as many people, then surely an AI running things would come to the same conclusion? If the AI decides you would be better off planting crops to feed people, would we listen? There are people smarter than all of us out there whom we ignore every day; the only difference is that AI might gain the ability to force us to do what is best for all… and that is scary. AI means loss of freedom; no superintelligence would allow crazy man to do as he pleases.
        I see no benefits to AI. Medical advances so we can work the fields longer?

        1. There will always be a market advantage to be gained by having a slightly better expert system than the other guy. Whether it’s DARPA and other countries, or Google vs. Baidu, or trading companies on Wall Street.

          There is no point we would get to and say: “no faster, no more resolution, no smarter, no more capable.”

          It’s going to get pushed to the limit.

          The challenge is to see it coming, accept it, and race, RACE to finish theoretical research to get a framework of values that the AI would self correct towards.

          Do the math to get a rational agent that would correct and self modify in a way that only makes it more rational and more reasonable.

          So we can get the ball rolling with at least some confidence that we would get ‘what we want’ rather than ‘iron is useful for making more paperclips, and there is iron in mammalian blood cells’.

          There aren’t many researchers with decent budgets working in the field of mathematically provable friendly AI theory. But there should be.

          1. “There will always be a market advantage to be gained by having a slightly better expert system than the other guy.”

            An appeal to history will show this is not the case. It can only be the case when a very specific environment exists. That environment is being taken for granted by Hugh (and many others here). The problem is that the environment is fragile and gets more fragile as one proceeds down this path. The people doing this work are fallible (and have many other failings). They also take the necessary environment for granted (in all examples I’m aware of, and there are many).

            So in fact, the system is self-limiting, and the limiting factor is most often the Law of Unintended Consequences (there are some others). Being alive when the limit is reached, I expect, will be very unpleasant.

          2. “There will always be a market advantage to be gained by having a slightly better expert system than the other guy.”
            Really? VHS didn’t beat Betamax because it was better; it beat it because it was cheaper. The same reason Windows beat Mac: cheaper.
            Better means nothing; price is everything.

      2. Flying cars would work great if they were driven by an entirely new form of physics, such as anti-gravity devices that would keep them from ever falling or even bumping into anything else. See, just needs a little imagination.

      3. I have my own predictions. In 30 years, everything will be about “AI,” and AI will be “almost there.” AI will “almost” do things like write novels (if they are formulaic enough — which is what they do now to generate content).

        In 50 years, someone will produce a mathematical proof why (so-called) AI will never advance beyond that level. It will be entitled: “Turing Machines Don’t Dream.”

        1. Lol, people forget we heard all of this thirty years ago, and here it is again, a prediction for 30 years. I predict in 30 years we will be repeating all of this again.
          We are not smart enough to know how complicated a computer has to be to achieve AI; we might be a million years away. A computer ten million times more complex than today’s best might still not be half smart enough to construct AI.

    2. Deep Blue did not have the capacity to know every possible outcome of a chess game. The number of different piece configurations on a chess board for placing merely 12 unique pieces is over 1,000,000,000,000,000,000,000. Even at a little over a single byte (10 bits) per configuration, the memory necessary for this incomplete representation of all possible chess configurations is about 2 zettabytes, a number much higher than anyone ever talks about. This doesn’t include any information about how these positions are linked, and it is an extreme lower bound on how big the problem is.
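
      A quick back-of-the-envelope check of those numbers in Python (math.perm needs Python 3.8 or newer), using the comment’s own assumption of roughly 10 bits of storage per configuration:

      ```python
      from math import perm

      # Ways to place 12 distinct pieces on 64 squares, ignoring legality:
      configs = perm(64, 12)                # 64!/52!, roughly 1.6e21
      print(f"{configs:.3e} configurations")

      # The assumption above: about 10 bits of storage per configuration.
      zettabytes = configs * 10 / 8 / 1e21
      print(f"about {zettabytes:.1f} zettabytes")  # roughly 2 ZB
      ```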

      1. True, but Deep Blue did know every move ever made by man in every recorded game, so it didn’t have to calculate. I don’t need to know how to play chess to win, not if I have a record of every game ever played and can see what kinds of moves won games before.

  9. Now there is a book idea, for all your readers, for free. Take this idea and run with it….
    A robot that writes stories in an office of other robots writing stories, but one day a new robot writes something that blows their circuits away. Is the new robot that much better? Or does he have a pet human at home giving him pointers?
    Free, lol

  10. “This application is already within the scope of today’s technology. We might assume it hasn’t been implemented due to lack of demand, but I think the first publisher to try this will see that big data and the ability to customize the product will result in a higher level of satisfaction and engagement. Which will lead to more repeat customers and more sales.”

    Actually, there are a couple of publishers who do novels on demand already (maybe not written by computers yet, but close enough to personalize them). One is in French (and I got my name near the author’s on that personalized novel); the other is in English and is expanding its titles. It started with personalized romances http://www.ustarnovels.com/personalized-romance-novels/ and now it personalizes even classics… I bought one of those when there was only the UK site, and it was fun! ;)
    Now let me go write that SF story about writers of the future, or the most famous AI of the planet… Or maybe it will be an alien hybrid, who knows! :D
    Great post, Hugh!

  11. By the time computers/AI can write (what we would consider now) good books, the book industry (traditional, indie, and otherwise) will be long dead, having been replaced by virtual reality sims, enhanced social media, entertainment modules, etc.

  12. It may well be that someday computers will write novels. But that won’t stop humans from writing them too. The impulse to tell stories is so much a part of the fabric of human life that it’s not likely to end simply because computers can write them also. And that makes a big part of the difference. I don’t think computers need to write novels. They will do as they are told by human programmers, but there’s no existential need in them that requires this of them. And that existential need is a part of what makes novels written by humans different from anything a computer might write. Even if the writing is passable, the inner need isn’t there, and I’m not sure it will fulfill the inner need of readers to share in that inner life of the author. But at least it will give humans more competition in the story-telling arts. Which isn’t necessarily a bad thing.

  13. As for the long-term future of the novel in light of technological advance, I’d suspect that technological advance will itself be supplanted by spiritual advance. In other words, as we learn the actual physics of consciousness, the need for machines will diminish. As will the need for computers. The ability to communicate without words will supplant language itself. And new art forms will emerge from that. Technology as we know it is merely a temporary step on the way to a much greater understanding of ourselves and the universe we live in. Computers themselves will be found to be sorely lacking in comparison to the advanced use of consciousness.

    And yes, that’s a theme of a novel I’m writing, in case anyone was interested.

  14. If computers can produce novels people want to read, there is really no reason to expect books will be written for the market. Consumers can just order up a book with the elements they want. Thriller, female lead, diamond mining, Amsterdam connection…

    Then the computer spits it out and makes it available to the consumer who ordered it.

    1. Thing is, most readers want to be surprised. If they already know the story because they created it, why would they want to read it?

      1. They are free to input whatever they want. Now, they read the blurb and get an idea about the book, but the blurb holds lots back. If computers can write the book, at one extreme, they could simply ask for a thriller. At the other extreme, they could outline the whole book. Consumers decide where on that spectrum they want to be.

        Computers could free consumers from authors. The big disruption.

        1. In that regard, I think AI would more likely be an assist to authors and other creative types, rather than a replacement for them. Whatever AI computers could do on their own, creative people could use to make things even the computers wouldn’t have done as well on their own. Maybe authors won’t have to be so good at the writing itself, but they still would be able to dream up stuff the computers wouldn’t do as well. Already computers provide valuable assistance to engineers, designers, and all sorts of creative people building stuff. But they don’t actually replace them. I think the same will hold true for writers of the future, who will be more like producers and designers than authors agonizing over every word choice or grammatical construct. They will even be involved in designing a “style” for the computer to write in. But not replaced entirely, simply because the combination of the human with the AI computer will probably always beat either one operating entirely on their own.

          1. I don’t know what the future holds. But we have seen a pattern where those who are displaced are sure they are so important they can’t be replaced. They regularly tell us. Authors are subject to the same failing.

          2. As long as human beings value what other human beings have to say, authors will indeed be valuable. If that changes, all bets are off. But then, in such a world, what will be the need for human beings at all? We may need machines, but will machines need us?

            I think people will opt for a world in which people still matter, and what people think and say still matters. If that’s not the case, as a person, it’s beyond my ability to imagine.

  15. A couple of things. I enjoyed the video, and I think its conclusions are obvious to people who have given the matter some thought. We do have to plan for fewer and fewer people having jobs. Philip Jose Farmer’s “Riders of the Purple Wage” was a thought-provoking look at the problem.

    So far as computer-written books are concerned, I agree that they are also likely inevitable. What I do wonder, however, is about the nature of a computer that could write a great piece of literature, or even a very good one. Might such a computer actually need to be self-aware, a “person” in its own right?

  16. Here’s the real question, Hugh: what do you see as the upside of this?

    AI (in the sense you seem to mean it) is not going to invent itself. People will have to invent it and others will have to adopt it. Why? Are they lazy? Are they just dumb? Are they servile? What is the motive here? If a “computer” can do everything better than you, what do you do? You are proposing a divergent series here, in which ultimately you personally have no value, as there is nothing you can do or be that has any value. So you can’t do whatever you want, because the logic of what you propose implies you are not allowed to want.

    So how do you make it converge? (Assuming you want it to.) To what? Where is the balance point? How is it maintained? And by whom?

    1. The upside to AI, if used properly, is that it’ll be a tool that makes our lives easier. Fire, wheel, language, lever, steam engine . . . it’s why we tinker and build things.

      There are programs that are better at reading medical scans and diagnosing cancer than any human doctor alive. Knowing this, who would you trust to read your scans? A doctor? Or the machine?

      The upside to discovery and science is that we are only going to be here for a limited amount of time, even as a species. Why not fill that time gathering as much information as possible and seeing what we can build? Why be content to go through cycles of birth, war, and death, while trudging through a 9 to 5 and eking out a living?

      There are all kinds of downsides, of course. But none as bad as apathy.

      1. The “if used properly” is of course the gotcha. So the question there is: “What does properly mean and how is it enforced?” One thing I’ve noted in my many years of following the development of AI is that people pursuing it do not take that question seriously. They are primarily utopians, ungrounded in reality. But as “AI” (or whatever you want to call it) moves forward, that attitude must change faster than the technology itself advances.

        Having said that, I could take issue with the idea that AI is a tool that makes our lives easier. This is not a given. People have a blind spot when it comes to technology — they assume that how they view it is immutable. Your view of AI — and the potential effect it has on you and your behavior — is predicated on the fact that you were not raised with it. A person raised in an AI environment will be a rather different person than you are. As such, AI may make their life harder — in that they may be more prone to depression, for example — because the pervasive effects of AI stunt their development. (This sort of thing is already being hinted at in social metrics, for example.)

        You are entirely correct regarding gathering as much information as possible and seeing what we can build, and not being satisfied or apathetic. But that has nothing to do with AI. Could not AI be the most powerful force for making people apathetic? If I had to, I could make a good argument that the only effect of AI on society would be to lead to near-total apathy. (As an aside, did you ever read Larry Niven’s story about “wireheads”?)

        So my point is that the views of AI and its effects have been quite naive, because (right now) people feel they can be. The “Internet of Things” (a horrible idea) shows this. It’s time for people to take a much more mature view of AI and related technologies. I doubt they will, however. But time will tell.

        I very much appreciate the response.

      I remember how all the kitchen and laundry improvements were supposed to make our lives easier too, but stay-at-home moms spend as much time in the kitchen and laundry room today as they did a hundred years ago.
        My greatest fear of AI is the same as my fear of the Democrats. Control. Democrats want government involved in every part of their lives, governing them, and if you want that for AI, for something to tell you what to do, then I am afraid.

        1. John…I have read enough of your posts to know you will likely never accept an idea that collides with your world view.

          You say that SAHMs (or dads) spend just as much time in the kitchen and doing laundry as they did 100 years ago?

          Consider this:
          1. Life expectancy is near double what it was 100 years ago in the “developed” world.
          2. I can start a washer or drier and walk away, letting it do the work while I do something else.
          3. You have shown through your replies, and stated just as bluntly, that you cannot imagine a world different from the one you live in. That is the primary foundation for all of your arguments. It is not a good foundation. 100 years ago, no one had likely imagined having a device a little larger than a pack of cards that could access the sum of human knowledge in the course of a few seconds.

          A Cro-Magnon man would not have had the foundational knowledge to understand our world. We start out with a data dump as well (kindergarten through high school): generalized topics to provide a basis of understanding. It is our software that allows us to manipulate that information in unpredictable ways. Software can be replicated, and the first AI may be an offshoot of downloading a human’s mind into some technology. We can only wait and see.

  17. Arthur C Clarke: http://www.wimp.com/predictingfuture/

    “If by some miracle some prophet could describe the future exactly as it was going to take place, his predictions would sound so absurd, so far-fetched that everyone would laugh him to scorn.”

  18. Machines will be more than capable of writing books soon. But we won’t be reading them.

    We’ll be too busy screaming and running away from the swirling drones, reaperbots, and nanoswarms.

    The only thing we’ll be reading is humanity’s dwindling population-score, projected on videogame-style wallscreens mounted everywhere.

  19. Good post. Agree with everything except…

    Cars will NOT be driving themselves in 10 years. This is a lot like the flying car idea. The technology is possible, but the logistics are insane. The slow-moving behemoths that are car companies alone will delay this longer than that, but think of the infrastructure that needs to exist for self-driving cars. Think of all the special cases, like pulling over for a police car or emergency vehicle, or stop lights not working, or flooded intersections, or detours, or school speed zones, etc.

    So far from completely removing human judgment from driving.

    1. I think you are disregarding the timeline for this to take place. 10 years is a _very_ long time for technology. The prototype cars are already handling exception scenarios like this with ease (stoplights, pedestrians, construction work, and bicyclists) and the new Tesla that Hugh mentioned above (which is significantly cooler than any flying car) is already handling the basics with a form of “smart” cruise control that can handle speed adjustments, lane boundaries and sudden stops.

      I won’t go so far as to say it’s inevitable that cars are driving themselves in 10 years, but it certainly seems likely.

      1. 10 years is a long time for technology, not for infrastructure. The flying car was conceived what, over 50 years ago? I could see limited-use systems, like highway driving, being computer-controlled, I just think the sci-fi fantasy we have in our heads is a long way off.

      2. If ten years is a very long time in technology, why haven’t we far surpassed the X-15? In fact, I don’t think we’ve even matched it.

        Don’t confuse cute toys (iPads, iPhones) with technology.

        1. And don’t compare apples and oranges.

          There are a host of reasons why your comparison is largely flawed, but to stick to the time issue you seem stuck on: don’t look at the time from the X-15’s record-breaking flight to now, but instead at the time from its initial inception to that flight.

          The span between the first jet flight and what the X-15 accomplished was only about 20 years. 20 years to go from jumping off the ground at a few hundred miles an hour to touching space at thousands of mph. And that’s with jet engines! Surely 10 years to go from now to self-driving cars doesn’t seem so impossible.

          The immediate future belongs to the “cute toys” you mention above, a technology whose limitations are still largely unknown. Not to experimental aircraft.

  20. James McCormick (J.E. Mac):

    This video isn’t so much about automation and which jobs can potentially be replaced. It’s about how the jobs that run our economy, the ones that make up the bulk of it, can and will replace humans with automation. It’s about what a future looks like when a large majority of the population is unemployed. It’s about how the economy shifts, or will have to shift. And what will that economy even look like?

    Whether or not machines can create art is somewhat irrelevant. And this is where we don’t take the same leap into the future, Hugh.

    In an economy of the future, the idea of capitalism is problematic. The very idea of money is flawed. Your time as a human being isn’t valuable. In fact, it’s a detriment. We’d see a radical shift in how the economy works at that point in time. (Or we’d see some massive, and probably pretty violent, revolution).

    An automated economy leaves the majority of the population with no manual tasks to do. The economy of the future becomes a pyramid scheme that has placed the bulk of humanity at the bottom. But humanity is the one making the rules, so we just change the economy to suit that bulk at the bottom.

    Money becomes non-existent. The idea of capitalism and commerce is gone. Automation is far too cheap and easy. Everyone, everywhere has food and clothing and shelter.

    With mechanization and automation of daily living and no need to do anything other than simply exist in day to day life, humans are left to follow their heart’s desire. Quite literally.

    The arts wouldn’t be monetized — because the economy no longer supports the idea that your time is worth anything. It’s not. Not when machines do anything you could do faster and more efficiently.

    There’d be absolutely no reason to monetize the arts, as the idea of capitalism would be eradicated, meaning there would be very little incentive to automate the arts either. Not to mention, most free time will likely be spent delving into and experiencing the art and creativity of others. In such an economy, allowing machines to do what’s left as the only thoughtful pastime for humans will likely be seen as blasphemous.

    The point here is that the economy of tomorrow, that supports automation, will be unable to support capitalism.

    1. I think you are overlooking the very complex and extremely fragile environment that is required to support such technology. That comment in parentheses is well taken.

  21. Kenneth Stevens:

    Perhaps computers someday will generate stories, paintings, songs, and so on that are better and far less expensive than anything that humans can create. One anticipates that machines will have similar effects on science, philosophy, and even politics. But something tells me that we will not enjoy these things much: Nietzsche’s prophecy finally will have come to pass, and our world will be that of the Last Man, that passive consumer who cares about nothing save warmth and comfort, and who speaks of happiness but blinks when he does so.

  22. To believe that machines and AI will one day entirely replace humans: isn’t it like believing machines are the next God? Isn’t it like replacing faith in God with faith in Science?

    I would be as skeptical in these spiritual matters with Science as with the Church.

    The video was very interesting, though. Science and technology are very good at asking us the question: are humans at the service of the economy, or is the economy at the service of humans? We could already have much more unemployment and many more machines, but our values don’t allow us to do that, because we know that if 45% of people were unemployed, it would become vital to begin sharing economic resources.

    And even that solution wouldn’t be perfect, because most humans have to bring meaning to their lives by working, and many of them still need to do manual work.

    Still, the principle of sharing resources is present on the Internet and, to some extent, in the self-publishing world. Who owns the machines, by the way? Do we want to be collaborative people or enslaved ones (except for the one human who would one day own all the machines ;) )?

  23. I think the point everyone is overlooking is this very subtle irony: Johnmonk was never a real person at all. He was always a SpamBot! (Cue Twilight Zone music).

  24. I doubt computers will be able to replace copyeditors altogether anytime soon, but that’s less because it isn’t possible with current technology and more because programming it is a far more complicated task than most people would expect. The location of a comma can change the entire meaning of a paragraph. A single word might be one of four parts of speech—any one of which would change the rest of the sentence. US vs UK vs Australian English all differ.

    And then there are details of grammar that are optional, or that depend on overall context, or that depend on overall style. (Some cases in point: Those commas in the previous sentence were optional, my capital “t” on “those” is style, and my quotation marks around the letter and word-as-word could have been italics, instead.)

    One of the difficulties is going to be restrictive/essential vs. non-restrictive/nonessential phrases and clauses. “The yellow house at the end of the lane” = yellow house that’s at the end of the lane, whereas “the yellow house, at the end of the lane” = yellow house—you know which one I’m talking about—which happens to be at the end of the lane. How is a computer to necessarily know which the author meant? Even an editor doesn’t always know which the author meant, without querying the author.
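
    That part-of-speech ambiguity is easy to see with today’s language toolkits. Here is a tiny sketch using NLTK; its tagger usually resolves “book” correctly from context, but the same ambiguity is why it sometimes won’t:

    ```python
    import nltk

    # May require one-time downloads of tokenizer and tagger data, e.g.:
    # nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

    sentences = [
        "I will book a flight to Sydney.",   # 'book' used as a verb
        "She lent me a book about whales.",  # 'book' used as a noun
    ]

    for s in sentences:
        tags = nltk.pos_tag(nltk.word_tokenize(s))
        print([(word, tag) for word, tag in tags if word.lower() == "book"])
    ```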

  25. […] Humans Need Not Apply | Hugh Howey […]

  26. I can understand robots writing news stories. They are basically a summary of facts. But I cannot conceive of them writing fiction. Fiction is concerned with the world of human emotions and interaction. I think that would be a more difficult challenge than learning facts to win trivia prizes or learning the logical merits of a whole universe of chess moves.

  27. […] while back, I blogged about the possibility that one day my job will be taken over by machines. I think it’s important for all of us to consider this possibility, whatever it is that we […]

  28. Computers can probably take over every facet of our economy except for the only one it will not function without: consumer demand. It’s hard to imagine computers wanting to buy novels, while it’s not that hard to imagine them writing novels. So either the economy of the future will figure out a way to allow humans to continue to act as consumers, in other words to maintain or increase our standard of living, or the economy of the future will not be a consumer economy, in which case all we really care about is how to survive. Either things continue to do what they have always done, i.e., get better on average for people, or we wind up fighting the machines for survival.

  29. […] in a few decades, robots will perform many of the functions of human beings, and there will surely be computers that write texts and translate them into any […]

  30. The content here is really good; I hope everyone comes to support it.

  31. Not sure you still read these comments, but the concepts proposed in this post may be closer to actualization than anticipated:

    http://www.cnn.com/2015/02/05/tech/mci-robojournalist/index.html

    Interesting article, hope you enjoy!

  32. […] novels at 2040. (Incidentally, this is the subject of a current WIP of mine, The Last Storyteller. It’s also something I blogged about at length last year.) For full-blown AI, I’m guessing we’ll see it around 2100, give or take 15 […]
