The Book that Writes Itself

We need a German word for “thinking you had an original idea and then realizing many other people not only already had that idea but are well on their way toward implementation.” I hope someone can get on this. I bet someone already has.

A while back, I blogged about the possibility that one day my job will be taken over by machines. I think it’s important for all of us to consider this possibility, whatever it is that we do, and however outlandish the idea seems by current technology standards. How else will we see it coming? There’s a reason hardly anyone sees it coming: people think their status is wholly unique until about three weeks after it isn’t.

Will machines ever write novels? That is, will novels ever write themselves? I believe if humans can stick around for another thousand years, it is inevitable. I’m also open to the chance (though skeptical) that some unforeseen advance in computing power or technology makes this possible in fifty years. Perhaps an actual quantum computer is constructed. Maybe in fifty years, a program like Watson gets more refined and has access to enough data and processing power that an emergent quality arises from what previously seemed wholly mechanical. That is, consciousness might flip on like a switch.

But how could this ever happen? How could computers ever learn to be creative? One of the answers to that question might be that creativity is more mechanical than we give it credit for being. This would be the Joseph Campbell school of thinking, where every protagonist is an archetype and every journey a hero’s journey. Or look at the work of Vladimir Propp, who detailed the 31 narrative functions to which he thought every folk tale could be reduced.

Game theory and evolutionary psychology hint at the way that the human mind is both universal and largely predictable. Cultural relativists have no good answer for why we can read Sir Gawain and the Green Knight today and have it resonate with us across such vast time and in so many translated languages. The vast majority of fictional stories that work do so by satisfying our expectations. My romance friends will tell you that it isn’t a romance without an “HEA” or “Happily Ever After.”

Saying a genre is predictable doesn’t diminish it; it simply says something incredibly poignant about who we are. Imagine a murder mystery without clues, or one where the gumshoe never solves the case. Can you do it? Sure, but at some point the avant garde becomes a self-masturbatory exercise in trumping tradition just for the sake of obstinacy. Mystery readers crack a book expecting the hero to nab the villain. The suspense is in how that happens, just as the suspense in romance is in how torn lovers will make their impossible union work.

In the formulaic there is the possibility of formula. In pondering how this might work, I drew on my expertise as a young writer who employed an advanced creativity enhancer known as Mad Libs. By supplying a few words into an existing framework, I created the unique with the help of some scaffolding. Perhaps the same could be done with fiction.

In some ways, we are already comfortable allowing computers to randomly assemble elements of our stories: we call these tools random name generators. It is often helpful to have a computer throw options in our faces rather than having to make up names ex nihilo, especially when we need a lot of names, or when an unimportant name would otherwise bog down our writing flow. Here’s a random name generator that allows you to pick the gender and how common or rare the name is. There are many out there.
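
The mechanics behind a tool like that can be surprisingly simple. Here is a minimal sketch in Python; the name pool and the random_name helper are invented for illustration, and a real generator would presumably weight its picks with name-frequency data rather than choosing uniformly:

    import random

    # A minimal sketch: a tiny hand-made name pool tagged by gender and rarity.
    # A real generator would weight picks using frequency data instead.
    NAMES = {
        ("female", "common"): ["Emma", "Olivia", "Sophia"],
        ("female", "rare"):   ["Isolde", "Zephyrine", "Maelis"],
        ("male", "common"):   ["James", "Liam", "Noah"],
        ("male", "rare"):     ["Caspian", "Thaddeus", "Orrin"],
    }

    def random_name(gender="female", rarity="common"):
        """Return a random name matching the requested gender and rarity."""
        return random.choice(NAMES[(gender, rarity)])

    print(random_name("male", "rare"))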

What if rather than names, we had plot elements thrown at us by a random story generator? If this feels like a violation of our creative sensibilities, then what are story prompts? What do we call it when a random newspaper article inspires us to write a piece of fiction? I can imagine a tool that works something like this (with the thing in brackets coming from a massive bucket of like-type options).

[Protag] desires [Thing].

[Obstacle] is encountered.

[Protag] meets [Friend].

Duo meets [Enemy].

[Item] is discovered. [Obstacle] overcome. [Enemy] defeated.

Could this tool write a story from beginning to end? No way. Could it create a prompt to get someone started? Absolutely. Just like a random name generator can give us a boost.
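
To make the bracketed idea concrete, here is a toy sketch in Python of how such a prompt generator might work. The buckets, the template, and the random_plot_prompt function are all invented for illustration; an actual tool would draw each slot from a far larger bucket of options:

    import random

    # Invented buckets of options for each bracketed slot in the template above.
    BUCKETS = {
        "Protag":   ["an exiled cartographer", "a retired smuggler", "a young healer"],
        "Thing":    ["a lost family heirloom", "passage across the sea", "a cure for the blight"],
        "Obstacle": ["a sealed border", "a vengeful rival", "a failing memory"],
        "Friend":   ["a talkative stowaway", "a disgraced knight", "a clever forger"],
        "Enemy":    ["a corrupt magistrate", "a cult of archivists", "a jealous sibling"],
        "Item":     ["a forged seal", "an old map", "a stolen key"],
    }

    TEMPLATE = ("{Protag} desires {Thing}. {Obstacle} is encountered. "
                "{Protag} meets {Friend}. The duo meets {Enemy}. "
                "{Item} is discovered, the obstacle is overcome, and the enemy is defeated.")

    def random_plot_prompt():
        """Fill every bracketed slot with a random pick from its bucket."""
        picks = {slot: random.choice(options) for slot, options in BUCKETS.items()}
        prompt = TEMPLATE.format(**picks)
        return prompt[0].upper() + prompt[1:]  # capitalize the opening word

    print(random_plot_prompt())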

Thinking this was an original idea, I then discovered that a group of coders have made considerable progress on just such a tool. Using Propp’s 31 functions, they have assembled a random-plot generator known as the “Bard.” It is extremely crude, but the idea is to create plot prompts and ideas for low-intensity, high-volume situations, like video games, where a team might need to come up with thousands of side quests. Just as we might not subject our protagonist to randomness, a video game studio might not yet trust a tool like this with the main plot. But if these programs improve, side quests would certainly qualify.

I put many hours into a video game back in the mid-90s known as Daggerfall, which had randomly generated quests. You met random NPCs, who wanted a random item, which was guarded by randomly generated enemies in a randomly generated dungeon (no two were ever the same). It was a much-touted advance at the time. It was buggy, but we loved it. A program was telling us stories. Many who played the game probably had no idea this was even taking place.

Sports lovers are in the same position today. They have probably read a synopsis of a game that was written by a computer program. The Big Ten Network has been using the technology since 2010. These programs are getting better, and computers are getting faster and faster, while their repository of data and examples grows exponentially. This is like neurons piling up. At some point, the level of complexity creates a new sort of output.
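
The rough idea behind those automated recaps is data-to-text generation: feed structured game data into language templates. Here is a toy sketch in Python with an invented box score and a hypothetical recap function; real systems are far more sophisticated, and nothing here describes any particular company’s software:

    # Turn a (made-up) box score into a one-sentence recap by picking a verb
    # based on the margin of victory and slotting the data into a template.
    def recap(game):
        home, away = game["home"], game["away"]
        home_pts, away_pts = game["home_score"], game["away_score"]
        winner, loser = (home, away) if home_pts > away_pts else (away, home)
        high, low = max(home_pts, away_pts), min(home_pts, away_pts)
        margin = high - low
        verb = "edged" if margin <= 3 else "beat" if margin <= 14 else "routed"
        return (f"{winner} {verb} {loser} {high}-{low} on {game['date']}, "
                f"led by {game['top_player']} with {game['top_stat']}.")

    print(recap({
        "home": "Wolverines", "away": "Spartans",
        "home_score": 27, "away_score": 24,
        "date": "Saturday", "top_player": "J. Smith",
        "top_stat": "312 passing yards",
    }))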

This same piling up of data and processing power is how computers obviated the need for legal discovery teams: by learning to “read” millions of legal documents to prepare for trial. This is not a simple search function. These programs use something like an understanding of language to draw inferences and find precedents that teams of people would miss, at a fraction of the cost and time.

Let me step back a moment and say here that humans will never stop writing stories and telling stories. It’s in our DNA. But that doesn’t mean consuming stories written by other humans is in our DNA. If we can’t tell human-written and machine-written stories apart, how do we stick to our principles as consumers? We won’t be able to.

In one of his dozen championship games against Deep Blue, Kasparov said he saw something that felt like true creativity in his opponent. He even intimated at one point that the IBM team was cheating by employing a grandmaster to take over the machine. He may have been one of the first people to gaze into the eyes of infant AI, feeling that sense of wonder at a small hand closing around his finger in response to being touched. Ken Jennings may have been the next to feel this. Make no mistake: what Watson did in mastering Jeopardy wasn’t just astounding; some informed people considered it beyond the realm of possibility right up until the day it happened. Also keep in mind that chess was once considered one of the highest expressions of our creativity. Everyone seems to forget this. Once the future arrived, we rewrote the past to make it seem inevitable.

Now imagine a tomorrow where people-written books and computer-written books sit on the same shelf. Both sets have authors’ names on them. Both please some readers and not others. The prices are the same, so we can’t distinguish that way. But the computers can write billions of stories a year, while humans can only muster a million. What happens then? The odds are against us. And if you think we’ll be able to “badge” human stories, you might want to hear that master-level chess players have been caught employing computers during tournaments. Similar cheating will take place in this future. Like the sports fan, you’ll read what you think was human-generated, but some of it will be from a computer. Who can guess which of my character names, if any, came from a machine?

There are many “first steps” in this process. In the world of coding, which has much in common with writing fiction if you subscribe to the Campbell and Propp view of storytelling, DARPA is working on a system that will automate programming from very few inputs. Automatic novel generation will happen when random plot generators are married to just this sort of text expander. “Fist fight” results in a gripping blow-by-blow action scene among the story’s characters. “Love scene” gives us a tender moment of appropriate length and reading age. Is it in the realm of science fiction today? Absolutely. But someone is already working on it.


47 responses to “The Book that Writes Itself”

  1. This will work great if computers start buying novels and stories to enjoy in their leisure time. But which humans are going to read those billions of stories and find which ones are best? We are going to have to come up with digital critics to read through that computer slush pile to pick out the best ones. But humans will still have to see if they like them or not. And who knows, computer tastes might be very different from our own. They may prefer erotic stories about wall plugs and the threat of lightning strikes to 50 Shades and chainsaw murder thrillers.

    Of course, the real issue is in the programming. Until we have genuine AI that can write its own programming, it’s human beings who are going to have to program these novel-writing machines. So it depends entirely on humans who can write the best formulas for writing novels. The machine just carries out those instructions. So there’s still an author at work, at the level of programming itself. And then there’s still the question of whether people will actually prefer these programmed novels to human-written ones. Lots of people already prefer garden-grown or organic produce to Big Ag Monsanto products. I would suspect there will always be at least a niche market for human story producers. Because, writing isn’t just about coming up with stories, it’s about the relationship between story-teller and reader. I’m not sure what kind of relationship a computer will have to readers of the future, but that does depend on what readers are like in the future also. Don’t presume that readers won’t evolve also, and that computers might not be able to catch up with them.

    1. Because, writing isn’t just about coming up with stories…

      Exactly. My favorite stories are my favorites because I feel like I have been traveling through the landscape of the writer’s psyche. The world she presents, the characters she creates, the moods and feelings she visits (and how), and the themes that undergird the entire creation breathe the essence of the author. I want far more than just an entertaining story. YMMV

  2. I’d actually love to work with the early novel-writing AIs – I’d expect there to be a collaborative, hybrid step before they start writing publishable works on their own, where a human writer runs their prose through a computer editor for suggestions on where to tighten up or expand the story, or a human editor takes the raw output of their computer and shapes it into a more compelling, more emotional tale. Eventually, I’d expect to see human-curated or machine-curated imprints, where readers know what sort of story they can expect from each “name” whether it’s a human or AI.

    @Broken Yogi, regarding the need for “digital critics,” as our computers advance to writing novels, our discovery algorithms will also advance. Amazon already does a decent job of recommending books I might enjoy. Pandora, drawing on the Music Genome Project, does an even better job of taking a seed song from me and creating a whole station of the sort of music I’m in the mood for. I then tailor it with feedback on the individual songs that play.

    There is also the social aspect of reading and writing. I just pre-ordered “Misty, the Proud Cloud” specifically because one of my favorite authors spoke about it on his blog. (I don’t see the post on the site, but it was in my Feedly feed.) Just like today, human authors will be able to connect with their audiences socially as well as through the text of their publications. (So will AI authors, and AI-human teams.) Just like today, some huge percentage of books will never sell more than a handful of copies – but I suspect that the small percentage of titles that are hugely successful will be a blend of those authored by humans and by AI, much like today the bestsellers are a mix of traditionally-published and self-published titles.

    1. That blog post was premature. It’s scheduled for tomorrow. My overseas trip has scrambled my calendar somehow. :)

      Hope you enjoy Misty. I’m really excited about this release.

      1. I actually ordered two copies – one for me and one for a friend with kids. :)

        Looking forward to it!

    2. Amazon’s critical recommendations come from analyzing the buying patterns of other humans and the genre classifications and overall critical assessment by those rating the books. They don’t come from computers actually “reading” the book and deciding on that basis if the book is any good. They can do this because of all those people reading and reviewing the books they sell. But if computers write billions of books, humans aren’t going to be able to do that job. Computers will have to take up the slack, critiquing other computers and what they have produced. And from that computer critique, humans will have to evaluate the narrowed-down “best” books, to see whether these are the sorts of things humans will want to read and buy. The better the computer evaluations get, the less humans will be needed to critique the work. But they still have to buy it, and read it. Unless, of course, a big part of the publishing economy is taken over by computers, who buy and read the work of other computers. At lightning speed. In fact, what might prove interesting is if computers prefer human-written books to their own products. You never know.

  3. While everything stated here is absolutely true, it’s important to remember that, at some future point, EVERY single job will be able to be performed by a machine, robot, or AI. EVERY single job will be displaced. Our capitalist economic system, in which the vast majority of people earn a living through work, will crumble far before we even reach this point—in the (possibly) near future where so many jobs get replaced by machines that the system is no longer viable. So, for those who fret over the fact that writers will be replaced by machines, take heart in the fact that artistic endeavors will probably be some of the last human activities to be fully rendered obsolete.

    1. As jobs are replaced, we keep making up new ones. There isn’t a finite amount of work to be done. Over half the labor force was once needed to feed the rest of the population. Now it’s less than 5%. All those displaced workers found something else to do.

      Imagine how much industry could be consumed by settling the Moon or Mars. Or building ships to take us to other stars. None of these bigger things get done when we cling to manufacturing jobs here on Earth as somehow crucial to our economy. They aren’t. They’re holding us back. Let’s automate as much as we can and move onward and upward.

      The alternative is to dismantle our tractors and give more people jobs breaking rocks and ploughing fields. That’s the equivalent of what we do today by having people drive cars for a living. The idea that this is the only thing these people are capable of doing holds us back, I believe.

    2. I’m not worried. Anyone who’s read the Dune series knows that when AIs take over the world, it only takes one curious robot killing a child for the sake of “science” and learning what its mother’s reaction will be to start “The Butlerian Jihad” (my fav Dune book), that is, the human revolt against all thinking machines.

      That, or a few minor nuclear explosions in important places can take out ALL electronics that aren’t shielded within the XXX mile radius via EMP (electromagnetic pulse).

      If either of those happen, it’ll be back to plowing the fields with our horses and gas tractors and writing on paper and snail-mailing manuscripts for at least ten years after. The universe tends to balance nature and powers.

  4. Robert Heinlein had an author character named Jubal Harshaw who did just that in Stranger in a Strange Land. Jubal would just toss out premises to his computer and have it handle the actual writing for him. A writer’s dream for sure. :)

    In games, we call the way that many people come up with the same idea at once “parallel development.” We’re all swimming in the same cultural stew and clinging to the same touchstones. With so many of us able to create and bring those creations to others these days, it’s not surprising to see this phenomenon popping up faster than ever.

    1. Many of the great scientific ideas we attribute to a particular scientist were being thought of by someone else in the field at a similar time. Darwin literally had to race to get On the Origin of Species out before the competition. This was probably less the case in the ancient world when ideas were shared less widely.

    2. I had an idea for a phone with an e-ink screen on the back for ereading, then last week I saw that a Russian company already made one and is releasing a better one shortly.

  5. The chess metaphor also suggests what I think is the more likely outcome (in the technological short-to-medium term). The very best chess programs can beat the very best chess players. But it’s recently become clear that a pretty good chess program plus a pretty good chess player can handily beat both. The future isn’t “Terminator,” it’s “Ghost in the Shell.” The cyborgs will rout the robots every time.

    1. One of my favorite reads of 2014 was SMARTER THAN YOU THINK, which is about this very thing.

    2. I wouldn’t count on this being the case forever. At some point, the human brain as it currently is will become the bottleneck in a combined system with inorganic computers.

  6. I’ve got to disagree with this post, at least, in theory. Yes, Mad Libs were fun, and yes, technology will continue to displace human workers, but people don’t buy and actively read other people’s Mad Libs stories. If an AI were to become self aware and capable of writing a novel from its heart, I would read that novel. But if we’re talking about a randomly generated plot with other randomly generated variables, count me out, regardless of the number of variables. Yes, there are only so many plots and so many types of stories, but these are the equivalent of musical notes. There are only so many notes, after all, but artists are still writing new songs. A book brings me a writer’s version of a story including their take on character and theme. I realize, as an author, I’m biased, but I believe books incorporate a bit of magic requiring artistry. That’s what keeps me reading and going to the movies. I’m aware Batman is going to triumph over evil, but I’m still going to read the next story because I want to see how he does it and what emotions the storyteller brings out of me through their deliberate choices which will reveal a meaning, intentional or not. I’ll stop reading before a computer can be developed that randomizes enough variables to make a story worth reading. But I’m sure authors will continue to use story generators as a springboard for their novels. Humans read to gain the experience and insight of other humans. Take that away, and I’m not sure there would be much point in producing new stories.

  7. Speaking on music, did you ever read/hear this story on computer generated music scores that sound just as good as human produced ones?
    http://www.radiolab.org/story/91515-musical-dna/

    It’s very much along these same lines. I believe a lot of this is also mentioned in The Second Machine Age (Erik Brynjolfsson and Andrew McAfee). Then, there’s the question of the continued lack of scarcity when all our economic understanding is built around scarcity. That’s already fraying and changing due to digital goods. It’s all really interesting stuff, but I think Hugh’s right – standards of living will rise and new jobs will be created. I’ve almost always had a job that didn’t exist 10 years prior to me holding it.

    1. Speaking *of* music…

  8. The general problem with this sort of SF analysis is that it presumes the potential of computers is unlimited, but the potential of human beings is very limited. I tend to think the opposite is the case. While it’s true we have barely begun to tap the potential of computers and other machinery, it’s also true that we have barely begun to tap the potential of human beings and their consciousness. I tend to think the potential of human consciousness is actually much greater than that of machine-driven computing. That suggests a very different kind of future than most SF writers are speculating about. But I guess I’m in the minority on that around here.

    1. Consciousness within the brain is entirely mechanical. People with lesions in their dorsal midbrain are prone to losing consciousness. This results in them passively reacting to their environment without awareness or falling into a coma, depending upon the extent of the damage. The leading thought on the matter is that some combination of internal model building (higher-order thought) in the cortex combines with the internal status monitored by the primitive brain (necessary for attending to survival) to create an awareness of self. Antonio Damasio has written and spoken quite extensively on the subject. He is most probably wrong on some points, because this is a very difficult subject to get a grasp on, but he is a good read.

      1. And what if the “leading thought on the matter” is entirely and foolishly wrong (and not actually the leading thought on the matter at all)? After all, they have no real idea how consciousness even comes into being, what interactions could possibly create such a field, and no evidence to back up the sorts of ideas you cite. And by consciousness I don’t just mean the machinery of perception and processing by the brain, but the field of awareness itself. That’s the “hard question” of consciousness that has all the scientists baffled. Why should atoms create this internal field of self-awareness, when the machinery of brain and body could do their feedback processing just fine without it?

        The other, far more likely explanation is that consciousness is fundamental to existence itself, the ground from which phenomena arise, including the phenomena of big bangs, quarks, atoms, bodies, brains, thoughts, and personal consciousness. If that is the case, and it probably is, we are not really understanding the basic process of our own life and awareness properly. If we did, we might be able to make use of its potential, in ways that might dwarf the power of mere machinery.

        1. Well said. Computers today are thousands of times as fast as they were 30 years ago, and yet no one has even achieved the level of consciousness of an insect, so it is all fantasy. If we can’t create something now that can think as well as a bug, it is silly to think we can create something that can outthink us.
          First show me a computer that can feed itself, protect itself, and make decisions even when it doesn’t have all the information; then I might believe someday it might be as smart as me.

        2. Consciousness is a mechanism. One that has been observed to fail. The scientific community could choose to move in the direction of handwavium-fed theory over evidence-based theory, but technology doesn’t work that way.

          With every great mystery that has been figured out, the storytellers who claimed something exists because it was created that way have been wrong. The notion that consciousness is a fundamental property of the universe is exactly the same thing. I promise you that matter, energy, space, and time don’t work that way.

          The current thinking on consciousness might be wrong, but consciousness isn’t magic either. And most certainly, what we think of as self awareness is not exhibited in most animals, and most certainly not in matter. The thing that distinguishes us from what has come before is that we drastically change our environments to suit us. We don’t change to suit the environment, we change it to suit us. That is a product of our consciousness of self and knowing that we have a future self to attend to. Matter doesn’t do that. Energy doesn’t either. Neither do other organisms. Consciousness of self is an unusual property that we have that isn’t intrinsic to the universe.

          1. I’m not talking about the merely mechanical notion of “self-awareness”, but the experiential dimension of it. That’s what distinguishes consciousness from a merely mechanical function. This is the consciousness that merely observes the functionality of the body, mind, and world. It’s a mystery because no one really has a clue as to why it should arise. Scientifically, there’s no purpose to it, because while there’s a purpose to the whole sensory-feedback and processing power of the brain, there’s no reason why some “consciousness” should have the experience of observing and feeling it. That consciousness doesn’t add anything in functional terms that wouldn’t already be there. And there’s no known mechanism for even detecting the existence of this consciousness, other than that we personally experience it. In fact, it’s not just something we experience, it’s who we are. Without it, we would literally “just be going through the motions”.

            And it’s not explained in any evolutionary sense either, in that even a higher primate like man could develop all the mechanical intelligence and functionality without the experience of observing and feeling all of that. If we really are simply biological machines, why and how does this experiential sense of personal awareness arise? How could we even confirm, apart from our own personal testimony, that it does arise? What test is there to show that some one, or even some animal or plant, is or is not internally experiencing conscious awareness? I don’t mean in the higher sense of human ability to recognize our image and respond accordingly. I mean even in the primitive sense of simply having an internal experience of existence as a body.

            It’s a very strange mystery in that it’s the most obvious thing about us, and yet it’s the least detectable thing about us by any objective measure. That’s pretty odd, don’t you think? And the notion that no one but humans has this internal experience of awareness seems pretty quickly contradicted by anyone who has a dog or cat, or really, just about any pet. Somehow, we can recognize the phenomena fairly easily. We just can’t say exactly what that is in any scientific sense. Maybe someday we will. And when we do, we may be quite surprised at the mechanisms involved.

          2. I’m with you that consciousness should be studied scientifically, but I wanted to point out that there are MANY organisms besides humans that change their environments to suit themselves.

            Look at the way beavers dam streams to create ponds that serve their purposes.

            Look at the birds which build nests out of materials that would not serve the purpose of shelter without being intentionally formed into shape.

            For that matter, I watched a documentary on tropical birds that showed a male bird of paradise carefully cleaning a patch of jungle floor to give himself a smooth, uniform dance floor. Other birds have been recorded digging insects out of their burrows with twigs – using tools.

            I’m not going to argue that rocks or atoms are conscious, but even within the human species there are different degrees of self-awareness, and if you expand the definition of consciousness beyond self-awareness to awareness and responsiveness, then everything alive could be argued to fall somewhere on a spectrum of consciousness. Part of the difficulty is in agreeing on a definition, as with so many things.

          3. Anthea,

            There is a distinction between us and what a beaver or an ant does. They are performing simple behaviors that are literally determined by evolution. They don’t meaningfully change their behavior outside of biological evolution. Humans have developed technology and changed behavior in a startling manner in 100 generations (amazing changes have happened in several generations), a trivial period of time in evolution. The difference between the two is awareness of self and the future. We plan, animals react.

          4. Yogi,

            Feeling is extremely functional and is a result of nervous and hormonal communication between the body and the mind. Emotion is a system by which mammals are made aware by their body that a certain resource is needed, and also a system for expressing these needs to their peers. Insects, fish and reptiles don’t outwardly manifest emotion, but they still have a motivational system for connecting satisfaction of needs to behavior.

            Within the primitive emotional brain there is operant conditioning. This is implicit conditioning, where emotion gets associated with objects, organisms or activities. It is a primitive system for linking behavior to satisfying the needs of the body. It is the basis of the connection between mind and body for all multicellular behaving organisms.

            Awareness of needs plus an internal modeling system capable of containing a representation of self and the world allows the very simplistic links between need and satisfaction of need to be represented in a more explicit context that brings awareness of other actors, self as an actor, past needs and future needs. This is roughly what consciousness is. Emotion tells the organism what it wants and the higher order thought processes make decisions to obtain what the emotion wants.

            We have been massively advantaged by our sense of self. It has allowed us to break the tyranny of generational learning by natural selection. When there is a change in the environment, we can adapt by a means other than a massive die-off, though we are still prone to death in the face of catastrophe. More importantly, we can do things in the present to prevent a future repetition of a catastrophe.

            Emotion is not uncommon, but it isn’t intrinsically consciousness. An animal that displays emotion doesn’t experience the same things we do. Most, when confronted with a reflection, can’t recognize that they are seeing themselves. Clearly, an organism that can see and can’t figure out that a reflection isn’t another animal is not conscious of self. They are only a reacting machine. Even those that can discern that they aren’t facing another animal don’t necessarily experience a full realization of self. I’m not as conversant with the experiments on the topic of animal consciousness as is necessary to draw the line where it belongs, but from what I remember of what I’ve read, very few animals display meaningful behavior that indicates awareness of self.

          5. Neither sensory nor emotional “feeling,” nor sensory and emotional processing in the brain, implies an experiential dimension that literally “feels” these things. If it is all just biomechanical action-response through biochemistry, what aspect of that creates this internal “field” of consciousness that I am experiencing right now? All the advantages that come with all of that complex biochemistry can be had in a purely mechanical way, without it creating some internal sense of experience as we all have. Even emotion in that sense is just a more complex set of cells in the brain doing their thing. And the body that has developed such physiology certainly has advantages. But how does any of that create an internal experience of consciousness? And why would it, since the responses and intelligence of the mechanism aren’t in any way affected by it. And what is that consciousness to begin with? How does the experiential dimension arise in the first place? What biology or neurology or physics gives rise to it? And how can we detect its existence? All we can detect are the activities of cells in the brain and body; we can’t detect consciousness itself. And yet, it’s the most obvious thing about us. And you aren’t even addressing the issue, just making claims that it’s obviously the body, because that’s what you’ve assumed from the start, but you have no evidence for it. You’re just concluding with the assumption you made in the first place. That’s not rational, and it’s not science either. Most scientists investigating consciousness are very much aware of this problem, and admit that they haven’t come close to addressing it, because they simply don’t know how. That’s why they call it the “hard problem” of consciousness.

  9. “Plotto” by William Wallace Cook does something much like you’ve described, and it was published in 1928. Many pulp fiction writers, including Erle Stanley Gardner, used it:

    http://www.amazon.com/dp/B005ZGL72Y/

  10. I agree that if humans refuse to engineer themselves, they will eventually be intellectually obsoleted by inorganic computing technology. However, once computers are faster than humans, I think it isn’t clear that the task of writing a book will be taken over by a quantum computer.

    Quantum computers are only powerful for optimization problems. They can, in one cycle, pick parameters (albeit with high uncertainty, requiring multiple samples) that correspond to the global maximum or minimum of an equation. This means that in a set number of cycles, they can optimize any equation, no matter how large the search space is. They are extremely powerful, but only for a limited set of problems, because they can exhaustively search for solutions only so long as you can define exactly what you’re looking for.

    One part of writing a story, at least for me, is that I don’t know what it will look like until I’ve gone through the process of producing it. Essentially, the parameters or the equation is unknown at the outset. Intuitively, there is some level of hierarchical complexity that will defy direct definition, creating a landscape that won’t be within the quantum superposition of the computer. Therefore, I suspect that some traditional traversal or sampling of the search space will be necessary.

    It could be that a quantum computer will be able to handle sub-tasks (like language selection) that end up being critical to the process of writing a story. It’s also probable that a quantum computer could be extremely useful in writing formulaic forms of story. However, I suspect that a lot of the processing in creating a story that isn’t formulaic will require conventional algorithmic computing.

    1. I’m afraid this view is naïve. First, prove it is possible to be “smarter” than a human mind. People look at how fast computers crunch numbers and get all kinds of goofy ideas. What *exactly* do you mean by “intellectually obsoleted by inorganic computing technology”? What will this inorganic computing technology be able to do intellectually that we can’t do as well or better now?

      The problem is that critical problems do not submit to a process-oriented solution, in the sense that faster is not better, and more is not better. It just means that inorganic computing technology might be able to reach a really stupid decision in a split-second instead of a human making that same stupid decision in a second. So what? If you can’t define what “smarter” is (which no one ever has), the whole exercise is silly.

      The point is, this whole question is fundamentally misguided, as it is fiddling with concepts that cannot even be defined yet, and for which no metrics exist. If you don’t believe me, read “Islands and Beaches” by Greg Dening. Then explain how inorganic computing technology is going to solve the problem described better than a human mind.

      BTW: We can get a benchmark of the computing capacity of the human mind by comparing it against all the processing power, supported by every hour of human development, ever expended on chess-playing programs. One of these programs played Kasparov to a draw.

      Yet playing chess is an extremely small proportion of what Kasparov’s mind does. Crunch the numbers and you’ll come up with an interesting result, especially considering that our minds are vastly underutilized.

  11. I have my doubts about this. Not for technical reasons, but for economic ones. The technological issues are more complex than you realize, but the economic problems seem insurmountable. We already have more stories than anyone can possibly read. We probably have more in the public domain than anyone can read. Supply is not the problem.

    I want to be able to tell my reading device something like “I want a story like Sherlock Holmes/Lovecraft/Borges”. Whether it writes a story or finds existing stories is irrelevant. It needs to learn what I mean by that (which might not be what other folks mean). I shouldn’t have to say “but without the explicit racism”. There is real money in something like that. And software development mostly follows the money.

  12. I’ve written a short story, Shopping, where a bio-mechanical AI creates a video game. The game becomes hugely successful and the company that conceived the AI earns a lot of money until…

    I have to stop here to avoid spoiling the story. ;)

  13. How do I know that Hugh C. Howey isn’t a machine?

  14. Cavemen had brains as big as ours, yet it still took thousands of years to develop more complex pathways to be able to figure out simple things like why the sun came up. I think even a computer a million times more complex than anything that exists today still might not be half smart enough to actually think. There also might be a finite size a computer brain can be, and that size might not be enough for AI.
    As for the claim that 50% of people used to farm to feed us and now only 5% do: there were only a third as many people back then, and all food came from farms, whereas now half our food is processed so it lasts longer, meaning a lot less waste.
    Also, back then no one was on food stamps; now 19% of the country is. Take away more jobs and that number will go up. There might not be a finite amount of work for humans, but there is a finite amount they will be paid. So let’s have robots do everything while we get minimum wage to do our new jobs…

  15. Believe it or not, someone already came up with such a random plot generator…about a hundred years ago. No computer needed, just a deck of tarot cards. If they can predict your fortune, which is the “story” of what’s supposedly going to happen to you, they can create a new story out of whole cloth.

    I’ve played with the technique, and it’s remarkable how easily stories can create themselves out of the suggestions on those cards. It’s like how you see shapes in the clouds—a hint here, a suggestion there, and your mind fills in the rest.

  16. […] The Book that Writes Itself | Hugh Howey […]

  17. […] Why? Because every job that feels completely crucial and irreplaceable tends not to be. Travel agents likely said that no one could sort the time tables, understand the various routes, know which carriers had which amenities, and could do the job as well as they do. The idea that computers will ever write fiction sounds bonkers to many writers, who view themselves as irreplaceable, and yet I can see how it might happen. […]

  18. >> We need a German word for “thinking you had an original idea and then realizing many other people not only already had that idea but are well on their way toward implementation.” I hope someone can get on this. I bet someone already has. <<

    You bet right, Hugh. The word is … "Fehleinschätzung". :)

    Hans

  19. […] The book that writes itself – a possibility. Meh. Just when I had my alien telepath enjoying calligraphy for the sake of it (Shan-leo, in case you’re wondering – yes, he’s grown up!). Sigh. But then, he doesn’t write fiction! ;) Now let me grab a few more generators links… just in case! They do work as writing prompts! :) […]

  20. […] The Book That Writes Itself In which Hugh Howey asks the question: when will machines start writing books? Don’t think it could happen? Think again. It’s an interesting exploration of the advancement of artificial intelligence and humanity’s future. […]

  21. Okay, I’m late to this party…

    The key thing to remember is that in regards to “Artificial Intelligence” in today’s technology, the most powerful computer that we have created to date is nowhere near as self-aware as an amoeba.

    (Not my words, read original on NPR here: http://goo.gl/Cv0sJ7)

    Watson’s victory came not from intelligent intuition, but from high-speed database scanning and statistical keyword-relevance analysis. An AI project to get a computer to create music could only move forward after the computer received input from crowdsourcing: humans telling it that a particular sound or piece was pleasant. Even then, it never quite got there.

    For a computer to spontaneously come up with an idea and then generate a story from that idea that is both emotionally and intellectually gripping requires a level of technology that the human race has not yet achieved. I’m pretty sure that we will not see that level of technology in our lifetimes, nor in our great-grandchildren’s lifetimes. Of course, I am a science fiction author, so I look to different possibilities with the hope that it could inspire someone to actually pursue that line of thought.

    I have notes for a story that looks at this concept. But rather than dealing with AI (Artificial Intelligence), it deals with AS (Artificial Sentience). The irony, of course, is that those notes reside on a hard drive in a dead computer that will remain unavailable until I can actually afford to replace the original computer…

  22. I always like AI discussions. What I like most about them is what they show about how people think about thinking. But here’s the only question I think really matters (and I welcome someone pointing me to a source in the AI research community on this): why do we hear so much about “artificial intelligence” and nothing (as far as I know) about “artificial emotion”?

    People treat intelligence and emotion as different. This is silly — they are identical. How can we talk about “AI” when we are subject to this kind of incoherence? We can’t even define in any meaningful way what “intelligence” is, vice what it “does”.

    BTW: Hugh is correct in that the human mind is an emergent phenomenon, but he’s wrong that it merely depends on connections and processing power. It’s the nature of the connections that matter — just connecting a bunch of stuff, however large, does not work — and that is the part we don’t yet understand. We know you can grow a mind; we have no idea how to engineer one.

    Last thought: writing novels by machine is meaningless. As has been pointed out, it merely translates the creative process from the novel writer to the code writer. When a computer can read the book that includes this line: “Though they were buried at opposite ends of the Earth, one dog would find them both”, and explain cogently and insightfully what it means, we’ll be getting somewhere. Not before.

  23. LOL, Hugh, that’s an intriguing idea. I can’t help you on the German, but if mystery author Erle Stanley Gardner were still around, he might agree with you about the novel-writing machine. While writing his Perry Mason novels (more than 80 of them), he devised “plot wheels” to save time. He had four wheels: the “Wheel of Complicating Circumstance,” the “Wheel of hostile minor characters who function in making complications for hero,” the “Wheel of blind trails by which the hero is mislead (sic) or confused,” and a “Solutions” wheel. Cool, eh?

    While I agree that a computer could certainly generate a decent plot, I find it difficult to imagine a computer able to generate an engaging voice.

    But maybe that’s just me, clinging to my human over-blown sense of importance in the universe. ;)

  24. Hi Hugh,

    Interesting idea. I was playing around with the idea that we already live in a simulation and that its design is to keep us eternal citizens from getting bored. The idea came to me when I watched a video in which it was explained that the quantum double-slit experiment was an example of the computer in operation. It would constantly check to see where the observer was looking and collapse particles to form expected results.

    Maybe we all live in a quantum penal colony. So, this idea is a good one but probably already done :)

  25. […] are betting on the bots.  Top-selling self-published novelist and blogger Hugh Howey thinks robot-written fiction is definitely coming… if not in our […]
