The Innovator’s Dilemma: Understanding Digital Disruption

Something Mike Shatzkin told me once has really stuck with me: “The people at the major publishing houses aren’t idiots.” In fact, I’m pretty sure Mike has told me this more than once, usually after I pointed out something I think publishers should try while wondering why they don’t. Mike could see that my advice rested on the assumption that publishing executives didn’t know what they were doing.

It turns out Mike was right and I was wrong. Publishing executives aren’t idiots.

Neither were executives at practically every company that has been disintermediated or made obsolete by innovation. The idea that they were is a common theme in business lore and among the general public, and it is dead wrong. Established companies don’t go under because they don’t understand their market, their customers, or their products. Nor do they go under due to managerial blunders, lack of R&D, or any of the other myriad reasons commonly proffered. In fact, companies lose market share and go under precisely because they are well-managed.

To understand how this works, you simply must read The Innovator’s Dilemma by Clayton M. Christensen. I’m not kidding. This book will blow your mind; you will never look at business transitions the same way ever again. If you have any interest in publishing (or how the world works in general), move this book to the top of your reading pile.

I can’t do the entire thesis justice, but I’ll entice you with a few of the lessons here. I’ll also say that this is one of the very few business books I’ve read that uses copious amounts of real-world examples. This isn’t guessing. This is hard-core theory in the best and most scientific use of that word. Christensen is even so bold as to make predictions in the form of case studies that are eerily prescient. I repeat: This book will blow your mind.

So how do companies go out of business if they are well-managed and can clearly see the disruptive technology heading their way? It helps to understand that the companies that get disrupted are usually among the earliest to be aware of the disrupting technology, and often the first to dabble in it or even patent it. Like Kodak’s work with CCD sensors (which led to digital cameras). Or publishers experimenting with digital books decades ago. The problems these companies face are several, but the biggest are these:

1) The disruptive technology is usually lower-margin while in its infancy, and so a “win” is not big enough to motivate managers and employees.

2) Existing customers and sales networks are not interested in the disrupting technology. They want improvements and a continuation of current products. Greater profits are to be made satisfying these demands rather than appealing to new markets (and again, getting employee buy-in is difficult, so the pressure not to pivot comes from without and within).

3) The initial market for the disrupting technology is never precisely what the innovator thinks it will be.

A great example offered in the book is Honda’s entry into the US market with the dirt bike. Honda’s attempt to sell a low-cost motorcycle to US customers was a failure from day one. Established dealers weren’t interested in the low-margin offering. Customers didn’t want smaller bikes for transportation. It wasn’t until Honda staff started riding the Super Cub off-road to let off steam that a different market was envisioned. The bike was retooled for off-road use and sold through sporting goods stores. When Harley tried to emulate the product, its existing customers and sales force balked. Once established, Honda began investing in “sustaining innovations” (refinements within the current market) and took share from incumbents like Harley by moving up to road and touring models.

The computer industry is full of these examples, and Christensen uses the hard drive manufacturing industry for many examples. He also uses the mechanical excavator industry. By looking at which companies failed and which succeeded, he is able to test his theories, and you’ll be amazed at how well they hold up. If you are interested in the publishing industry at all, you’ll see evidence of his theories everywhere you look (same for music, TV, social media, etc.). I’m telling you: read the book.

In the end, you’ll see that Christensen offers some very actionable advice drawn from hundreds of real-world successes and failures. The key to creating a culture that rewards small wins and lower profit margins, while still enjoying the support of a larger company, is to spin off divisions that are allowed to operate independently, even when they have the potential to disrupt the larger business as a whole. When companies set up the disruptive technology in-house, they almost always fail. When the disruptive technology is set up as a separate entity, it often succeeds brilliantly.

I’ve always admired Apple’s ability to make its entire product lines obsolete. Why wait for someone else to do it? It requires uncommon vision to manage this from within, usually through a very powerful, far-seeing individual or a culture that inspires disruption. Steve Jobs was an example of the former; Google’s culture of experimentation on company time is an example of the latter.

Consider for a moment that it was Amazon that practically made the ebook successful. Their business was in shipping things, initially books. Why would they dabble in ebooks and ereaders? I can easily imagine an IBM-like or Kodak-like alternate history in which Google or Apple invented the ebook marketplace and Amazon became the hero to book-lovers and the last bastion of hope for the print industry (much as B&N went from villain to hero). Instead, Amazon disrupted its own primary business. They are trying to do so again with subscription models, fan fiction, crowdsourced writing platforms and discovery tools, and much else. These divisions are set up to be largely autonomous, so that small wins are big deals for the divisions even as AWS and other projects make outsized profits. The cycle of experimentation and the willingness to fail are an asset. Possibly the company’s greatest asset.

Another fascinating point in this MUST-READ book is that established companies often make their greatest profits right before they go under. How can this be? The maturation of sustaining innovations and efficiencies reaches its peak just as disrupting innovations mature into the same marketplace. Think of the innovating technology as a line creeping upward but lagging behind the existing, sustaining technology:
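That picture can be sketched as a toy numerical model: both technologies improve steadily, while customers only need a fixed level of performance. All of the numbers below are illustrative assumptions of mine, not figures from Christensen’s book:

```python
# Christensen-style disruption chart as a toy model.
# Performance improves linearly for both technologies; the incumbent
# starts higher, the disruptor starts below but improves faster.
# Customers only need a fixed level of performance ("good enough").

CUSTOMER_DEMAND = 50                     # performance customers actually need
incumbent = lambda year: 60 + 3 * year   # already above demand, keeps climbing
disruptor = lambda year: 20 + 5 * year   # starts below, improves faster

for year in range(15):
    if disruptor(year) >= CUSTOMER_DEMAND:
        print(f"Year {year}: disruptor is good enough "
              f"(disruptor={disruptor(year)}, incumbent={incumbent(year)})")
        break
```

The year the disruptor’s line crosses the demand threshold, long after the incumbent has overshot it, is the moment Christensen argues it’s too late for the established player to pivot.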


That upward trajectory of both lines represents a mix of benefits: price, convenience, reliability, and so on. At first, the disrupting technology does not offer a high enough mix of these features to appeal to existing customers. It’s left to early adopters, who can stomach the higher price, lower quality, inconvenience, unreliability, etc. But at some point, the existing technology improves well past what customers demand. That is, the storage capacity is more than needed, or the quality is much higher than what is required. By the time the disrupting technology, formerly suitable only for a smaller customer base with different needs, enters the mainstream marketplace, it’s already too late for established companies to pivot.

What gets in their way most of all? The existing cost of doing business. Bloat. The need for those higher profit margins. The disrupting companies are small and nimble, and they can thrive at 30% margins where the established company needed 50% margins. Overhead gets in the way, as does the existing culture within the company, which looks for wins of a certain size and serves customers whose needs have progressed to the top of what the market demands, while all the meat at the bottom of those demand tolerances is being gobbled up. Music purists will say that digital song files are inferior to analog, but customers decided they were plenty good enough. Storage, convenience, immediacy, selection, and price trumped other factors like quality, physicality, and curated local stores.

In a blog post last year, I posited what I would do if I had to run a major publishing house. One of the suggestions I made, and one I’ve harped on over and over since, is the need for a major publisher to close shop in NYC and move to more affordable real estate. Reading Christensen’s book, I’m convinced that this is the only way one of the Big 5 can thrive in the publishing world ten or fifteen years hence. The hardest part of dealing with disruptive innovations is restructuring to subsist on different margins. Most companies make the mistake of trying to achieve this through layoffs, mergers, and a gradual reduction in size. Layoffs pare payroll. Mergers like that between Penguin and Random House increase efficiency by jettisoning redundancies within the merged entity. Reductions in size come from closing imprints (or, in the case of retailers like Borders and B&N, shuttering store locations). The first and last of these create a spiral toward irrelevancy or bankruptcy. Mergers simply delay the need for one of the other two options (mergers are really a combination of the other two, as they often lead to an overall loss of positions).

So why is moving out of NYC the answer for one of the publishers? Lower costs would allow it to out-compete both for suppliers (authors and agents) and for customers (readers). Without offices in midtown Manhattan and along the Thames in London, a rogue publisher would be able to pay higher royalties, out-competing for the juiciest manuscripts, while also offering lower prices, out-competing for market share. In fact, this is precisely what Amazon has been able to do. They pay their imprint authors 35% of gross, roughly double what the New York publishers pay. And they pay their self-published suppliers 70% of gross, nearly six times what New York publishers pay. At the same time, they offer their products for a lower price, taking crucial market share.
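Those royalty multiples can be sanity-checked with quick arithmetic. The traditional-publisher baselines below (17.5% of gross for ebooks, 12.5% for print) are my own illustrative assumptions, not figures from this post:

```python
# Rough royalty comparison: Amazon vs. traditional NY publishers.
# Baselines are assumptions: ~25% of net receipts on ebooks works out
# to roughly 17.5% of gross; a typical print royalty is ~12.5% of list.
ny_ebook_royalty = 0.175   # assumed traditional ebook royalty (share of gross)
ny_print_royalty = 0.125   # assumed traditional print royalty (share of gross)

amazon_imprint = 0.35      # Amazon imprint authors, per the post
amazon_self_pub = 0.70     # self-published rate, per the post

print(f"Imprint multiple:  {amazon_imprint / ny_ebook_royalty:.1f}x")
print(f"Self-pub multiple: {amazon_self_pub / ny_print_royalty:.1f}x")
```

With those baselines, 35% is exactly double and 70% comes out to 5.6x, in line with the “nearly six times” claim.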

This is the pattern for the dozens of examples offered in Clayton Christensen’s marvelous book. You can see how it played out in other forms of entertainment, and how it is playing out right now in the publishing industry. You can also see in his work how difficult it is for companies to pivot to new marketplaces and customer bases without upsetting existing relationships, both outside the company and within. Getting editorial to move toward erotica or new adult and away from literary requires an entire cultural shift, which isn’t easy (or even advisable in many cases). Greater success can be had by spinning off a new division, setting it up elsewhere so that small wins feel like big victories, and letting it gradually eat at your own market share. It’s the courage to create the iPhone, which will kill iPod sales, or the iPad, which will hurt laptop sales. Powerful CEOs with singular vision can do this with great effort. Spin-offs and skunkworks can do it far more easily and have done so successfully far more often.

Having finished the book a few days ago and considered the implications of Christensen’s theories, I’m willing to make some predictions. Predicting the future, of course, is idiocy. It’s not possible. But the power of a theory comes from its ability to offer prediction and reproducibility, not its strength in describing what has already happened. So my predictions are these, with the idea of revisiting them on my deathbed to see if I learned anything:

1) Low-cost reading entertainment will trump high-cost, even if the former comes in a less-adored package and with lower fidelity. That is, even if most readers “prefer” paper books and even if the low-cost items contain more mistakes or typos.

2) Print manufacturing and sales distribution will be destroyed by digital. This might seem obvious to some, but many industry experts are claiming that ebook growth has leveled off. I think ebooks will become 95% of fiction sales and 75% of non-fiction within fifteen years.

3) The first company to go all-in with mass market paperbacks will control most of future print market share. Grabbing 90% of what will eventually be print’s roughly 20% of the market might not sound like much, which is why no one is going after it now. But the low-margin option that customers want but that established industry leaders see no profit in is the basic definition of a disruptive technology. The first print-on-demand machine that can handle the thin paper and trim sizes needed for mass market POD will make a lot of money.

4) Physical bookstores will become much smaller and more specialized, or will simply move inside other retail spaces.

5) Most damning for my profession: Reading options will decouple from what we think of as “books,” as most reading will be done communally or on streaming websites or serially. I give this one another fifteen years before it’s obvious. Say 2030. What we think of as a “book” will be the vast minority of what’s read. Works will become shorter in individual package but longer in overall scope. Think seasons of TV.

6) Because of (5), authors will make far less per “read” in the future, as there will always be more people creating than buying (and this will only become more true as time goes on). The same forces that are allowing self-publishing to disrupt major publishers will eventually allow hobbyist writers to take vast market share from self-published authors. Tools that automate copyediting will improve until this is done for free and with the click of a button rather than hiring out editors. IBM’s Watson technology makes this practically feasible today. Also, the packaging will become automated, both with perfect digital file formatting and auto-generated cover art. Again, both will be free and as easy as clicking a button. This will greatly expand the pool of potential authors, forcing out many who are disruptive forces today simply because they are able to tackle these obstacles themselves or are willing to hire others to do so.

7) In fifty to one hundred years, authors themselves will be obsolete. No one will believe this today, but it’ll probably happen sooner than I’m giving it credit for. Already, computers are writing columns for the sports pages, and readers don’t know that’s what they are reading. At some point in the future, books will be written in a few seconds, tailored to each reader based on what they enjoyed in the past and what people with similar tastes enjoyed. E-readers will measure biometrics during the reading process to refine future works. Books will be infinitely diverse, but there will be a cultural clash over whether or not the empathy-building advantage of fiction is lost in a world of such catered entertainment. Human writers will be esoteric and admired by those who consider themselves to have the highest tastes. But it won’t change consumption habits; readers will gobble up stories written just for them. And if the pulse fails to race, a beloved character just might meet their end. We human authors won’t have to guess who to kill and when. It’ll happen when you very least expect it.

Don’t let the wackiness of these last predictions dissuade you from diving into The Innovator’s Dilemma. It’s a fantastic work. I’ve never put this much time into a blog post about a book recommendation. I’m telling you, this is a work that will explain so much of what’s going on around us. You’ll love it.

51 responses to “The Innovator’s Dilemma: Understanding Digital Disruption”

  1. Apparently this book can be read for free, legitimately. Or at least, I assume it’s legitimate; the site hosting it is kind of high profile to be a pirate site.

  2. 1-4 I get, but 5-7 I don’t think so. Always hard to predict the future.

    5 in particular seems to be moving in the opposite direction. Rather than consuming TV series in small weekly bites, more and more people seem to be choosing immersive consumption, binge-watching entire series, with Amazon and Netflix releasing whole seasons at once rather than in weekly installments. With books, this immersive experience is one of the most attractive things about reading: finishing a book, often in a few days, rather than stretching it out over a long time.

    As for AI, who really knows, but I tend to think that it’s overimagined. But then again, James Patterson is already taking formula writing to new levels. That may be the model – authors directing computers to write what they’ve outlined.

  3. Interestingly, a counterexample is taking place in the electric car industry.

    In a previous article that mentioned The Innovator’s Dilemma, it was predicted that disruption of the automobile industry would come from the advancing capabilities of community electric carts (the golf carts commonly used in resort communities). They were vastly cheaper and much more limited than the normal cars they competed with, but they had advantages (low cost, nearly free “fuel,” little noise, etc.) that satisfied users’ demands. It was predicted that these cheap vehicles would improve until they challenged traditional cars.

    Instead, what is happening? Traditional automakers are losing in competition at the TOP of the market, with Tesla’s cars succeeding at the high end rather than the low end. And Tesla is increasing sales by IMPROVING the product and making it progressively cheaper.

    While The Innovator’s Dilemma describes a way that dominant industries can be disrupted, it doesn’t describe the ONLY way that can take place. It just makes the job of incumbent businesses even more difficult.

    1. The notion that disruption always comes from underneath is definitely not a given. However, Tesla has disrupted exactly no one yet. They’re a player in luxury automobiles, but it’s actually easier to break into the luxury market than the volume market. The volume market has the big factories and large development costs, because it takes a lot more effort to engineer a car that is reliable and can be built cheaply. Instead, Tesla went after the high end market where margins for their competitors are high while taking low margins for themselves.

      Tesla has picked a novel way to enter the car market and will definitely prove to be an informative case study if they manage to actually be disruptive. Right now, they are only a large market cap with trivial income.

  4. Hugh, I’d like to see your reasoning (or wild speculation…) on how you get #5 and #7. I’m aware of one VERY long-running SF serial, the Perry Rhodan world, but that is rather sui generis. A kind of German telenovela ;-) I also feel no strong inclination to track down the latest volume, since it will no doubt still be running thirty years from now and I’ll still be able to follow the storyline.

    As for the computer-assembled books, my *vast* respect for my sports-loving colleagues prevents me from speculating on why sports columns could be assembled by sub-AI computers—but as a computer geek, I think you would need an actual AI, bathed in the finest whiskey, to come up with lines like “His crookedness was such he could hide at will behind a spiral staircase.”

    1. Today’s magic is tomorrow’s tech.

      We won’t need to get to full-blown AI for novel-writing computers to become a reality. I see it working something like this:

      Hundreds of novels are extensively “marked up” by readers. So parts of sentences are tagged, but also elements of plot and structure. Programs absorb all of these marked up novels and learn general principles. (I saw how this works with image recognition today, and the results are as baffling and amazing as the founding elements are simple and banal. Computers can now “recognize” what is a cat and what is a horse just by analyzing each individual pixel, one at a time. Before long, they’ll be able to do the same with individual words).

      The earliest examples will be slight alterations of existing material. That is, a human author will write a “seed book,” and a computer will be able to modify elements of it to match readers. For instance, geography and place-names will match where the reader lives. All the events in the book will take place in each reader’s home town, with street names and descriptions that match what people see in real life. The protagonist’s gender can be switched with the press of a button, with all pronouns tweaked to match. Or relationships can be made same-sex, offering more diverse reading for LGBTQ readers. Names can match the reader’s ethnicity or preference.
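      The pronoun-tweaking step is mechanically simple even today. A toy sketch (the word list is incomplete and one-directional on purpose, since reversing it would need real grammatical analysis to disambiguate words like “her”):

```python
import re

# Toy male-to-female pronoun swap for a "seed book" passage.
# One direction only: reversing it is harder, since "her" can mean
# either "him" or "his" depending on grammar.
SWAPS = {"he": "she", "him": "her", "his": "her", "himself": "herself"}

def swap_pronouns(text: str) -> str:
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        # Preserve capitalization at sentence starts.
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

print(swap_pronouns("He grabbed his coat and told himself to hurry."))
# → She grabbed her coat and told herself to hurry.
```

      Real prose would of course need far more than word substitution, but this is the flavor of the “press of a button” change described above.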

      Another use of early computer-created fiction will be perfect translations. The advances happening here are amazing and destined to compound. Translators and foreign publishers will be creamed, but readers all over the world and writers will benefit. Every book will be available in all languages, and computers will play a large role in “writing” these books.

      Eventually, these advances will lead to computers that are able to conduct individual scenes. There are already infant AIs that can converse with humans (a quasi-win went to an AI this year in the annual Turing Test). So dialog will get better. And then someone will program a computer to write an infinite array of bar fights. That one plug-in will join thousands of others. And then books will be written with the guidance of humans, who create the outlines of plot while computers fill in the details (a maturation of what James Patterson does with his ghost writers today). Eventually, even that ability will be automated.
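      At its crudest, a “bar fight plug-in” is just templated generation with a random seed. A toy version (every name and phrase below is my own invention, purely illustrative):

```python
import random

# Crude template-based scene generator, the kind of "plug-in"
# imagined above. Word lists are illustrative placeholders.
AGGRESSORS = ["a drunk sailor", "the bartender", "a stranger in a duster"]
ACTIONS = ["hurled a bottle at", "swung wildly at", "upended a table onto"]
TARGETS = ["the piano player", "the man by the door", "our hero"]

def bar_fight(seed: int) -> str:
    # Seeded RNG, so each reader gets a stable, repeatable scene.
    rng = random.Random(seed)
    return (f"{rng.choice(AGGRESSORS).capitalize()} "
            f"{rng.choice(ACTIONS)} {rng.choice(TARGETS)}.")

print(bar_fight(42))
```

      Thousands of such plug-ins, each far more sophisticated than this, is roughly what the comment is imagining: humans outline the plot, computers fill in the scenes.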

      If you doubt that computers can have a role in art, look at how digital painting has evolved. It starts with digitizing analog art, which is then filtered or modified to look different. Then photographs are manipulated to look like original art, which some artists are now doing for profit, passing off the entire affair as if hand-drawn. Music and fine arts are already computer-created and enjoyed. The same will be true for literature one day.

      1. We’re on the way to this, but as we get closer, the holes in our ability to program computers to do sophisticated tasks become larger. Case in point: Watson could not tell the difference between Harry Potter and Voldemort. This tells us that the associative hierarchy is way too shallow to produce emergent characteristics.

        The deep hierarchy of our brains took hundreds of millions of years and an untold number of failed experiments to develop. It is likely that we will get to the point where there is far more processing power in computers than human brains, but they will probably only do relatively dumb tasks extremely fast. It could take a very long time for us to catch up on the coding side of things.

        In other words, we will definitely have computers as tools to produce books, movies, games, or whatever. They might excel in highly formulaic forms like romance and mystery, but they will probably be dumb tools that need a lot of help to produce something complex and interesting.

      2. I don’t imagine any of us will still be around in 100 years, but there are the grandkids to worry about. And my biggest concern would not be that computers could write novels, but that computers could do EVERYTHING. Need a job? Sorry. Only computers need apply.

        But wait–humans could solve that problem by blocking the sun so the computers can’t get their solar power. But then, what if they started feeding off of the humans?

        Neo? Neo, come save us.

        1. Show me the people who are eager to have jobs. I think our goal should be the elimination of all jobs. 100% unemployment. People can spend their time working, doing whatever they like, but all basic needs and infrastructure are automated.

          1. Actually, Hugh, the answer is that whether or not anyone is eager to have a job, everyone needs one. Depression and related disorders are spiraling exactly because of an increasing dependence on gadgets to perform basic cognitive functions. This is simply because an unengaged mind running “open-loop” becomes rather dysfunctional. It’s not about people doing what they “like” — it is about people doing what is meaningful. An unengaged mind is not capable of deciding what it “likes,” nor can it understand or perform what is meaningful. Call it the “Curse of Leisure.” A healthy mind requires positive stress (as opposed to self-induced negative stress). What you are describing leads inevitably to enervation, depression, listlessness, dehumanization, and demographic suicide.

            You may be right that there will be no authors in 100 years. Just as today’s average 25-30-year-old (in the US) lacks the basic maturity, capacity to reason, and real-world skills of an average 16-18-year-old in the mid-’70s, and is far, far behind what people in the 1930s and ’40s could accomplish. This is due to exactly the factors you are talking about. So if the trend were to continue, the people in 100 years will be literal infants in old bodies, unable to think or care for themselves, reproducing mechanically.

            Now, I doubt that will happen on a global scale. There are still places where people are not gadget-addled. So the culture you describe, with the role of technology you describe, will not happen, because it is inherently self-limiting. Vigorous societies will destroy much of it and absorb elements of what is left. My guess is that the crisis will come sometime between 2050 and 2100. It will probably start first in Europe, with serious cracks appearing around 2030-40. It won’t be much fun.

            By the way: if you look at the actual science behind AI, you will see that it too is self-limiting. There is no such thing as “AI.” There is intelligence and there are rule-based “expert systems,” and they are not the same thing, no matter how sophisticated the latter appear. There are things they simply cannot do. In about 20 years, people will begin to figure this out. At some point (if we survive the intervening decades) people will look back at this age and laugh about those “morons” who believed in AI.

            So be careful what you wish for.

          2. This reply is hierarchically to Hugh, but directed at Owen.

            I guess you’ve never heard of the second career. Google is your friend if you want to catch up on what people who have financial independence are doing with their time. Making a paycheck does not equal meaning for quite a few people. They would rather spend their time helping, teaching, or creating to make the world better. Hugh was rather brief in his post, but this is almost certainly the way he was looking at it. How much better would things be if we could give children more mentoring and individualized instruction during their education?

            Your suggestion that we’re getting dumber is so Socratic it’s hilarious. You should look that one up if you need to catch up as well. Millennials definitely have a tendency to handle negative feedback poorly because their parents were inclined to shield them from negative results, but that isn’t a function of technology. It’s a reaction to how those parents were raised. They know from experience how much you can damage a child with only negative feedback, so they reflexively parented the other way. Unfortunately, parents have a hard time being even-keeled about how to handle their children and tend toward the fringes of too positive or too negative. Further, if we’re detecting more depression and anxiety disorders, it’s because we’re no longer so busy sweeping them under the rug. Also, needy over-supported children complain about what is wrong with them more than self-hating over-criticized children do.

            I can only guess that you haven’t worked on a modern, large-scale technical project. If you had, you would not talk about how amazing the accomplishments of the ’30s and ’40s were. The Manhattan Project had roughly 13k scientists and engineers involved. Companies like Google, Facebook, Qualcomm, Intel, and many others have multiple tens of thousands of engineers. Most run more complex operations than the Manhattan Project. Intel spends half a Manhattan Project (inflation adjusted) a year on building fabs. Also, a pack of those useless 25-30-year-olds has scaled a computing system to service billions of users in a large-scale connected graph. The previous generation (X) created a similarly challenging project at Google.

            Your “real world” is going away. Abstract reasoning is far more important when you need to program a machine to build or maintain something rather than get a crew together to do so manually. Automation is a different kind of difficult, and it doesn’t require the same skills needed in the past. Young people today are being prepared to live in a world different from the past, and so far the changes have been perfectly fine.

            If you think expert systems like Watson are the only thing going in AI, then you don’t know much about AI. Almost none of the sophisticated things AIs do (PageRank, EDA, game AI, object recognition, neural nets, genetic algorithms, optimal path search, and many more) are expert systems. Some of these are applications and some are algorithms, but you get the point.

            Your reference to believers in AI might have something to do with Elon Musk’s recent talk about our needing to be careful. He’s right to be concerned. Supercomputers will exceed the processing capability of the human brain in the next decade or so.

            I’m personally not too worried about AI becoming sentient. This is because my graduate studies were in computational biology with a focus on neuroscience. The neuroscience community is pretty well aware of where consciousness comes from. It comes from the confluence of emotion, self-maintenance, decision making and having internal models of objects in the world. All of these components add up to a model of self to sit next to the model of the objects that surround us and the abstract constructs in our heads. If we keep a disconnect between higher order modeling and self maintenance in computing systems, then that model of self should not come into existence. However, there will be a strong incentive to build self-sustaining automatons. Therefore, we do need to be very careful about how we link those systems together in order to avoid creating an entity that will unduly inject its own survival into its plans.

            I am most probably a moron of this age, but I seriously doubt that your version of what’s happening and going to happen reflects reality in any meaningful way.

          3. Enabity:

            I have no intention of hijacking Hugh’s space to engage in a debate here, as fascinating as that would no doubt be. But I do want to thank you for offering an illustration of some of my points. (Specifically, in regard to supercomputers exceeding the processing capability of the human mind, I have no doubt that will be true in the case of some, perhaps many, people in the not-too-distant future.)

            If you so choose, you might someday engage in your own analysis of a comprehensive set of social metrics over the last 50 years or so. As you claim a background in computational biology with a focus on neuroscience, you might (to gather some interesting insight, if for no more humanitarian reason) spend 15-20 years working with people who suffer from chronic, pernicious depression. You might engage in a more detailed study of the interaction of technology and societies at various stages of development (though I don’t especially recommend Jared Diamond). I could make further recommendations in this regard, but if you follow these, you will no doubt begin to formulate your own. (If you have in fact done these things, I would be very interested to see you publish your results in full.)

            As for your speculations on what I have and have not done in my professional career, the truth might surprise you, but as I could make any claims I desire, I see no point in doing so. (As for the comments on AI by the gentleman with the singular name: why no, I have no idea what he said. I believe he builds cars, does he not? Fairly close to me, I think.)

            As for you thinking Google is your friend . . . well, it’s a brave new world that has such creatures in it. (What year is it again? Sometimes I forget.)

            As for my “real world” that you say is going away . . . well, there are some folks I would like to introduce you to some day. They are old, you see, and most of the time they move rather slowly. I’m also afraid they can be a bit of a bore. So, I will forbear. They have this way of showing up and introducing themselves, anyway. And you appear to be having quite a lot of fun.

          4. The First Immortal: A Novel Of The Future by James L. Halperin was written back in 1998. His novel projected a future in which computers and robotics could provide all needs. However, the same AI or AI-like systems that could theoretically supply our other needs could also theoretically supply our need for jobs. In Halperin’s non-dystopian novel (an increasingly rare commodity) the neo-Republican socialist government of the future ensures that everyone is housed and fed, while simultaneously recognizing the social value and importance of work. Halperin’s speculative solution has the government providing all of the citizenry’s needs, but limiting those who refuse to perform some type of acceptable work to receiving only last year’s stuff.

          5. By the way, you can download The First Immortal for free at Halperin’s Heritage Auction site. He no longer writes sci-fi, having become rich as a numismatic dealer and auctioneer as the result of a book he wrote on numismatic grading. (I have no connection to the book or its author. He wouldn’t know me from Adam.)

            The free copy link:

          6. Oh yeah. In the poorest places everybody has to work all the time to survive.

          7. Owen,

            Believe it or not, I’ve studied the evolution of science and engineering. Technological development comes down to one concept: search.

            For a scientifically primitive society to produce something like wootz steel, which contains carbon nanotubes, requires a lot of trial and error, as well as good fortune in the form of the impurities in the iron ore mined in India. Technology took off when the Western world began to combine empirical experiment with philosophical construct. This has allowed us to reduce the search involved in finding a solution to an engineering problem. Essentially, the search space is smaller, because we have a very educated guess as to where to look.
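            The point about a smaller search space can be illustrated with a toy comparison (everything in this snippet is invented for illustration): blind trial and error versus a search guided by a little theory, here hunting for the root of a simple function. The guided version exploits one fact, that the function changes sign across its root, and so needs far fewer guesses.

```python
import random

def f(x):
    return x * x - 2.0  # root at sqrt(2), our stand-in "engineering problem"

def random_search(tol=1e-3, seed=0):
    """Pure trial and error: guess blindly until a guess happens to work."""
    rng = random.Random(seed)
    tries = 0
    while True:
        tries += 1
        x = rng.uniform(0.0, 2.0)
        if abs(f(x)) < tol:
            return x, tries

def bisection(lo=0.0, hi=2.0, tol=1e-3):
    """Theory-guided search: the sign change tells us which half-interval to keep."""
    tries = 0
    while True:
        mid = (lo + hi) / 2.0
        tries += 1
        if abs(f(mid)) < tol:
            return mid, tries
        if f(lo) * f(mid) < 0:
            hi = mid  # root lies in the lower half
        else:
            lo = mid  # root lies in the upper half

x1, blind_tries = random_search()
x2, guided_tries = bisection()
print(blind_tries, guided_tries)  # guided search typically needs far fewer evaluations
```

            Both approaches find the root, but the educated guess shrinks the search dramatically, which is the whole point about combining experiment with theory.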

            It is easy for anyone to look at the large infrastructures built in the past and see how magnificent they are. They’re large and quite complex, or devastating in the damage they can cause. When you look at the computer sitting on your desk, or use the internet, almost all of the complexity is hidden from you. The largest computer processors have tens of billions of transistors in them, etched and implanted at a scale approaching the nanometer. Getting ten billion transistors to synchronize on one task is at least as difficult as building a railroad or a skyscraper or a massive pyramid, but if you don’t value what you can’t see, then you will never understand what it takes. The point is that the young people of today are at least a match for any generation you’ve deified in your mind. They have different challenges, which are at least as difficult, and they are meeting them.

            Undoubtedly, technology is disruptive to people who can’t think abstractly, and it makes them feel disconnected. The primitive world was probably a very similar experience for people who considered doing the same simple tasks every day to be tedium.

      3. Hugh, to paraphrase you a bit: “The earliest examples will be slight alterations of existing material that match the reader’s preference.”

        It’s not “will be.” It exists now and has already been happening for decades with those individualized children’s books that you can get by mail order to be delivered for a child’s birthday. The company that markets the human-created “seed book” is supplied, by a parent for example, with individual plot points and personal information specific to that child. The company’s computer puts it all together and prints out a picture book, tailor-made for that child, which is then mailed to the home.

        It’s no great jump from there to computer-created novels. In fact, computer-created long-form texts already exist in the intelligence community.

      4. “a maturation of what James Patterson does with his ghost writers today…”

        I look forward to the foundation of Computers United and full-page ads in the NYT preceded by a Streitfeld script (based on the flawed but persistent concept of whale math).

  5. “7) In fifty to one hundred years, authors themselves will be obsolete. No one will believe this today, but it’ll probably happen sooner than I’m giving it credit for.”

    This is scary to admit, but true. For those creatives who need hope, there is an exciting trend of “artist in residence” positions emerging at venture capital firms and technology companies.

    There is a shortage of truly great ideas and people who can sell them. For the creatives and authors who can ideate, think laterally, and sell, what we now know as authorship will transform into “artist in residence.”

    In the old days, the uber-creative used to be hired to “idea sit” for corporations with the resources to implement what they thought up. Now, technology firms with cash on their balance sheets will offer these residencies/opportunities. It’s an exciting time NOT to be involved in traditional publishing.

    But before all that, prolific authors and creatives have the ability to set up their own James Patterson type operations where they leverage a team to create huge bodies of work. Acting and speaking like Patterson afterwards is probably not a good idea, but you get the point.

    1. You’re sorely mistaken if you think there is a lack of creativity in technology. The creativity necessary to conceive of the subatomic world, origins of the universe or how to create a computing machine is greater than any artistic endeavor. They don’t need the “creatives” to help them come up with really good new ideas.

      Those “artists in residence” are there to bring diversity to the thought process. Engineers use technology in a certain way that doesn’t necessarily track with the rest of the population. It is helpful to have someone around that conforms more to the popular view so the end product is useful to a wide audience. Further, if that person is a popular artist, they bring the power of their celebrity to the company.

      1. ” . . . how to create a computing machine is greater than any artistic endeavor.”

        According to whom? Aside from the fact you are mixing apples and oranges, and did not posit a metric by which to measure “greater”, a brief appeal to history will call your statement into question.

        1. Case in point: Leonardo da Vinci

          He was able to paint the Mona Lisa single-handedly. He had the creative capacity to envision a helicopter, but would have needed a team to help him overcome all of the challenges involved in constructing a working helicopter.

          1. I agree with points 1, 4, and 6 completely. I somewhat agree with points 2 and 3, and I disagree with points 5 and 7.
            As someone interested in book publishing, there is no doubt digital is the cheapest: publishers and authors save on print orders, storage, distribution, having to sell to wholesalers and/or retailers, and shipping, among other things. By selling directly to customers, a publisher/author can “cut out the middleman.” BUT there will always be people who want a physical book.
            Think of voting: Today is election day and there are all kinds of online and mobile websites, apps, and tools to get people to the polls. It is easier for candidates than it’s ever been to connect with voters and potential voters and get one’s message across. Yet in America we consistently see similar percentages of people turn out. Why? Because no matter how complex or advanced the technology, there will always be some people who vote every election, who vote in presidential elections only, or who rarely or never vote. No technology can change that. Books are the same way. Some people will become readers with shorter books, some digital, but those who want a physical book aren’t going to change. Those detached from reading won’t become avid readers because of e-books.
            Lastly, authors becoming obsolete is a great way to get readers and comments! But in reality that is not going to happen. How many of you over the age of 45 saw The Jetsons and thought by now cars would be obsolete and we’d be flying through the air living like George Jetson? Yet we still live on the ground and drive cars. Only the cars are more advanced than they were in the 1970s. Instead of getting rid of cars we just changed their capabilities. Predicting trends is difficult, but I doubt computers will be creative enough to develop a book more interesting than what a human could write.

  6. Fantastic, thoughtful, inspiring post.

    People may well shake their heads at some of your predictions, but they’re a lot closer than many think. A lot of the skepticism comes from the predictions that were made 30 years ago stating we’d have hover cars or AI robots today.

    Of course, those predictions didn’t come true. But we live in a different world today. Moore’s law means technology is advancing at a rapid pace and more money ($bn+) is being plowed into the tech industry. It’s only a matter of time.

    If the creative industries can be automated, every industry can be automated. From retail workers to lawyers.

    If anyone is interested in getting more information about technology’s role in our future, I highly recommend this video: It’s 15 minutes long, but you’ll be hooked after the first 10 seconds.

    1. Wow! Exciting and scary, both. Thanks for sharing the link.

  7. Phyllis Humphrey

    I’m an author, so #7 seems far-fetched to me. Reviewers and editors tell us we need a unique “voice” but no doubt you’ll say computers will be able to imitate that too. Luckily, I won’t live quite that long.

  8. It seems that with a state-of-the-art POD machine, every convenience, drug, or grocery store, etc., could be a “bookstore” that stocks every title ever written for people who simply prefer reading on paper. Heck, anywhere you now have a vending machine, you could have a book vending machine stocking every title ever written.

    Of course, ebooks would still trump widespread POD machines in ultimate convenience, but perhaps the first company that, as you say, goes all in with this approach to paper books could take over the lion’s share of the paper book business.

    And perhaps like 3-D printers, the technology could even advance to where you could have a POD machine in your home. If the price is low enough, then even with low quality paper books it could qualify as an example of #1.

    And maybe you need to add #8: In one thousand years, readers will be obsolete as robots become the main consumers of the written word! We humans will all have moved on to another dimension where complete bodies of knowledge are instantly transmitted telepathically from one disembodied consciousness to another.

    1. A POD is a wonderful machine, but so are Xerox machines. I don’t think POD will catch on until the Xerox machines no longer need the “PaperJam” light. They have been working on that for over 50 years.

      1. Good point, Terrence… ebooks will probably always trump POD machines for reliability and ease of use… and also require fewer dead trees.

  9. Beautiful post and thank you for it. I’ll definitely read it.

    I’m already streaming my next novel on my website via a WordPress widget (PubML) that gives the reader the ebook experience. It is the future because it requires no proprietary hardware and no proprietary software, and it makes the authoring of books as simple as the reading of books.

    All online. All you need is a browser.

    Welcome to eBooks 2.0

  10. What you are describing is creative destruction. Adam Smith described the process, but didn’t come up with the catchy phrase. Sounds like Christensen is showing how the real world confirms the theory.

    The same forces that are allowing self-publishing to disrupt major publishers will eventually allow hobbyist writers to take vast market share from self-published authors.

    If people think there is a meaningful distinction between hobbyist and self-published author, then the process has already begun.

  11. Clay’s book is pure gold.

    Predictions about disruptive technologies usually overestimate their near-term impact and grossly underestimate their long-term impact.

    Kind of like an avalanche. A few trickles of snow slide by, and you cringe. Then you relax, uncrouch, and let out a shaky laugh. And then the mountain falls on you.

  12. Great post. Here’s an example of a change in the way books are introduced into the market. Ten years ago, my legacy publisher printed some Advance Reader Copies, bought one ad in a dying regional magazine for at least $1000, and sent me to one book festival. That was the full marketing campaign for my first novel. No ebooks, obviously, just hardcovers. Nothing else.

    When I decided to self-publish my new novel, Zion, I took a big risk on ARCs: 125 of them, with packets and stamps, cost $1,500. It almost sank me. 75% of those books went to the wastebasket. I will never do stand-alone ARCs again. It was a risk, I knew, but I hoped for some big reviews in major magazines, the same places that reviewed the first book.

    The real ARCs were released from Oct. 30 to Nov. 1: free Kindle ebooks. In 2 days, I’ve given away 1,339 copies, and I hope to hit 2,000 tomorrow. In ARC pricing, that’s $30,000. The Kindle giveaway was 100%, and my only paid advertising was a $50 banner ad running 3 days on a Louisiana political website.

    Most authors have no idea why I gave away 2000 ebooks the first 3 days of the book launch. If they can’t figure this out, they don’t get it. Most of these authors are highly intelligent.

    In 10 years, the model that I think will work now will not work then. But as a self-publisher, I can change course without having to sell the change to an agent, a host of editors and staff, a publisher, or the like, 1,300 miles away in NYC. I simply have to read what to do on Tim Grahl’s or Hugh Howey’s blogs and weigh my options. If the gumbo isn’t spicy enough, add some cayenne pepper. Stir the pot.

  13. While distribution is certainly an important issue, focusing on it exclusively assumes that the main problem is delivering to consumers materials that they have chosen to read.

    But this raises the question: what happens when people don’t know what they want to read next? The great joy of bookstores (and to a lesser extent libraries) is browsing. And while people can browse online, and certainly do, it’s nowhere near as much fun as physical browsing. So what happens to physical browsing if bookstores disappear?

    1. Physical browsing will move into VR.

    2. Depends on what we think is fun.

      I can flick a finger down the Amazon Thriller best seller list and browse lots of books. The process is much like looking at a shelf of face-out books. Look at the cover, look at the author, read the blurb.

      My browsing suggestion for Amazon is a little check box by each book. Check the boxes of the books that are vaguely interesting. Then hit condense, and everything but those books disappears. Click “Blurb” and the first Amazon web page for each book appears in a single scrolling screen. Just scroll down the screen reading blurbs or sampling.
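      A minimal sketch of that check-and-condense idea (titles, blurbs, and function names here are all invented for illustration; this is not an actual Amazon API):

```python
# Toy catalog standing in for a storefront listing (titles and blurbs invented).
catalog = [
    {"title": "Midnight Run", "blurb": "A courier with a secret."},
    {"title": "Cold Orbit", "blurb": "A salvage crew finds a derelict ship."},
    {"title": "Last Harvest", "blurb": "A farm town hides an old crime."},
]

def condense(catalog, checked):
    """Keep only the books whose check boxes were ticked; everything else disappears."""
    return [book for book in catalog if book["title"] in checked]

def blurb_screen(shortlist):
    """One scrolling screen: title plus blurb for each shortlisted book."""
    return "\n\n".join(f"{b['title']}\n{b['blurb']}" for b in shortlist)

shortlist = condense(catalog, {"Cold Orbit", "Last Harvest"})
print(blurb_screen(shortlist))
```

      The same filter-then-flatten pattern would apply whatever a real storefront’s data model looks like.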

      And what happens when bookstore browsing disappears? Everybody stops reading.

      1. That’s an idea Amazon should be listening to. Simple and effective.

  14. Not that it completely discredits him, but Christensen predicted the iPhone would be a failure, primarily because it was Apple’s turn to go down. His theory doesn’t allow much for big businesses being smart enough to adapt. Here’s a more detailed taking apart of his theory:

    That being said, I think the theory is very helpful in looking at trends and trying to predict the future. It certainly is an excellent way to frame a debate about business startups.

    On the other hand, I think calling self-publishing a disruption is underselling its potential. It’s more of a revolution than a disruption. From my humble observations, the closest thing I’ve seen to it was the entire personal computer revolution during its early days, with Steve Jobs, Wozniak, Bill Gates, and all the other personal computer innovators. When you read about that time, a lot of the excitement and energy was very similar to what is going on in self-publishing. People, young and old, were brought together by a collective vision of everyone having a personal computer in their home to tinker with. Money was not the main issue; it was about empowerment. Early software was given away for free, sometimes even computer parts; people shared information; there were big debates about what direction it should take. And there was a classic war between the new guys and the established big computer businesses that were pushing mainframes and dumb terminals and kept trying to dismiss personal computers as impractical.

    So I see a lot of similarities in all the excitement and energy, the way self-publishers are sharing information, putting books up for free, talking about how money isn’t the main goal and the battle between traditional publishers and self-publishers.

    And here are the two main points I take from that comparison.

    1. This isn’t about disruption as much as creation. New markets and customers are being created faster than old ones are being lost. Self-publishers aren’t really taking money away from traditional publishers (which have record profits) but creating new money from bringing in new readers and creating books with fans that wouldn’t have bought alternatives. Sure, there are pressures on print, which is declining, but the main pressure comes from trad publishers themselves, which is why they are resisting digital. But just as personal computers didn’t destroy the mainframe market, and, in fact, made it grow, I think self-publishing doesn’t directly compete with traditional publishing. They can not only exist together but will likely help each other grow even larger.

    2. We’re only seeing the tip of the iceberg, both in terms of creative freedom and business. We’re still in the early days of the Osborne 1 and personal computers that required programming in BASIC. The huge potential of self-publishing has yet to really make itself known. Self-publishers are going to go on to make movies, produce television, sell toys, build theme parks, etc. It’s really just the beginning. They’re going to create networks and startups to help each other with marketing, accounting, and library management. They will make alliances with big publishing, big media, and big corporations. At the core, what this is about is creating intellectual property. For a variety of reasons, big corporations have been able to seize control of intellectual property and diminish the contribution of its creators. This is the pendulum shifting back, and we’re going to see some interesting things, like when Edgar Rice Burroughs became one of the first authors to incorporate and created Tarzana, and when Lucas got the merchandising for Star Wars and showed the studios what they were overlooking.

    The best person to exploit a creative property is the creator. And self-publishing is the beginning of giving powerful tools to creators to compete on an (almost) even playing field with the big businesses that have dominated creative works for the last fifty years.

  15. […] The Innovator’s Dilemma: Understanding Digital Disruption | Hugh Howey […]

  16. Agree on everything!

    …except 5, 6, and 7. lol

    5) Can’t wait for this to hit the Erotica market!

    6) So… there’s already a lot of automation via plagiarism programs. But when comparing the output of these programs to the original source, there’s no mistaking the theft. While they are remarkably “smart,” it’s replacing synonyms and switching tenses and swapping ‘I’ for ‘he’ or vice versa.
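    The kind of mechanical substitution described can be sketched in a few lines (the word list and example sentence are invented; real spinning tools are only modestly more sophisticated, which is why the theft stays recognizable):

```python
import re

# A toy "spinner": swap first-person pronouns to third person, then
# substitute a few synonyms. Note it doesn't even fix capitalization.
SYNONYMS = {"big": "large", "fast": "quick", "said": "stated"}

def spin(text):
    text = re.sub(r"\bI\b", "he", text)    # 'I' -> 'he'
    text = re.sub(r"\bmy\b", "his", text)  # 'my' -> 'his'
    return " ".join(SYNONYMS.get(word, word) for word in text.split())

print(spin("I said my car is big and fast"))
# -> "he stated his car is large and quick"
```

    Because the sentence skeleton survives untouched, a side-by-side comparison with the source gives the game away.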

    I’ve been fascinated by the rise of this type of automation. To me it’s a bigger threat than piracy. At least with piracy, you still get credit for your work.

    But I’ve also been thinking about writing in a manner that is more difficult to mimic. Writing in fragments that only make sense in the context of the paragraph. Writing over-the-top metaphor and simile that only make sense in the context of the character and their inner thoughts.

    I think the next shift in writing is going to be a move away from what can easily be copied by a program (long before what you’re predicting will ever come to fruition).

    7) The model you are talking about, today, makes authors CEOs of their own empire.

    I get that’s not your point. You’re talking about writing becoming something automatic. Maybe that’s true. But like my opinion on #6, I gotta see a program that can rub its naughty bits up and down on language, before I think it’ll replace what the human mind can do — on both fronts: writing (transmitting) and reading (receiving).

  17. Would love to see how a bot handles a room full of pre-schoolers. Ha!

    One has to wonder, with so many unemployed by bots, how currency will change and where all of the unemployed will live. Maybe we can have the bots figure that out too?

    Can you imagine if most people were not working? Lines of people would be everywhere! I love when everyone is at work; I get so much more done and can go anywhere I want with less traffic. I’m never bored because I love learning new things, but those who just sit in front of their TVs will become zombies.

  18. I think some of your predictions are way off, like the 1950s magazines talking about the flying cars we would have in the ’80s and about our weekend homes on the moon. Just because it is possible doesn’t mean it will ever happen. These two things are even more possible today, but we don’t hear about them anymore. Why? Because they were silly ideas. A flying car breaks down and falls from the sky, and spending a million dollars for each weekend away is out of the reach of even Bill Gates; he would rather spend money helping the poor.
    Books have existed since before the printing press; paper books will never disappear. Even if someday e-ink screens advance to the point where we can write on them, I think paper books will always exist. A lot of your vision comes from the fact that you live in this advanced society; you forget that 50% of the world lives in poverty and will never own an ebook reader. Even their children will never be able to afford one. It is like the people who say that for a billion dollars we can bring fresh water to the whole earth, and yet we can barely build a bridge for that price; their fantasy outweighs reality and they don’t see the true cost. It costs four times that to build one building in NY.
    Your other point, about writers becoming obsolete, is even more of a ‘condo on the moon’ fantasy. I have lived all of my 48 years hearing about intelligent computers but have yet to see one. Even if by some miracle a computer became self-aware, it isn’t human, so it can’t write for us; it never will be able to. Try getting an Einstein to write you a book; you will not understand it. Story comes from common experiences, and I have never relied on a plug to live, lol. The day a computer reaches our intelligence will be the day before it surpasses us, and another day later it will advance so far we won’t have any idea what it means anymore.

  19. I was VERY glad to see number 7 there! I agree that this kind of prediction seems like total lunacy today. I think you’re partly right, and partly wrong. First, I think you’re overestimating the time scale! I think it’s more like 25-50 years, so half what you’re talking about. Second, I don’t think authors become obsolete.

    Look at what’s happened in the world of chess. IBM’s victory didn’t make human players obsolete. Chess is as popular as ever. At the highest level we see humans and computers working together to play games. This is what Kasparov views as the future of competitive chess at Grandmaster level.

    I see humans talking to their Artificially Intelligent counterparts and creating books TOGETHER. The humans get to do what most view as the most interesting part: designing the plot, the characters, and so on. The computer gets to do the grunt work: the writing!

    Another great piece, Hugh. The book you rave about was already in my pile. I shall duly move it to the top!

    Best, Toby

    P.S. Ray Kurzweil (a director of engineering at Google) believes that it IS possible to predict the future, and has been doing so (correctly!) for over thirty years. Read everything you can on him. The one distinction he makes is being able to predict “overall macro trends” rather than “individual micro events.”

    1. The best book I’ve read about this is SMARTER THAN YOU THINK. I do believe we’ll go through a period of cooperation, but that computers won’t need us forever.

      1. Hari Seldon figured out psychohistory long ago.

      2. Thanks, Hugh. I’ve added it to my wishlist. Great cover, too.

  20. One of the big 5 has begun moving out of NYC. Harper now has about a quarter of its employees in South Brunswick, NJ. News Corp. owns a set of buildings there and an employee tells me there is PLENTY of room for more people.

  21. […] other news, Hugh Howey posted an essay to his blog. In it, he talks about how he’s implied in the past that “publishing executives were […]

  22. With regard to #4, bookstores moving into other spaces: there was an article in the 6 November Washington Post about the area’s beloved Politics and Prose bookstore opening satellite operations in the beloved Busboys and Poets eating/drinking locations.

  23. Great article. I don’t agree that you can replace a Shakespeare with a computer program. That’s already been tried. It didn’t work.
    Even readers who read schlock can tell good from bad, and a shallow book from a passionately written one with a new view of what they’re into.
    I think sci-fi people are always very eager to think everyone can be replaced (it’s their genre, after all), so I don’t blame you guys. You can replace many people with many programs, but certain things you cannot replace; it becomes too complicated to produce a high-quality product. A sports page written at anywhere from a 2nd-grade to 5th-grade level (as newspapers in the US are) is not a well-written, innovative, memorable novel.
