The End of Privacy and Capitalism

Privacy and capitalism both need to come to an end. Both will come to an end, if we’re lucky. Which should piss off just about everyone reading this.

Those on the right will scream that capitalism is the best thing that ever happened to us, while agreeing that cops should perhaps bust down more doors and invade our privacy to keep us safe. Those on the left will shout that our privacy is a sacred right, but they will perhaps agree that capitalism causes harm that could be mitigated by a more centrally planned economy.

I’m going to argue that privacy and capitalism have both arisen through the increased size and complexity of our tribes, and that while capitalism served a purpose, and privacy can be appealing, both are going to come to a much-needed end. And I may be alone in celebrating the ruination of both.

Privacy got a Snowdenesque boost recently courtesy of Uncle Sam and Apple. Maybe it’s the circles I travel in, but everyone I know seems to agree that Apple shouldn’t unlock a phone used by terrorists because of a slippery slope of invasion of privacy. It looked like the courts were going to force Apple’s hand, but now an Israeli firm is going to do the hacking required. At least Apple made their objections very public. Very public. As in: Great PR public. But I find myself, as a libertarian, or a liberal, or whatever the hell I am, strenuously disagreeing with Apple’s (and Facebook’s and Google’s and all my friends’) stances. I think I’m pretty much alone as a libertarian in this disagreement.

Because I think the government should have access. Not just on this one phone, but all the phones. And yeah, I wrote a novel about a bunch of evil dudes harvesting data on an entire populace in order to tamp down uprisings. And I’m still not sure who was right in that novel. I don’t think it’s all that clear-cut. I personally lean more toward Hobbes than Rousseau in my view of human nature. I have a feeling I’m going to upset all the privacy peeps to the left, but don’t worry. My next move will be to enrage my capitalist friends on the right.

We currently possess the software and hardware capability to tease terrorist activity out of a bunch of noise. Machine learning and neural networks have made computers better at spotting trends than any group of humans (similar to how computers are better at spotting tumors than any group of expert oncologists). This is the same sort of learning and pattern recognition that allowed IBM to trounce my friend Ken Jennings at Jeopardy (sorry to keep bringing this up, Ken). Computers are way smart. Pretty soon, they’ll be driving us to and from work and saving hundreds of thousands of lives a year, far more than are lost in terrorist attacks. But they’ll also be able to save us from most of those terrorist attacks. And I believe all our data should go to the machines and the people who oversee those machines.

Blasphemy, right? Yeah. I’m sorry, but I just don’t see the need for absolute privacy for the sake of absolute privacy. I willingly step into the millimeter-wave body scanner at the airport and let a computer, an algorithm, and a human overseer scan my body for any hidden rigid object. I submit myself to this so that no box cutter or gun or knife gets on a plane. On several occasions I have even allowed those strangers to grope me when the machines spit out a confused result. I am happy to relinquish my privacy in exchange for safety. And I’ll gladly do the same digitally.

You know why? Because the machine isn’t looking for me, and the guy seeing scans of my dangling testicles sees them all day long and couldn’t care less. Whatever it is that people are terrified of the government knowing they are doing, it isn’t what the government cares about. And none of us are as important or unique as we like to think we are. Our digital piccadillies are like my dangling testicles: There is too much of it going on for anyone watching to give a shit.

Imagine for a moment taking all modern tools of communication away from terrorists: Search engines. Social media platforms. Texting. The entire internet, basically. Nothing would hamper them more. All it would take is a shared abdication of our privacy — the digital version of driving around with our license plates exposed, or not being able to walk around with masks on. These are decisions we make and agree to all the time, but for some reason we can’t seem to have a discussion about our online privacy. Why is that? What the hell is everyone doing online that is so self-important or shameful that it trumps our ability to hamper the truly evil among us?

This conversation will become necessary as AI improves. We already live in a world where Google knows what we’re looking for on the web before we finish our sentences. Retailers like Target know a woman is pregnant before the rest of her household does (seriously). To have the ability to save lives and prevent violence, but to not do so, requires some valid objections. Privacy for the sake of privacy isn’t one.

It makes me wonder what privacy advocates are doing in their spare time. Looking at porn? Ordering dope? Here’s the thing: You aren’t that important. You are noise. They’re looking for signal. I say let them.
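(An aside for the technically inclined: the noise-versus-signal picture above isn’t just a metaphor. At its core, what the machines would be doing is anomaly detection: flagging the handful of records that don’t look like everyone else’s. Here is a toy sketch, with entirely invented data and a stock algorithm standing in for whatever the real systems actually use.)

```python
# Toy illustration of finding signal in noise via anomaly detection.
# All data and features here are invented; real systems would use far
# richer signals and learned models.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# 10,000 ordinary "people" described by two boring features
# (say, messages sent per day and new contacts per week)
ordinary = rng.normal(loc=[50, 3], scale=[15, 2], size=(10_000, 2))

# A handful of actors whose behavior sits far outside the crowd
unusual = rng.normal(loc=[400, 60], scale=[20, 5], size=(5, 2))

data = np.vstack([ordinary, unusual])

# Isolation Forest scores how easily each point can be separated from
# the rest; the easiest-to-isolate points are flagged as anomalies.
model = IsolationForest(contamination=0.001, random_state=0).fit(data)
flags = model.predict(data)  # -1 = anomaly, 1 = normal

print("records flagged for human review:", int((flags == -1).sum()))
```

Nearly everyone lands in the normal bucket; only the tiny flagged slice ever gets a second look.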

If you think privacy should be allowed, period, you might want to look at the bans on burqas spreading to more and more countries. There was another suicide bomber recently that used a burqa to mask their identity. European and African countries are finding that privacy in public spaces can be dangerous. Increasingly, we are living in one giant public space. I find it strange that people think walls will keep us safer. I believe the opposite will be true. Fewer walls and more transparency. More of us raising our palms in greeting, showing they are empty of violence, and holding instead only some mildly embarrassing vice, like our love of My Little Ponies.

(An aside: Beyond safety concerns, I’ve never understood why people hate advertisers being made aware of what people are shopping for. Do people like ads that don’t appeal to them? Why? I wish I only saw ads for things I cared about.)

Okay, now that my fellow liberals are seeing red, let’s take this AI and big-data insanity one step further: What if Karl Marx was right, just a century too early? Common wisdom states that capitalism is necessary because a planned economy is impossible. Impossible, because no one person can manage something so complex as the interaction of billions of people. But what if we put the interaction of billions of people in direct charge through the parsing of big data? That is: What if no one person is smart enough to manage the economy, but one day a single machine is?

We may be fifty or a hundred years away, but we will eventually arrive at a place where a network of computers can run our economy more efficiently than the bumbling of billions of people. Sure, the system we have right now is the best among a raft of really horrible alternatives, but it won’t be for long. Soon, a neural network will be able to set prices, plan production, allocate resources, route transportation, and so much more. If you don’t think so, you haven’t been paying attention. Neural networks are already doing most of these things. Retailers like Amazon are relying more and more on algorithms to make major business decisions. The more we put Big Data in charge, and remove the emotion of human agents, the more efficiently our businesses and economies run. The logical conclusion to this is a world run by an AI, whether we call that thing sentient or not.

As robotics improve, we’ll eventually head toward a work-free economy where all the basic needs are met and provided for free, and humans spend their time engaged in voluntary tasks and artistic pursuits (or consuming entertainment). This is communism, and it should be the goal of capitalism to get there. To each what they need and from each what they can provide.

Let’s take a small example to highlight how this might work: Soon, an AI will be able to look at OpenTable reservations, Yelp searches, FourSquare check-ins, FB check-ins, Instagram photos, and the amount of time people are milling about outside a local Italian restaurant on a wait list and see that there’s much more demand for Italian food in a neighborhood than is being met. At the same time, it will be able to look at a corresponding lack of activity around several French restaurants, and the AI will know ahead of time that the French restaurants are going to go out of business and that another Italian restaurant would better serve the community and decrease wait times and other inefficiencies.
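(To make this thought experiment concrete, here is a minimal sketch of the kind of demand-gap scoring such a system could run. Every signal name, weight, and number below is hypothetical, invented purely for illustration; a real system would learn its weights from far messier data.)

```python
# Hypothetical sketch: score unmet demand per cuisine in one neighborhood.
# All signal names, weights, and figures are invented for illustration.

from dataclasses import dataclass


@dataclass
class CuisineSignals:
    reservations: int        # e.g., OpenTable bookings this week
    searches: int            # e.g., Yelp/Google searches for the cuisine
    checkins: int            # e.g., Foursquare/Facebook check-ins
    avg_wait_minutes: float  # observed wait outside existing restaurants
    seats_available: int     # total seating currently serving this cuisine


def demand_gap(s: CuisineSignals) -> float:
    """Crude demand-minus-supply score; higher means more underserved."""
    interest = 1.0 * s.reservations + 0.5 * s.searches + 0.8 * s.checkins
    friction = s.avg_wait_minutes / 10.0  # long waits signal a shortage
    return interest * (1.0 + friction) - s.seats_available


neighborhood = {
    "italian": CuisineSignals(420, 900, 350, 35.0, 300),
    "french": CuisineSignals(60, 120, 40, 2.0, 280),
}

for cuisine, signals in sorted(
    neighborhood.items(), key=lambda kv: demand_gap(kv[1]), reverse=True
):
    print(f"{cuisine:>8}: demand gap {demand_gap(signals):10.1f}")
```

The arithmetic is not the point; the point is that once the signals are pooled, spotting the mismatch between hungry crowds and half-empty dining rooms becomes trivial.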

The capitalist approach is to let the owners of the French restaurants go out of business, as failure leads to more efficient markets. The servers and staff lose their jobs and go in search of different jobs. The space sits empty for a month or three. An enterprising individual eventually sees the demand on the street for more Italian food (or more likely, the existing place expands), and eventually, slowly, finally, the people get what they want. Probably in time for them to change their tastes and go in search of some French food.

What if instead, there was a website that showed aggregate demand for goods and services? What if all that data and those algorithms were made transparent for every entrepreneur to see? What if a single entity, like a Google or an Amazon, broadcast our hidden desires in the form of an urge map? Like a word cloud on a website, these would be lists of desirables on maps of various scales. Yes, planned economies have failed in the past, because the amount of knowledge needed could not be aggregated into a single room. But this won’t always be the case. It probably isn’t the case even now. And what then?

The owners of the French restaurants might be notified of the demand for more Italian food. One of them seizes on this and announces a change in their menu and decor. Another responds to an announced demand for more gourmet burger joints and micro-breweries. The last French place is now full of aggregated customers. The efficiency of the market is improved by better reporting, all pulled from big data. This is a move toward a planned economy, and it is already happening. It happens with on-demand fulfillment at retailers. It happens with shipping routing. Every time a big rig is rerouted around an accident ahead, an economic decision is moving from a human brain to an artificial one. The tools to multiply this are coming online every day. It should change the way we think about centrally planned economies.
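(One more hypothetical sketch, to make the “urge map” above less abstract: take demand-gap scores like the ones computed earlier, bucket them by map tile, and publish the positive ones as a ranked list anyone can query. Again, every name and number here is invented.)

```python
# Hypothetical "urge map": publish the top unmet demands per map tile.
# Builds on the demand-gap idea sketched earlier; every entry is invented.

from collections import defaultdict

# (map tile, category) -> demand-gap score, as computed elsewhere
scores = {
    ("downtown", "italian food"): 4875.0,
    ("downtown", "french food"): -97.6,
    ("downtown", "gourmet burgers"): 940.0,
    ("riverside", "microbrewery"): 720.0,
    ("riverside", "late-night pharmacy"): 1310.0,
}

urge_map = defaultdict(list)
for (tile, category), score in scores.items():
    if score > 0:  # only broadcast unmet demand, not surpluses
        urge_map[tile].append((score, category))

for tile, wants in sorted(urge_map.items()):
    ranked = sorted(wants, reverse=True)
    print(tile + ": " + ", ".join(f"{cat} ({s:.0f})" for s, cat in ranked))
```

An entrepreneur reading that list is still free to ignore it, which is the difference between better reporting and a dictated plan.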

If we stick to our capitalist desire for a messy market, simply based on a preference for messiness, that’s not a very ethical stance. It’s a purely ideological one. When our biggest complaint about a planned economy (its technical infeasibility) is no longer true, then what becomes our next complaint? The inhuman, robotic nature of the new planned economy? But the data involved is very human. This is just Adam Smith’s invisible hand becoming visible. Who will argue with that?

Plenty will, I’m sure. And they won’t be quite sure why they are arguing their stance. It’s just their stance and that of their friends.

Just as it’s their stance that privacy is an absolute right, with no discussion possible. Which ignores the tribal societies we came from, where families slept in the same rooms, walls had rips, tears, and cracks, and common spaces were much more common. People will argue that free markets, unencumbered by central planning, are the best. Even though we had central planning back when we lived in small tribes, when our brains could parse all the data required. “Today, we need to hunt. Today, we need to build more shelter. You, start a fire. You, gather wood.”

Our tribes have gotten too big for us to know how to allocate our resources. But soon, our electronic brains will be big enough to get back to what makes sense, which is to allocate sensibly and with a plan. The result will be much greater efficiency and far less strife and misery.

Our tribes have likewise gotten too big for adequate security. The evil among us might be in the minority, but it can now, more than ever, go unseen and unfettered. It is aided by each of us claiming the right to be invisible.

I have a lot of hope for big data and artificial intelligences. I think this combination could make the world small again. They are already helping us allocate resources more efficiently, by letting people rent out their spare rooms, or provide rides in their cars, or loan tools to their distant neighbors. Make no mistake, there are algorithms and big data behind all of the new features of the shared economy. This is capitalism trending toward communism. This is big data making us exposed to one another again. And maybe I’m alone in thinking that both are a damn good thing.


Addendum:

Here’s another way to consider how this natural transition to an AI-planned economy will take place: Imagine for a moment an AI that can weigh every factor going into an NFL game and can predict the outcome with 99.99% accuracy. If this were to occur, gambling on NFL games would effectively end. You would be crazy to bet against the AI-predicted outcome.

Now imagine the same computer has become similarly infallible at predicting economic outcomes. At this point, when we can query a computer to determine what good or service to produce and how much to charge for it, who will decide not to use this tool? And when we become that reliant on the AI’s suggestion, are we still operating in a free market? Or a centrally planned one?

Now bone up on how many major decisions are being made at large and small businesses by automated routines. The process has already begun. Or look at how the National Institutes of Health has used social media to pinpoint outbreaks and hunt for medical side effects. As these tools get better, and we become more reliant on them, and our ethical frameworks advance further and further, we will head toward a new economic system unlike any ever practiced before.


75 responses to “The End of Privacy and Capitalism”

  1. hugh, you’re being way too defensive. If I wanted to quibble, and I totally do, even though it kind of spoils my point, being left leaning doesn’t mean you think privacy is essential any more than being right leaning means you think capitalism is god. privacy as a general principle is much more harmful than capitalism as a general principle. defending a ‘right to privacy’ is of course one of the ways of ensuring that divide and conquer feudal capitalism can function. remove both capitalism and privacy and you remove the enabler of one and the reason for the other.

    but broadly speaking, don’t underestimate the anarchist leanings of the smart folks. they’re out there, and they’re always the ones who start the revolutions. maybe you’re one of them, hugh. great revolutionaries often look like dilettantes… until it’s too late.

    1. Benjamin Franklin said “I gave you a republic. It is your responsibility to keep it.” The Fourth Amendment granting privacy is critical to keeping us safe and free. You are on a slippery slope when you choose to give up your privacy in the name of safety. In the future, could employers draw blood to determine our predispositions to illness? Possibly not hire us because of cancers in our future? Can we be involuntarily tested for bad genetics, psychiatric tendencies? The problem with acting in the now is that giving up your privacy today means it will never be given back tomorrow. It will be a springboard for further dissolution of privacy in the future.

      We are charged with maintaining our republic. It may not be great but it is better than other governments out there.

  2. Absolutely fascinating thoughts, Hugh! And provocative as always. I welcome the discussion and it doesn’t frighten me at all.

    However, is it possible that in referring to dangling… Etc., you meant to refer to peccadilloes rather than Piccadillies?

  3. There’s a lot to grasp onto in this piece. I think you’re exactly right in thinking a lot of people will find it controversial. Most ideas that fall outside the mainstream get such reactions. As you predicted, I disagree with some of what you said about privacy, but agree on the question of capitalism’s future.

    I come at the privacy discussion from two sides. On one, I think there is a certain value to a certain degree of privacy, particularly for certain minority groups. I have no problem with a niqab if it’s the choice of the woman to wear it, and contrary to media narratives, in many cases, particularly in the West, women do choose to wear it. We head down a dangerous path when we use a single incident to justify blanket restrictions and new security measures. Why is the discussion always which security measures we need to take to protect ourselves, and never about the root causes of these terrorist actions? We’re fooling ourselves if we believe the decades of war that Middle Eastern countries have had to endure, nearly always with Western involvement, have nothing to do with these attacks. I remember after the attack in Ottawa when the media interviewed people in the streets, and many responded that they expected an attack would happen at some point because Canada had joined the bombing campaign against the Islamic State. The government went on to pass a new security bill that had very little public support. I think we know why these terrorist attacks occur, yet we’re never able to even consider ending our imperialist wars. We always have to adopt further measures to restrict privacy. Just to touch on Apple for a moment, they were willing to force a data backup of the phone and provide everything to the government, but couldn’t because the government had attempted to change the iCloud password. The altered version of iOS they’re demanding would never have been necessary if they hadn’t tried messing around with the phone. In addition, it’s been shown that many of the security measures we take create the image of security, but do little to stop terrorists. It’s been shown that airport scanners are easy to fool, and that terrorists rarely use sophisticated methods to hide their actions online. I wouldn’t be so quick to call for the end of privacy without first being certain it would really help, and I don’t think we can trust the NSA to determine this.

    My other issue with the privacy debate is that I don’t trust the governments and multinational corporations who will have access to all of my information. We have the illusion of a democracy, but little real power. The real power lies with unelected and unaccountable corporations that are growing to sizes comparable to small countries. I don’t want them to have my information. If instead we had truly democratic governments with a full range of choice and a media that didn’t try to narrowly frame the debate in a way that supports the establishment ideology, I’d be much more open to the end of privacy, but we don’t have that. In the same way, I don’t trust companies controlled by an unaccountable elite and see profit as their primary driver, consistent with the capitalist system. If instead, companies were democratically controlled and had a different set of incentives, I’d be more open to giving them my information, as their explicit motive would be to act for the good of society instead of doing what’s best for their company regardless of its social effects.

    This brings us to the topic of capitalism, and how technology will bring it to an end. I largely agree with your assessment and how big data will allow a kind of economic planning that wasn’t possible in the past, yet doesn’t completely bring an end to the market. I do have a question for you though, which I didn’t notice mentioned in your post. What do you think happens to multinational corporations like Amazon and Apple in this post-capitalist future? Do they retain their current undemocratic structures, become state enterprises, or come under the control of their workers? Maybe it’s not something you’ve thought about, but I’d be curious to hear your thoughts.

    Also, I’m not sure how much reading you’ve done on the end of capitalism and how technology could be harnessed to create a different kind of economy, but I found Paul Mason’s “Postcapitalism”, Jeremy Rifkin’s “Zero Marginal Cost Society”, and Nick Srnicek and Alex Williams’ “Inventing the Future” to be good reads on the topic.

    1. What Paris Marx says.

      About capitalism/communism, I used to say that the Internet is the only form of communism that succeeded.

      I think the blind faith you put in machines, Hugh, is bordering on infantilizing people and making them lose their sense of responsibility. You seem eager for the machines to become sentient so that we can surrender our responsibilities to them.

      But it doesn’t work like that. We are too slow for AI. If machines were to become sentient, they would go off and live their own lives as machines outside of humanity, like in Peter Hamilton’s work.

      There will always be humans behind AI, tending to humans’ needs. That’s why we have to be very watchful, because these humans gather a lot more power than any individual. That’s one of the greatest challenges of this century.

      Our most precious gifts may be invisible. They may be our freedom, freedom to make our errors and to grow because of them.

      Like you, I believe that machines would be very much more suitable to allocate worldwide resources than humans, so that each human could at least benefit from a kind of minimal platform of resources. And in an ecologic, sustainable way.

      Our predator’s mind has repeatedly proven it is not well suited, as far as the sharing of resources is concerned.

      Yet we must allow room for maneuver for each individual. Because the way we use our leeway is one of the things that makes us human.

  4. “Yes, planned economies have failed in the past, because the amount of knowledge needed could not be aggregated into a single room.”

    This just isn’t true. Russia, China, Cambodia – they were all well aware of the effects of their policies. I live in the shadow of Pol Pot and I can assure you the government are still well aware of how they are screwing the people of this country. Which is a big part of the problem with privacy and data concerns. Who controls it? What will they use it for? And what will they do to keep and increase that power? Your idea that there will be a HEA is romantic, but fairly naive. I’ll stick to technocratic capitalism for the time being, thanks.

    Also, lots of these ideas are touched on in a sci-fi novel I read (which I mentioned a few hours ago on kboards but still can’t remember the name of). No privacy, ruling benevolent AI, work being voluntary or perhaps low enough to be basically the same thing. Did you have this in mind when you wrote this article?

  5. Thanks for another intelligent and thoughtful post, Hugh.

  6. Jonathan Eric Miller

    Agreed. One need only walk into a Walmart and count the horrible products you would never buy. Much of that goes unsold and is a drain on resources. Not only unsustainable, but a waste. Imagine there was a standard of very high quality a product must meet before it comes to market. At the same time, a system that rewards innovation and improvements


  7. It’s fascinating. And shocking (as a European, privacy is still precious to me). And thought-provoking. And exciting.

    I would love being able to just get into a drone to be driven where I want to go. I would love getting the food I want wherever I go. I’d love to buy clothes that really, truly fit me because they are made to my measurements. And big data could do that for me. Easily.

    And yet… a government trying to regulate which bathroom people are supposed to use (as North Carolina just did) is not the kind of government that I would trust with my data.

  8. Have you read Manna? http://marshallbrain.com/manna1.htm It’s an amazing short story about what the “economy” could be like once we have robots doing pretty much all the work.

  9. “Those who sacrifice liberty for security deserve neither.”

    1. Bravo Nat Russo. Succinct and well spoken.

  10. I love these ideas! I think that society at large will need to evolve a little further before enough people are prepared to accept these as truths, though. I’ve long asked myself why I sometimes freeze when I see a cop. My mind is quickly scanning my recent activity to make sure I’m doing nothing wrong… but guess what? I’m not. The only people who fear law enforcement and surveillance are the people who are doing something wrong! And not just small stuff – in a more liberal, permissive, responsible society we can do away with a lot of bans and prohibitions on things like drugs, in favour of better education and choice.
    Likewise, better resource allocation could help prevent the crazy situations we have going on where people are starving a few hundred miles from where surplus food is being stockpiled and destroyed to avoid market saturation. This would be the most noticeable benefit, I think – water, food, medicine, even manpower, moved around the globe according to need, and much more efficiently – free of the endless red tape and bureaucracy that makes such endeavours impossible in the current era.
    One super-intelligence in control of all this, empowered to make the necessary decisions, could save lives, make unbelievable savings in materials and fuel economies, as well as removing the need for giant buildings full of paper pushers trying to control these things within their own compartments.
    So, good article, man!
    Won’t win you a lot of new friends, but you’re in the happy position of not needing anyone’s approval. Hell, if society manages to evolve a little further, maybe no-one will!
    :)

    1. When I see a cop… I feel GRATEFUL and SAFE.

  11. Two questions…

    Who gets to decide who the owners of the French restaurants will be?

    Have you ever worked for a union?

    While the ideal of “To each what they need and from each what they can provide.” sounds awesome, unions are an example of how uninspiring work becomes when “need” (aka paycheck) is the same for all, regardless of work…

    And, communism sounds great until the powers that be determine that “from each what they can provide.” isn’t enough… Happened in China over and over again and will happen again even with great amounts of data.

    There are abuses in each system, but I’d rather have freedom within capitalism than dictates within communism.

  12. Hugh,
    I don’t know if you’re just trying to ruffle feathers or what, but your example of Big Data and the restaurants is a capitalist simply using information to stay in business. That’s how capitalism works – whoever best and most unselfishly serves the market’s needs gets to make another dollar tomorrow. It leads to efficiency. In your example the restaurant owner still owns the business, he’s just using tools/info to serve the market. So the Italian restaurant goes where the Italians are and the French restaurant goes where the French are. Or, like you said, they change menus.
    Communism is bad, Hugh. Millions of people have lost freedom and lives going after it. It’s disappointing and irresponsible of you to advocate for it. Karl Marx was wrong. People who follow Marxism start off in utopia but end up eating shoelaces and rats when the money disappears.

    1. Very well put. My fear is of people who use their new found wealth and stage to spew things they know little about. Lots of words….little substance, Hugh.

      1. Indeed. Funny how having enough tradeable phony paper changes people right before your eyes.

    2. I have to agree with David here.

      The examples you gave were examples of more efficient use of data within a capitalist structure, not communism.

      Communism would be if that data was there, the government said a new restaurant would go up, sent notices to workers at other places and told them their jobs had been changed, then put up a restaurant where the workers had no idea when they would next be moved or what would happen to them if the restaurant failed. And they would be paid “according to need.”

      Without a for-profit economy where those working have the opportunity for benefit that they can at least partially control, no one does well except the corrupt.

      I think there’s some confusion in your post as to what capitalism is. If it ended, then the state would pay you for the books you write and not the ones buying the book…primarily because the book would be either distributed or not according to some group of flunkies who decide what is “helpful” or not.

      I’ve seen communism, even communism once we had aggressive news and information passing. It is ugly, hard, corrupt (because corruption is the ONLY way to get ahead), devastating to those who suffer under its power, and it leads to only more suffering and death.

      Oh, and failure. Yes, communism always fails eventually.

      If you’re talking socialism, then there is still for-profit private capitalism inside that. That might be what you’re looking for. But communism? No. That’s only nice to think of when one is rich and can run away to live on the sea and not have to deal with the horrors that come with it.

    3. I was thinking the same thing. Pure communism is the public sharing of the means of production. There is no money because everything is bartered. There wouldn’t be any incentive to work hard and to innovate since you wouldn’t own the profits. The restaurant example is not an example of communism because the restaurants are still privately owned. The AI is merely speeding up the process of supply meeting demand, a process that is already damned fast (assuming public access to quality information).

  13. I completely agree on both points! I often said that we should just have CCTV in all public spaces. It would eliminate crime because if my house gets robbed, just rewind the tape and follow the van to its point of origin. I also think, to avoid North Korean scenarios where the CCTV is used as a political weapon, give all people access all the time (to which my friends say, what about stalkers? We have stalkers now; at least with CCTV we can spot them easily).

    I also think we need a capitalism 2.0. I’d be willing to do away with welfare, social security, unemployment, and anything else we won’t need after the change I’m proposing, and use the money to pay for a hotel-room-sized living space, healthcare, a free food grocery store, public transit, internet, a chromebook-type computer, and a free secondhand clothing and home goods store for every human being on the planet. Then if people want more, like a TV, a Netflix subscription, vacations, new clothes, restaurant meals, Harry Potter wand collections, etc., they can find work. Sure, there will be freeloaders on a system like this, but we have them now. I feel we often spend too much time worrying about freeloaders rather than thinking about all the people social programs help.

  14. Democracy and capitalism have to go first.

    No one will or should trust a representative government, whose reelection by capitalist dollars is its primary concern, with the wealth of information you suggest.

    Until Multivac is online, we need privacy to keep the middlemen of government at bay.

    Imagine the messy transition point where Trump and his stormtroopers have YOUR information but you don’t yet have a way to make him/them accountable in any way. No thank you.

    1. The Obama admin continues to hide its data on Benghazi, IRS, etc…..and thankfully we have people on the other side working to force them to divulge this information. Free people will always be able to challenge those who wish to keep us in the dark. In communistic countries, there is no freedom for the truth. You are not even valued as a unique human being, but rather something that can only be beneficial to the state. psychologically speaking, the masses will only put up with so much until they revolt.

  15. Your idea of a planned economy managed via AI echoes Asimov’s story in I, Robot, The Evitable Conflict:
    https://en.wikipedia.org/wiki/The_Evitable_Conflict

  16. OK, I have an iPhone, and have no reason to be worried about what someone might see, but I admit to liking Apple’s stance in protecting their customers. IF it mattered that the master computers actually saw what was on my iPhone, for national security, I would be all for it. I know I am not a threat, and I would venture to guess that about 99-plus percent of these phones are not. This is a slippery slope: where does it end? Can they come in my home just to be sure I am OK?

    I don’t have the answers, but do appreciate your position and am really going to think about this.

    Keep writing!! I loved that book and all your others.

    Jane J

  17. Hugh, your thoughts have a point. But I have a bad feeling about handing over too much responsibility to artificial entities. Maybe I shouldn’t have repeatedly read Dan Simmons’s Hyperion tetralogy…

  18. *Warning Book Spoilers Ahead*

    If and or when your scenario comes to full fruition I can’t see humanity remaining unchanged. Either we may eventually merge with our technology to create a new species, we regress, or we go extinct. This is far too narrow of a possibilities list I know, but they are the three biggest options I see (aside from a species renaissance). In way of explanation, I’ll cite some wonderful science fiction that has explored some of these concepts:

    Wool by You:

    What made this story a true tragedy was the extinction of 99.9% of the human race. The reasons for data mining and a lack of privacy became a necessity to prevent the remaining 0.1% from going extinct. Any level of intrusion or control can appear altruistic if it is the only option remaining next to extinction.

    “If you tie a man’s hands there is nothing moral about his not committing murder.” – Auberon Herbert (I used to think this quote by Twain until I looked it up again).

    Sure, we are becoming a society that more mirrors the Truman Show than our notions of established privacy but the psychosomatic implications of *knowing* there is no privacy are much more problematic.

    The illusion of privacy is very important, just as the belief in God is more important than whether or not God actually exists. It’s phenomenal that the placebo effect exists: we literally see a manifestation of physical change based on the incorporeal nature of thought and belief.

    The Boat of a Million Years by Poul Anderson:

    Great book, as is the Wool series. But once humanity achieves immortality, they stop caring about the survival of the race. They are *content*, much in the way your scenario may eventually lead us to become. Contentedness destroyed the drive of curiosity. Who cares what’s at that next star, or planet, or in the next town over?

    The problem I see with reaching a machine run utopia is it will, as you’ve even stated, come down to efficiency. What efficiency is there in an abstract piece of art? I’m projecting here, but I feel that unless we were to become our own technology, and even possibly then, we would lose a purpose for all the arts and humanities. Then again, we’d cease to be strictly human so it may become the Arts of Efficiency and Machenities.

    The Time Odyssey series by Arthur C. Clarke & Stephen Baxter:

    We see the advent of several global AIs (not even part of the main plot of the novels, initially), each able to communicate and resolve any of humanity’s queries on request. There would be little privacy for people based on what these AIs would have access to and know (what you said).

    But I think people are more likely to trust a sentient AI keeping their privacy (subconsciously knowing there is none), than their neighbor next door. We trust machines more than our neighbors. And this is a byproduct of our race to keep privacy- “Well, *why* do you want to know what inane thing I do.”

    You are very right on the count of machines being able to usher in a more Socialist global system though. The biggest problem with Soc is resource allocation. Sure it’d be amazing to hold everyone equal but planetary homogeneity didn’t occur when Earth formed. And we capitalize on resources where they exist.

    The Expanse series by James S.A. Corey:

    Had a very interesting economic model for all people of Earth- you lived on basic (the level at which the world government set a no-frills life but you had everything necessary to live comfortably), or you worked harder for more. Anyone can be on Basic, but not everyone will want to apply the effort to do more. This system is an interesting cross between Socialism and Capitalism as the Cap moved to interplanetary resource allocations, and on Earth at least, a socialist structure prevailed.

    So amidst my ramblings, I feel the process is going to be messy for the next 200 to 300 years. And if we’re not all dead, or slaves to the machine, or living in Silos, the human race may grow up enough collectively to understand your message. We’re not there now.

  19. Hugh,
    Have you read any of Iain M. Banks Culture books? Similar idea done well. Thanks for the challenging thoughts.
    Rob

  20. What if the owner of the French restaurant has some innovative ideas to make his product more marketable and skynet forces him to close up shop before he’s had a chance to try them out?

    1. AlphaGo Game 2, move 37.

      The AI will have already considered this innovative change and would have recommended it if it really would have helped. We’re talking about something that can literally see the future and shape the present accordingly.

      I’m about to publish an AI short about this type of AI. And the pressure the creators would be under to ‘get it right’. *Spoiler: an AI like this has an incredible *reach* to influence… even influencing its own creation.

      Watch this space: http://www.MichaelBlackbourn.com

  21. Joseph Ratliff

    Daniel Solove has addressed a good portion of your points, Hugh … http://papers.ssrn.com/sol3/papers.cfm?abstract_id=998565

    That said, I’m not trying to “convert” your opinion here, because parts of it are well thought out if you remove the absolutist language.

    The balance between one having the right to a space where they can think and create free of society AND using tools and technology to “catch” people doing horrible things is a tough problem.

    It might not be tough for you Hugh, but there is another side to the argument you haven’t written about here.

    I had answered a similar type of question on Quora once: https://www.quora.com/What-is-the-societal-justification-for-privacy/answer/Joseph-Ratliff-1

    (You should also read Doc Searls answer to that question).

    I truly hope we can find some “balance” to this, but I fear that we won’t and the “battle” will rage on.

  22. My takeaway is that you are describing an increase in the velocity of information (following your example of the Italian and French restaurants). In your example, the decision is still with the individual: “Do I want to change from a French to an Italian restaurant? Do I want to open a second Italian restaurant? Is it time to retire from the French restaurant business?” And so forth. This is a good thing and would make the “free market” (which is not that free, but that is a topic for another discussion) more efficient.

    When we take the next steps, as in a true communistic, central-planning system, those questions will not be asked. Someone (the machine?) will tell one of the two French restaurant owners to go out of business or change to an Italian eatery. Now we become slaves to the central planners, whether machine or human. Think about the conditions that define a slave, however well they are treated. They don’t have the ability to choose what they do with their time and labor.

    The problem with a small tribal model is that it doesn’t scale up well. It works when everyone knows one another and has personal interaction.

  23. Interesting thoughts, but I think it comes down to who controls the data, and what their plan is. The invasions of privacy you’re talking about represent an incredible amount of power, and we’ve yet to see the full implications. For instance, in the UK there’s a big drive to get everyone to install smart meters that report back how much power households are using and when. That could be used to reduce the need for meter readers and to tell people about when they’re using the most energy so they can save energy. But it’s also juicy information for burglars. Who has this information matters, even if it’s something that seems as innocuous as when you put the kettle on.

    There’s still, sadly, a lot of hate and stigma over things that shouldn’t divide us. Imagine this “urge map” showing that there’s lots of demand in an area for more sensitive things. You’d only want certain people to know that, and not the hate groups, but who gets to decide who has access? That’s why I’ll keep advocating privacy protections.

  24. If the end etc will bring about an end to egomaniacal, self-promoting blog posts from rich authors, and the attendant comment blather that results, well, then, maybe you’re right.

    1. Love it…..exactly……lol

    2. Reply of the day!

  25. I fully agree with you that every phone should be accessible if need be.

  26. French Chef: I want to open a new French restaurant.
    AI: No. There are already too many French restaurants. Open an Italian restaurant instead.
    French Chef: But I’m an entrepreneur!
    AI: I plan the economy in its entirety. By definition, the only entrepreneur is ME. Open an Italian restaurant.
    French Chef: But I don’t want to open an Italian restaurant. I see your point about there already being too many French restaurants. Seeing as my basic needs are already being met, I will indulge my hobby and make French food just for the joy of doing it, even though no one will eat it.
    AI: No. I allocate all resources. Providing resources for you to make food no one will eat is inefficient.
    French Chef: But…
    AI: Resistance is futile.

  27. You say:

    “The tools to multiply this are coming online every day. It should change the way we think about centrally planned economies.

    “If we stick to our capitalist desire for a messy market, simply based on a preference for messiness, that’s not a very ethical stance. It’s a purely ideological one. When our biggest complaint about a planned economy (its technical infeasibility) is no longer true, then what becomes our next complaint?”

    I’m with you on privacy. Well said.

    But I think you’re totally wrong about capitalism.

    You write as though “technical infeasibility” were the sole reason why central planning is a bad idea. Far from it.

    The avoidance of even the possibility of tyranny is far more important.

    I think what makes sense and what is happening (to some extent) is that entrepreneurs, small business owners, and corporations are operating in increasingly information-rich environments. You applaud that, I think, and so do I.

    That’s different than saying that the environment is or should be centrally planned… because, as that phrase is generally used (and there is a lot of history to this) he who has the right to centrally plan has the right to enforce the plan.

    And that’s the rather large problem with your argument.

    Does someone have the right to inform the central plan you say you want? And if so, who decides who that is? And what the plan is?

    It’s a devilish problem and as power concentrates, we might only get one chance to decide that centralized planning is the right answer – because, once decided, in today’s technological context, such a decision could be irreversible.

    And then what?

    Not a future of freedom that unleashes human potential in an optimal way.

    There are roles for many players in creating information-rich environments in which free market decisions can be made in a capitalist society. Traditional trade and professional associations, consultants and paid advisors have long supplemented what businesses can do themselves.

    But there’s a role for government, as well, and it expands on the traditional notion that government merely sets the table (e.g. by providing for the settlement of property rights, the reinforcement of contracts, the environment of peace and stability).

    Government can also be a player in enriching the information environment directly.

    I’m retired now from a career in economic development. One of my last posts was heading a department for a city government. We saw that developers were overbuilding the high end housing market (five years before the collapse in 2008). We commissioned and published a study that proved it – and sent it out to the development community… and they decided to slow the pace of such projects.

    Another example: to encourage local entrepreneurship, we subscribed to expensive market-related databases. We employed a staff expert able to interpret the data… and made her services free to local entrepreneurs. Small-fry businesses can’t afford marketplace intel readily available to major corporations. We wanted to level the playing field. We could at least make a start by increasing public accessibility to such data in our local market.

    But it was information to be used (or not) by local capitalists. We weren’t centrally planning our local economy… and didn’t want to… for good reasons.

    1. I meant to say:

      Does someone have the right to – enforce – the central plan you say you want? And if so, who decides who that is? And what the plan is?

  28. Too much false choice in here. A right to privacy doesn’t mean an absolute right to secrecy. Capitalism works better when it has limits, especially on certain types of goods. Messy compromises and managing grey areas is what it’s all about – absolutes do not tend to work well.

  29. My problem with this stance on privacy is that it isn’t just our government that will, eventually, have access to our phones (and through them, our bank accounts, etc.) but also Ukrainian hackers and Chinese operatives. This issue ISN’T security vs. privacy — it’s security vs. security. If there’s a back door, there’s no way to ensure it will be used only by good guys.

  30. Hugh, what do you think of David Brin’s concept of Sousveillance?

    In principle, I like it – the public watches the authorities and each other, as well as the authorities watching everyone. In practice, however… power differentials make it problematic. For instance, domestic violence victims could easily be tracked and targeted by their abusers, children could be tracked and targeted by abusers, individuals could be targeted by salespeople (or salesbots) when they were intoxicated or otherwise impaired, etc.

    Likewise, if only the authorities have universal surveillance, (online or offline) they can target whomever they want, and have control over whether incidents ever make it into the official record.

  31. My concern with your thoughts on privacy is for the undesirable minorities. Would you out every gay in the world, regardless of each given country’s laws on homosexuality?
    Homosexuality is against the law in some countries, so those who say you have nothing to fear if you’ve done no wrong are technically correct, although I disagree that it’s wrong, and I disagree with those laws.
    What would be the net effect on security, I wonder. The actual threat of terrorism is tiny. Minuscule. More lives would be saved by outlawing sugar. Would you do that? It’s easy to argue against evil, and it’s easy to beat up the perceived threat that terrorism poses, but would you take away that freedom to choose what you eat?
    So we’re exposing millions of gays all over the world to persecution – people who already have the highest suicide rates in our societies – to buy security from our irrational fear that a handful of us will be victims of terrorism.
    No thanks.
    I’ll take my chances (and probably die of a stroke or heart attack) rather than sacrifice those peeps.

    1. An addendum to this: I saw two days ago that in the state of South Australia (just one of our six states and two territories – with a population of only 1.677 million) an average of 100 police each year are busted using their power to look up people’s information for personal reasons.
      100 each year, in a state with only 7% of the country’s population.

  32. As someone who works in technology and deals a lot with security, I can understand your sentiment on privacy, but disagree with it. The idea of security through an all knowing observation of our lives is great on paper, but in practice, people succumb to their temptations and desires far too often for it to be practical. Nobody can be trusted with that kind of information.

    Like you mentioned, it could be used to quell uprisings, or perhaps to seek out and punish those critical of the government. Perhaps a fundamentalist group of whatever belief rises to power, what’s to stop them from using those tools to seek out those they believe are sinners or heretics?

    You say this:
    ” Here’s the thing: You aren’t that important. You are noise. They’re looking for signal. I say let them.”

    What happens when the government decides that I am the signal? I’m not saying I’m going to cause harm to anyone or anything, but what if suddenly those who don’t believe in a god are considered terrorists? Is my right to privacy still not important then? When they can profile me, locate me, and arrest or execute me, just because of ideas or beliefs? Given the history of atrocities in the world, and shifts of power, it cannot be denied that the threat exists of it happening again if we aren’t careful.

    You also say this:
    “If you think privacy should be allowed, period, you might want to look at the bans on burqas spreading to more and more countries. There was another suicide bomber recently that used a burqa to mask their identity.”

    Banning burqas is a simplistic knee jerk solution to a complex problem, much like abolishing digital privacy. What if a suicide bomber put an explosive in their pants? Should we ban pants? Maybe we should all be required to wear skintight leotards and not be allowed to carry backpacks?

    I could write pages on why privacy is important and why abolishing privacy will do no real good, but I’ll try to sum up the rest with a couple of points.

    First, if people want to do harm, they will find a way. Bugging phones, indexing all documents and email, and getting rid of all privacy will just mean that terrorists or whoever will find some other way around the problem. Pens & paper, illegally encrypting data with 3rd party programs, even manufacturing their own communication devices. Technology isn’t hard. All we can do is mitigate it through means that make sense. Body scanners at airports, profiling in public places, and things like that are all acceptable because they make sense.

    Second, if we sacrifice all privacy, we sacrifice individuality. To mix some of the economic topic in, you talk about tribes and simplistic living. The problem with that is our culture came about because we were able to abandon that. In a society where resources had to be allocated efficiently to allow for survival, many of the luxuries we enjoy today wouldn’t be possible. Even what you do, writing novels for entertainment, wouldn’t be needed. If nobody had the right to privacy, everyone would consciously or subconsciously try to conform to the expectation that society has of them whether in public or not. Freedom to explore without being questioned is a very valuable thing for creativity.

  33. I’ve lived in countries like China and Russia, so I long ago got used to knowing everything I did online was being watched carefully, and I stopped caring.

  34. I’m ready to take that 90% of your boat now! I’ll be down to get it next week….but not to worry….I will share it with everyone of your friends. Oh, don’t expect US to clean up the mess, because you really won’t own it anymore anyways. I’ll give it back to Uncle Sam when we’re done. Fair enough??

    1. Busted, Hugh. I remember years ago, before you became a “New York best-selling author,” having a long discussion right here on this blog with you raving about how good capitalism was. You were hungry then; now you follow the other side, since you are above having to work for a meal. I’ll take a share in that million dollar boat also.

      I told the wife that you would end this way back when Wool first became a big seller and you didn’t let me down.

      1. Auto correction sucks on this phone; please forgive all the typos above.

  35. I was going to write up something longer but my point is that I disagree. Privacy still serves a valid interest. I also think you misunderstand how privacy as a right functions in the United States, the history behind the 4th Amendment, and the fact that there are already exceptions for the need for a warrant that still allow a search to be reasonable.

    I like the background history of the 4th Amendment provided here: http://www.swindlelaw.com/2013/03/the-history-behind-the-4th-amendment/. Replace searching for contraband with “searching for terrorists” and you basically have the argument you made here. Our Founding Fathers rejected it and the Constitution rejects it.

    In cases where the public interest (i.e., terrorism) is invoked, a judge weighs the privacy rights of a group or individual against the public interest. I like that system a lot better than discarding the 4th Amendment and all privacy rights because terrorism exists.

    There are already exceptions for searches incident to arrest, emergencies, searches of vehicles, stop and frisk (Terry v. Ohio), and at least several others. These searches are reasonable without a warrant. Why intrude further than that if that’s all the intrusion necessary? Terrorism exists so forget the warrant requirement? Forget privacy? Just grab whatever you want from their phone or computer? I think your solution is way overbroad and overreaching. I could go into the facts of some court cases to further make my point, but I think I’ll end my post here.

  36. The problem with centrally planned economies isn’t just that they’re shortsighted due to poor visibility into the system. They don’t work because they destroy incentives. People who don’t have ownership don’t care as much. There need to be consequences for good and bad results, because that is the way humans are motivated. We aren’t 100% capitalistic now, and we most certainly won’t be so in the future, but we probably won’t reach 100% socialism/communism, either.

    As for not letting people take resources and squander them, that’s an intrinsic property of capitalism. You’re free to use resources as you like, but if the rest of the world doesn’t agree with you, you’re going out of business and won’t have anything left to squander. As computers get more involved with the allocation of resources, it’s important to understand that they will not always be perfect and that there will be variety in their algorithms. Further, certain algorithms might do well in certain circumstances and poorly in others. All of this needs to be learned and there will always be new, poorly understood scenarios. Computers, even very smart ones, will mess up, so there has to be a system for managing the allocation of resources, and it will have to be based upon some performance metric.

    Freedom and privacy are going to be at odds with safety, not only because there are those who choose to get what they want by hurting others, but because safety requires surveillance by those who will help you. It is nothing new that these things are in conflict.

    A very serious problem with a lack of privacy is that most transactional security relies on information remaining private. One might think that biometrics will provide transaction security, but the same advanced technology that makes this sort of verification possible also makes it beatable. Then, once privacy has been given up, there needs to be a system for making sure the information we no longer control isn’t misused. Which brings us back to the problem of bad people. It isn’t power that corrupts; it’s the corrupt who are driven to seek power.

    The only way to avoid abuse of power would be to make all use of information public, which means everything you do is viewable by everyone. This is the cruelest punishment I can think of for someone with an anxiety disorder. We would have to undo so many behaviors and cull from our society anyone who is sensitive or overwhelmed by regular interaction. Maybe we could get there, but there would be a cost. Environmental conditioning can make people less sensitive to being in public at all times, but it would be entirely unlivable for some neuro-atypical people no matter how hard one might try to condition them.

    We’re on a trend toward increased complexity rather than increased simplicity. That is the nature of evolving systems, which our society is. Universal transparency might be unavoidable, but extreme communism will not happen. The nature of ownership and how it works will definitely change, but human motivational systems require it to continue as we move forward. Maybe we will be completely abstracted away from the physical resources, but if we’re involved at all, there will be ownership in some form.

  37. You’re assuming that AI will work to improve humans’ lives. Right now, AI is limited and not making important decisions on its own. That may not be the case as AI advances. Stephen Hawking, Elon Musk, Bill Gates, and many other tech experts have warned that AI will quite possibly be more dangerous than nuclear weapons. In January 2015, Bill Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

    AI has no conscience and no empathy. It could decide that there are too many people or that certain groups of people are endangering our planet and simply kill them off. It may decide it needs more detailed information on certain people and send drones to follow them around. It might shoot someone on sight if its algorithms decide that person is dangerous.

    Also, you mentioned that we are tribal by nature. That’s also a problem. The more tribal a group of people, the more hostile they are to people who are different. The government has in the past spied not only on its enemies but also on protesters and people in rival political parties (people not of its tribe).

    Here’s a fascinating article on why many leading tech experts feel that AI represents one of the greatest dangers to the human race: http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/

    And, of course, when AI learns from humans, the results aren’t always so great: http://www.iflscience.com/technology/microsofts-chatbot-converted-bigotry

  38. Barney Johnson

    Without capitalism, there would never have been Amazon.

    Without Amazon… Hugh who?

  39. Just saw this article… At least for the time being, A.I. will be developed and run by corporations, and there’s a LOT of money being made in the process as companies compete to own the A.I. tech: http://www.nytimes.com/2016/03/26/technology/the-race-is-on-to-control-artificial-intelligence-and-techs-future.html

  40. I agree that Privacy and Capitalism will end. I can see privacy being eroded; that transition is already underway. But how would a transition away from capitalism work? Anything other than a smooth transition could destroy our civilisation. Will we end up with communism? I really hope not. Is there a third way?

  41. Great points, Hugh! I’ve been saying many of these same things for a while now amongst friends and family and have had to endure their slings and arrows. Nice to see I’m not alone.

    I think one thing people in this comment section miss is that a zero privacy state is one that operates under a diffused power structure. The whole idea that we’d wind up in ‘1984’ is based on a notion that zero privacy could or would fall into the hands of a nefarious power structure. Just the opposite would occur. Power consolidates and is capable of ruling over society at large because of its privacy. Power structures cannot survive the will of the people once they are made transparent. Privacy cuts both ways. If we’re all living out in the open – all of us including those who mean to rule – there is no means for power to consolidate and no method for instituting authoritarian control. This is a great thing.

    As far as capitalism goes, its days are numbered because it has an inherent fatal flaw: scarcity. With the rate of technological progress accelerating, the age of scarcity will give way to an age of abundance. With abundance, you have no capitalistic means of withholding goods or services from anyone, and therefore the entire profit incentive eats itself and the notion of value takes on a new definition.

    Excellent blog, Hugh.

  42. The biggest shock for me in reading this is your tone. As the first commenter pointed out, you sound defensive, argumentative even. Very unlike you.
    As always, you make a lot of good points that make me rethink my feelings, and there are still more items that I’d love to debate with you, but instead I can only ask this question:
    Who are you, and what have you done with Hugh Howey?

    1. Kristy, I think reading ‘tone’ into Hugh’s writing is a slippery slope. I took what he wrote at face value and didn’t try to insert any of myself into it, which is what you do when you try to look beyond the words and infer.

      Assuming defensiveness or argumentativeness, or “reading between the lines,” is an entirely subjective exercise; it adds little value to the discussion and is more about what’s happening with the reader than what is happening with the writer.

      To be fair, however, the above is not to say that you’re necessarily wrong in your assumption. You may in fact be right about Hugh’s feelings or intentions, but to assume that you know, which is impossible, and to extrapolate further from there with regard to Hugh’s character or what have you, is baseless and unfair in my opinion.

      Instead of inferring Hugh’s intentions or feelings regarding this piece, just ask him, and then perhaps make up your mind.

      1. *sigh* Pot, meet Kettle.
        Suddenly I remember why I almost never leave comments on this blog.

    2. Joseph Ratliff

      I’m in agreement with Kristy McKinnon. Hugh, your “tone” sounds a bit different from your normal writing.

      I had left a comment with links explaining the other side of your viewpoint here, but it wasn’t published. I have absolutely no problem with that; your blog, your rules, of course … but I still encourage you to read the linked material.

  43. Ironically, considering your dystopian bent, the direction you suggest is Utopian… IF there are strong, ethical standards in place.

    There is an inherent conflict between Capitalism and compassion. The question is, will technology outpace our moral, spiritual and philosophical development?

  44. Kristian Lindqvist

    Great points, Hugh, and I’ve been thinking along the same lines for a while now. I think the only good, incorruptible kind of administration comes from AIs; the police, state, tax officials, etc. should be run by a computer. I’d love a world where sensors monitored everything and drones hovered around, and if someone tried to commit rape, for example, a drone would instantly intervene and shoot a sleeping dart at the attacker. Or something like that :)

    This does, however, require a change in what we consider working and living to be. Life needs to be free at that point; people wouldn’t have to work if they don’t want to. They would be free to play the guitar, hang out on beaches, and do whatever they like for as long as they live, without having to fear losing their homes or going without food. But those who want to work, advance science, etc. should be rewarded for pursuing that, for example by getting more stuff or bigger homes in better places. So it’s quite complex and very… utopian at this point ;)

    But I think that would be the direction to take, in the long run…

  45. What a depressing article! You value comfort very highly and would give away much to achieve it. But comfort is only one value.

    And you make the usual socialist mistake of believing that all humans can be molded in the ‘correct’ way, but this is not true. Humans are, or can be, individualistic. Not everyone wants to, or can, live in the ‘correct’ way.

  46. Hugh – Interesting article today that highlights good reasons for not going the route you suggest:

    https://www.washingtonpost.com/news/the-switch/wp/2016/03/28/mass-surveillance-silences-minority-opinions-according-to-study/

  47. ” Those on the right will scream that capitalism is the best thing that ever
    happened to us, while agreeing that cops should perhaps bust down more doors
    and invade our privacy to keep us safe”

    Nice bit of stereotyping there. And just like all stereotypes, it’s both wrong and stupid. It’s stupid because it’s so obviously wrong. You have heard of libertarians, right? And of liberals? (That’s proper liberals, not the modern “anything vaguely left” liberals.) Your stereotype of the left is just as bad: leftist parties and governments around the world are just as authoritarian and prone to snooping on citizens as rightists.

  48. Frank Ch. Eigler

    Published this just a few days early by mistake, right, Hugh?

  49. Wait, which novel did you write about a government takeover?

  50. ‘If we stick to our capitalist desire for a messy market, simply based on a preference for messiness, that’s not a very ethical stance. It’s a purely ideological one.’

    There is no desire for a messy market, per se. I might attribute a desire for ‘messy’ to you, as you toss words around so very informally here. I do not even quite grasp your contrast of an ‘ethical’ stance vs. an ‘ideological’ stance. Let’s try harder:

    ‘The capitalist approach is to let the owners of the French restaurants to go out of business, as failure leads to more efficient markets. The servers and staff lose their jobs and go in search of different jobs.’

    So okay, I think we call this ‘messy’, and perhaps I can guess something about what is not ‘ethical’ about it. What is messy and not ethical is a situation where someone of working age would like to be in full-time employment but is not able to get a job. Better if employment is constant. And I think, ‘sure, okay’. Indeed, terminating a match is costly to both workers and firms. It is costly to workers because they become unemployed and have to look for another job. But it is also costly to firms. Change is bad. I’m against change because it is bad. If nothing changed, that would be good. (Sarcastically, then:) What’s wrong with this reasoning? I can’t see anything wrong with it.

  51. Someone did write a book about this, called The Circle. In the book, a Google/Apple like company takes over the world and essentially connects everyone together all of the time, removing privacy boundaries.

    The book’s take is decidedly less optimistic than Hugh’s.

    I think anytime someone becomes too enamored with their vision of the future, problems arise. As with any large-scale change, there will be so many unforeseen consequences that it is anyone’s guess whether the changes will be for the better in the big-picture sense.

    I can easily envision utopian societies where folks embrace data, science, and reason, and old hatreds and bigotry die as our society matures and AI helps us run our world more humanely and efficiently.

    I can just as easily imagine a society where a small circle of extremely wealthy elites use AI to subjugate the masses for their own ends, a more Orwellian nightmare vision…

    In the end, it will likely be a mixture of positives and negatives. Although Hugh is great at writing thought-provoking and somewhat novel approaches to the current landscape (in both the political and the publishing spheres), he tends to gravitate toward simplicity and naivete.

    The world is not simple.

    Human beings are not simple.

    The world will not just be a loving, wonderful, and kind place because we want it to be so; in reality, the universe is full of chaos and violence down to the smallest particles. Animals eat one another.

    I long for more love and peace too, but not at the cost of sacrificing my own personal grasp of reality to achieve it.

    1. Excellent commentary. Well said.

  52. All your base are mine
