Privacy and capitalism both need to come to an end. Both will come to an end, if we’re lucky. Which should piss off just about everyone reading this.
Those on the right will scream that capitalism is the best thing that ever happened to us, while agreeing that cops should perhaps bust down more doors and invade our privacy to keep us safe. Those on the left will shout that our privacy is a sacred right, but they will perhaps agree that capitalism causes harm that could be mitigated by a more centrally planned economy.
I’m going to argue that privacy and capitalism have both arisen through the increased size and complexity of our tribes, and that while capitalism served a purpose, and privacy can be appealing, both are going to come to a much-needed end. And I may be alone in celebrating the ruination of both.
Privacy got a Snowdenesque boost recently courtesy of Uncle Sam and Apple. Maybe it’s the circles I travel in, but everyone I know seems to agree that Apple shouldn’t unlock a phone used by terrorists because of a slippery slope of privacy invasion. It looked like the courts were going to force Apple’s hand, but now an Israeli firm is going to do the hacking required. At least Apple made its objections very public. Very public. As in: Great PR public. But I find myself, as a libertarian, or a liberal, or whatever the hell I am, strenuously disagreeing with Apple’s (and Facebook’s and Google’s and all my friends’) stances. I think I’m pretty much alone as a libertarian in this disagreement.
Because I think the government should have access. Not just to this one phone, but all the phones. And yeah, I wrote a novel about a bunch of evil dudes harvesting data on an entire populace in order to tamp down uprisings. And I’m still not sure who was right in that novel. I don’t think it’s all that clear-cut. I personally lean more toward Hobbes than Rousseau in my view of human nature. I have a feeling I’m going to upset all the privacy peeps to the left, but don’t worry. My next move will be to enrage my capitalist friends on the right.
We currently possess the software and hardware capability to tease terrorist activity out of a bunch of noise. Machine learning and neural networks have made computers better at spotting trends than any group of humans (similar to how computers are better at spotting tumors than any group of expert oncologists). This is the same sort of learning and pattern recognition that allowed IBM’s Watson to trounce my friend Ken Jennings at Jeopardy! (sorry to keep bringing this up, Ken). Computers are way smart. Pretty soon, they’ll be driving us to and from work and saving hundreds of thousands of lives a year, far more than are lost in terrorist attacks. But they’ll also be able to save us from most of those terrorist attacks as well. And I believe all our data should go to the machines and the people who oversee those machines.
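The core idea here, signal surfacing out of noise, can be illustrated with a deliberately tiny sketch. This is not what any agency actually runs; it is a toy outlier detector with invented data, flagging only the values that sit far above the ordinary background hum:

```python
import statistics

def flag_anomalies(counts, threshold=3.0):
    """Flag indices whose value sits more than `threshold` standard
    deviations above the mean.

    A toy stand-in for the pattern-recognition idea in the text:
    almost everything is ordinary background noise; only extreme
    outliers surface for a human overseer to review.
    """
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly uniform data has no outliers
    return [i for i, c in enumerate(counts) if (c - mean) / stdev > threshold]

# Ninety-nine ordinary users and one extreme outlier (index 99).
activity = [10] * 50 + [12] * 49 + [500]
print(flag_anomalies(activity))  # → [99]
```

The point the essay makes in miniature: the machine never "looks at" the ninety-nine ordinary users at all; they are statistically invisible.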
Blasphemy, right? Yeah. I’m sorry, but I just don’t see the need for absolute privacy for the sake of absolute privacy. I willingly step into the millimeter-wave body scanner at the airport and let a computer, an algorithm, and a human overseer scan my body for any hidden rigid objects. I submit myself to this so that no box cutter or gun or knife gets on a plane. On several occasions, I have even allowed those strangers to grope me when the machines spat out a confused result. I am happy to relinquish my privacy in exchange for safety. And I’ll gladly do the same digitally.
You know why? Because the machine isn’t looking for me, and the guy seeing scans of my dangling testicles sees them all day long and couldn’t care less. Whatever it is people are terrified of the government discovering about them, it isn’t what the government cares about. And none of us are as important or unique as we like to think we are. Our digital peccadilloes are like my dangling testicles: there are too many of them on display for anyone watching to give a shit.
Imagine for a moment taking all modern tools of communication away from terrorists: Search engines. Social media platforms. Texting. The entire internet, basically. Nothing would hamper them more. All it would take is a shared abdication of our privacy — the digital version of driving around with our license plates exposed, or not being able to walk around with masks on. These are decisions we make and agree to all the time, but for some reason we can’t seem to have a discussion about our online privacy. Why is that? What the hell is everyone doing online that is so self-important or shameful that it trumps our ability to hamper the truly evil among us?
This conversation will become necessary as AI improves. We already live in a world where Google knows what we’re looking for on the web before we finish our sentences. Retailers like Target know a woman is pregnant before the rest of her household does (seriously). To have the ability to save lives and prevent violence, but to not do so, requires some valid objections. Privacy for the sake of privacy isn’t one.
It makes me wonder what privacy advocates are doing in their spare time. Looking at porn? Ordering dope? Here’s the thing: You aren’t that important. You are noise. They’re looking for signal. I say let them.
If you think privacy should be allowed, period, you might want to look at the bans on burqas spreading to more and more countries. A suicide bomber recently used a burqa to mask their identity. European and African countries are finding that privacy in public spaces can be dangerous. Increasingly, we are living in one giant public space. I find it strange that people think walls will keep us safer. I believe the opposite is true. Fewer walls and more transparency. More of us raising our palms in greeting, showing they are empty of violence, holding instead only some mildly embarrassing vice, like our love of My Little Ponies.
(An aside: Beyond safety concerns, I’ve never understood why people hate advertisers being made aware of what people are shopping for. Do people like ads that don’t appeal to them? Why? I wish I only saw ads for things I cared about.)
Okay, now that my fellow liberals are seeing red, let’s take this AI and big-data insanity one step further: What if Karl Marx was right, just a century too early? Common wisdom states that capitalism is necessary because a planned economy is impossible. Impossible, because no one person can manage something so complex as the interaction of billions of people. But what if we put the interaction of billions of people in direct charge through the parsing of big data? That is: What if no one person is smart enough to manage the economy, but one day a single machine is?
We may be fifty or a hundred years away, but we will eventually arrive at a place where a network of computers can run our economy more efficiently than the bumbling of billions of people. Sure, the system we have right now is the best among a raft of really horrible alternatives, but it won’t be for long. Soon, a neural network will be able to set prices, plan production, allocate resources, route transportation, and so much more. If you don’t think so, you haven’t been paying attention. Neural networks are already doing most of these things. Retailers like Amazon are relying more and more on algorithms to make major business decisions. The more we put Big Data in charge, and remove the emotion of human agents, the more efficiently our businesses and economies run. The logical conclusion to this is a world run by an AI, whether we call that thing sentient or not.
As robotics improve, we’ll eventually head toward a work-free economy where all the basic needs are met and provided for free, and humans spend their time engaged in voluntary tasks and artistic pursuits (or consuming entertainment). This is communism, and it should be the goal of capitalism to get there. To each what they need and from each what they can provide.
Let’s take a small example to highlight how this might work: Soon, an AI will be able to look at OpenTable reservations, Yelp searches, Foursquare and Facebook check-ins, Instagram photos, and the amount of time people spend milling about on the wait list outside a local Italian restaurant, and see that there’s far more demand for Italian food in a neighborhood than is being met. At the same time, it will be able to look at a corresponding lack of activity around several French restaurants, and the AI will know ahead of time that the French restaurants are going to go out of business and that another Italian restaurant would better serve the community, decreasing wait times and other inefficiencies.
The capitalist approach is to let the owners of the French restaurants go out of business, as failure leads to more efficient markets. The servers and staff lose their jobs and go in search of different jobs. The space sits empty for a month or three. An enterprising individual eventually sees the demand on the street for more Italian food (or more likely, the existing place expands), and eventually, slowly, finally, the people get what they want. Probably in time for them to change their tastes and go in search of some French food.
What if instead, there was a website that showed aggregate demand for goods and services? What if all that data and those algorithms were made transparent for every entrepreneur to see? What if a single entity, like a Google or an Amazon, broadcast our hidden desires in the form of an urge map? Like a word cloud on a website, these would be lists of desirables on maps of various scales. Yes, planned economies have failed in the past, because the amount of knowledge needed could not be aggregated into a single room. But this won’t always be the case. It probably already isn’t. And what then?
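The restaurant example above can be sketched in a few lines. Everything here is invented for illustration — the signal sources, the numbers, and the scoring weights are all hypothetical — but it shows the shape of the aggregation an urge map would do:

```python
from collections import defaultdict

# Hypothetical demand signals for one neighborhood, as (source, cuisine,
# magnitude) triples. All values are made up for illustration.
signals = [
    ("opentable_reservations", "italian", 120),
    ("yelp_searches",          "italian", 340),
    ("walkin_wait_minutes",    "italian", 45),
    ("opentable_reservations", "french",  15),
    ("yelp_searches",          "french",  40),
    ("walkin_wait_minutes",    "french",  2),
]

# Crude, hand-picked weights for how strongly each signal suggests
# unmet demand. People physically waiting outside is the strongest.
source_weight = {
    "opentable_reservations": 1.0,
    "yelp_searches": 0.5,
    "walkin_wait_minutes": 3.0,
}

def urge_map(signals):
    """Aggregate weighted signals into one demand score per cuisine,
    sorted from most to least wanted."""
    scores = defaultdict(float)
    for source, cuisine, value in signals:
        scores[cuisine] += source_weight[source] * value
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))

print(urge_map(signals))  # → {'italian': 425.0, 'french': 41.0}
```

A real system would weight and normalize these signals with a trained model rather than hand-picked constants, but the output is the same kind of thing: a ranked, public list of what a neighborhood wants and isn’t getting.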
The owners of the French restaurants might be notified of the demand for more Italian food. One of them seizes on this and announces a change in their menu and decor. Another responds to an announced demand for more gourmet burger joints and micro-breweries. The last French place is now full of aggregated customers. The efficiency of the market is improved by better reporting, all pulled from big data. This is a move toward a planned economy, and it is already happening. It happens with on-demand fulfillment at retailers. It happens with shipping routing. Every time a big rig is rerouted around an accident ahead, an economic decision is moving from a human brain to an artificial one. The tools to multiply this are coming online every day. It should change the way we think about centrally planned economies.
If we stick to our capitalist desire for a messy market, simply based on a preference for messiness, that’s not a very ethical stance. It’s a purely ideological one. When our biggest complaint about a planned economy (its technical infeasibility) is no longer true, then what becomes our next complaint? The inhuman, robotic nature of the new planned economy? But the data involved is very human. This is just Adam Smith’s invisible hand becoming visible. Who will argue with that?
Plenty will, I’m sure. And they won’t be quite sure why they are arguing their stance. It’s just their stance and that of their friends.
Just as they’ll insist privacy is an absolute right, with no discussion possible. This ignores the tribal societies we came from, where families slept in the same rooms, walls had rips, tears, and cracks, and common spaces were much more common. People will argue that free markets, unencumbered by central planning, are the best. Even though we used to have central planning back when we lived in small tribes, and our brains could parse all the data required. “Today, we need to hunt. Today, we need to build more shelter. You, start a fire. You, gather wood.”
Our tribes have gotten too big for us to know how to allocate our resources. But soon, our electronic brains will be big enough to get back to what makes sense, which is to allocate sensibly and with a plan. The result will be much greater efficiency and far less strife and misery.
Our tribes have likewise gotten too big for adequate security. The evil among us might be in the minority, but it can now, more than ever, go unseen and unfettered. It is aided by each of us claiming the right to be invisible.
I have a lot of hope for big data and artificial intelligences. I think this combination could make the world small again. They are already helping us allocate resources more efficiently, by letting people rent out their spare rooms, or provide rides in their cars, or loan tools to their distant neighbors. Make no mistake, there are algorithms and big data behind all of the new features of the shared economy. This is capitalism trending toward communism. This is big data making us exposed to one another again. And maybe I’m alone in thinking that both are a damn good thing.
Here’s another way to consider how this natural transition to an AI-planned economy will take place: Imagine for a moment an AI that can weigh every factor going into an NFL game and can predict the outcome with 99.99% accuracy. If this were to occur, gambling on NFL games would effectively end. You would be crazy to bet against the AI-predicted outcome.
Now imagine the same computer has become similarly infallible at predicting economic outcomes. At this point, when we can query a computer to determine what good or service to produce and how much to charge for it, who will decide not to use this tool? And when we become that reliant on the AI’s suggestions, are we still participating in a free market? Or a centrally planned one?
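The arithmetic behind "you would be crazy to bet against it" is worth making explicit. A toy expected-value calculation for an even-money bet against a predictor of the accuracy the essay imagines (the stake and payout structure here are simplifying assumptions):

```python
def expected_value(accuracy, bet_with_ai=True):
    """Expected profit per unit staked on an even-money bet,
    given a predictor that is right with probability `accuracy`.
    Win: gain the stake. Lose: forfeit the stake."""
    p_win = accuracy if bet_with_ai else 1 - accuracy
    return p_win * 1 + (1 - p_win) * -1

p = 0.9999  # the essay's hypothetical 99.99% accurate AI
print(expected_value(p, bet_with_ai=True))   # ≈ +0.9998 per unit staked
print(expected_value(p, bet_with_ai=False))  # ≈ -0.9998 per unit staked
```

Betting against the oracle loses almost your entire stake in expectation, so everyone bets with it, the odds collapse, and the market in wagers evaporates. The same logic applies when the "bet" is opening a restaurant or setting a price.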
Now bone up on how many major decisions are already being made at large and small businesses by automated routines. The process has already begun. Or look at how the National Institutes of Health has used social media to pinpoint outbreaks and hunt for medical side effects. As these tools get better, and we become more reliant on them, and our ethical frameworks advance further and further, we will head toward a new economic system unlike any ever practiced before.