How Far Can the Surveillance Economy Go?

By LESLIE K. JOHN

UNINFORMED CONSENT

Companies want access to more and more of your personal data — from where you are to what’s in your DNA. Can they unlock its value without triggering a privacy backlash?


Three years ago the satirical website The Onion ran an article with the headline “Woman Stalked Across 8 Websites by Obsessed Shoe Advertisement.” Everywhere she went online, this fictional consumer saw the same ad. “The creepiest part,” she says in the story, “is that it even seems to know my shoe size.” The piece poked fun at an increasingly common — if clumsy — digital marketing technique. But today its gentle humor seems almost quaint. Technology has advanced far beyond the browser cookies and retargeting that allow ads to follow us around the internet. Smartphones now track our physical location and proximity to other people — and, as researchers recently discovered, can do so even when we turn off location services. We can disable tracking in our web browsers, but our digital fingerprints can still be connected across devices, allowing our identities to be sleuthed out. Home assistants like Alexa listen to our conversations and, when activated, record what we’re saying. A growing range of everyday things — from Barbie dolls to medical devices — connects to the internet and transmits information about our movements, our behavior, our preferences, and even our health. A dominant web business model today is to amass as much data on individuals as possible and then use it or sell it — to target or persuade, reward or penalize. The internet has become a surveillance economy.

What’s more, the rise of data science has made the information collected far more powerful, allowing companies to build remarkably detailed profiles of individuals. Machine learning and artificial intelligence can make eerily accurate predictions about people from seemingly unrelated data. Companies can use data analysis to deduce someone’s political affiliation or sexuality, or even who has had a one-night stand. As new technologies such as facial recognition software and home DNA testing are added to the tool kit, the surveillance done by businesses may soon surpass that of the 20th century’s most invasive security states.


The obvious question is, How could consumers let this happen? As a behavioral scientist, I study how people sometimes act against their own interests. One issue is that “informed consent” — the principle companies invoke as permission to operate in this economy — is something of a charade. Most consumers are unaware of the personal information they share online, unable (quite understandably) to determine the true cost of sharing it, or both.

It’s true that consumers do gain some benefits from all the data gathering, such as more-meaningful advertising and better customer service, pricing, and potentially even access to credit. But companies urgently need to find a way to balance those benefits with privacy protection. Consumer advocates are sounding alarms about invasive digital practices. A public outcry ensues each time a scandal hits the headlines, whether it involves Equifax’s loss of sensitive personal information about nearly 150 million people or Russian operatives’ use of social media to manipulate American voters. Internet privacy experts who not too long ago were viewed as cranks on the fringe now testify before Congress and headline conferences. In Europe major legislation to protect user privacy has already passed. We’re starting to see signs of a widespread “techlash,” which could have profound implications for firms that use consumers’ data. It’s probably no coincidence that Facebook saw its market value plummet roughly 20% after it publicly suggested it might scale back some of its data collection.


At the same time, consumers don’t reward companies for offering better privacy protection. Privacy-enhancing technologies have not been widely adopted, and people are generally unwilling to pay for them; those who are willing will pay only modest amounts. Though some might take this as evidence that people simply don’t care about privacy, I’ve come to a different conclusion: People do care, but as I’ll explain, several factors impede their ability to make wise choices.

If both sides continue on these diverging trajectories, the surveillance economy may be headed for a market failure. The good news is that policymakers can help. The first step is to understand how people make decisions about the privacy of their personal information and how they can be induced to overshare.

HOW CONSUMERS GOT IN SO DEEP

Let’s be frank: People are bad at making decisions about their private data. They misunderstand both costs and benefits. Moreover, natural human biases interfere with their judgment. And whether by design or accident, major platform companies and data aggregators have structured their products and services to exploit those biases, often in subtle ways.

Impatience. People tend to overvalue immediate costs and benefits and to underweight those that will occur in the future. They want $9 today rather than $10 tomorrow. On the internet this tendency manifests itself as a willingness to reveal personal information for trivial rewards. Free quizzes and surveys are prime examples. Often administered by third parties, they are a data-security nightmare, yet many people can’t resist them. For instance, on the popular RealAge website, people divulge a large amount of sensitive health information in exchange for the immediate “benefit” of learning whether their “biological age” is older or younger than their calendar age. Consumers gain zero financial reward for such disclosures. They may be vaguely aware of the potential costs of providing the information (at the extreme, higher insurance premiums down the road), but because those downsides are vague and far in the future, they are easily discounted in favor of a few minutes of fun.

Impatience also prevents us from adopting privacy controls. In one experiment, people setting up a digital wallet were offered a service that would secure (that is, encrypt) their purchase transaction data. Adding the service took a few extra steps, but only a quarter of participants successfully completed them. The vast majority were unwilling to accept a trivial, onetime inconvenience in order to protect their data from abuse down the road.

Data “transactions” are often structured so that the benefits of disclosure are immediate, tangible, and attractive, while the costs are delayed and more amorphous — and in these situations our impatience tilts us toward disclosure. Mobile credit-card readers, for instance, email you receipts and make transactions fast and paperless. But the costs of letting companies capture your email address and other personal information come later. Sensitive data, such as your name, demographics, and location, is amassed and shared or sold, and in all likelihood you are eventually barraged with targeted marketing. Although some of those ads may be welcome, others may be annoying or intrusive. And some fear that in the future consumer data may be used in even more-impactful ways, such as in calculating credit scores — possibly leading to discriminatory “digital redlining.”
