In recent years, a number of high-profile cases involving the violation of online customer privacy have raised public alarm. For example, Home Depot made headlines last year when a massive theft of its consumer credit and debit card database affected more than 56 million customers. And in 2012, Target was in the spotlight for a newsworthy privacy violation: the company’s data-driven algorithm had correctly identified a customer’s pregnancy, and proceeded to send the teenage girl coupons for baby gear—before her father knew about the pregnancy. Many other online businesses—including Facebook, Google and Amazon—have also been featured in the media regarding potential privacy concerns.
Why are we seeing such an increase in high-profile violations? We believe that two forces have fueled this trend.
First, companies now have greater incentives to collect—and share—data about their customers. Since a growing share of consumer activity—from information search to browsing to actual transactions—now occurs online, it is both easy and inexpensive to collect and build large consumer-behaviour datasets.
Having all this data at its fingertips enables a company to build rich customer profiles, which in turn enables it to narrowly target its advertising and marketing offers to specific customers. As the Target example highlights, almost every major retailer now has a ‘predictive analytics’ department that uses data to understand consumers’ shopping and personal habits. This access to information also allows companies to benefit from ‘network effects’ that generate positive word-of-mouth, thereby creating secondary markets. The bottom line is that the more information a company can get about you, the greater the revenue potential.
The second force behind the increase in privacy violations is that, from a consumer perspective, the risks of sharing our information online have increased exponentially—but this is not widely recognized. The ease with which companies can collect and share personal data—coupled with technological advancements that improve the ability to analyze and draw deep insights—increases the potential for misuse, including identity fraud, manipulation, and social and economic discrimination. For instance, marketers may use ethnicity information to keep attractive offers out of reach of the least-profitable and most-costly population segments; or, knowing that particular demographics are likely to use social media more effectively, businesses may reward them with better customer service and shorter wait times.
While the broader societal challenge is to safeguard consumer interests in a rapidly-evolving digital environment, we believe that a key element of this problem is individual—and lies in the millions of decisions consumers are making each and every day. In a recent report prepared by the University of Toronto’s Behavioural Economics in Action research hub (BEA@R), we set out to put a behavioural lens on the topic of online consumer privacy.
Behavioural Insights on Consumer Privacy
To make a fully-informed decision about what information she should share online on any given occasion, the fully-rational consumer needs to go through three important decision-making steps.
First, she needs to employ the appropriate mental model and think of ‘information sharing’ as a risky prospect—similar to the risk of contracting disease upon exposure to contaminated food, the risk of a side-effect after consuming medication or the risk of losing money when trading in risky assets.
Second, she needs to use available information to quantify the risk and identify the possible outcomes. The information needed to quantify the risk is usually limited, but disclosures and privacy policies do contain enough detail to allow her to identify potentially harmful outcomes. Finally, the consumer would need to integrate the ‘identified risk level’ with the ‘outcome information’ to arrive at a judgment as to whether the benefits of sharing her information exceed the potential harm.
In reality, consumer behaviour departs dramatically from this rational model. Following are four reasons for these inconsistencies.
1. Consumers are limited processors of information. While economists may suggest that providing more information to consumers will help them evaluate costs and benefits and make better decisions, evidence from the behavioural sciences indicates otherwise. Most consumers make decisions using ‘heuristics’, or mental shortcuts, rather than processing information fully, and as a result, they are often susceptible to sub-optimal decision making. Lengthy privacy disclosure statements on websites written in legal jargon are not the answer; we are all too familiar with—and perhaps guilty of—simply scrolling through detailed disclosures to click the ‘I agree’ button and proceed with a desired download or purchase.
2. Consumers are highly susceptible to cognitive laziness. As a result, they may not actively seek alternatives to address their privacy concerns. In study after study, consumers have been shown to prefer the default option, whatever it might be—and in the context of online privacy, the default option is most likely ‘being tracked’ and ‘surrendering data’. In a study of online social networks, Carnegie Mellon professor Alessandro Acquisti and researcher Ralph Gross found that the vast majority of users had not changed their default privacy settings. Furthermore, the ‘experience effect’ may lead consumers to develop an actual preference for the default option, after spending some time enjoying the convenience it offers.
3. It is difficult for consumers to anticipate the ways in which their information might be made vulnerable. A few years ago, the ability to control one’s refrigerator through a mobile device from another country using the Internet would have seemed like a science fiction fantasy; yet today, it is a reality. Similarly, it would have been impossible to anticipate that a social media website could recognize your face in photographs posted online, even when you are not tagged in them. Yet, Wired magazine recently reported that social media websites have now developed facial recognition technology that could soon make this a reality. This serves as a warning for those who sometimes post questionable—but untagged—photos of themselves.
4. Consumers are increasingly displaying impulsive behaviour online. To be fair to consumers, many websites are laden with features that seem designed to motivate such behavior, including special promotions, discounts and chances to win prizes in exchange for personal information. A classic example of an impulsivity-enhancing innovation is Amazon.com’s ‘One Click’ buying button. Given the immediate benefits such features present, consumers are likely to surrender their data—even private data, like health information—without considering the possible delayed consequences of their actions. Furthermore, the rapidly-evolving online landscape makes it even more difficult to assess risk levels and potential harm.
In recent research, Harvard professor Leslie John and Carnegie Mellon professors Acquisti and George Loewenstein showed that the value that consumers assign to privacy—and indeed, whether privacy is a relevant concern for them at all—is highly dependent upon the context in which the value is elicited. This context dependence was significantly greater for privacy than it was for other products and services, suggesting that consumers lack an internal ‘meter’ for valuing privacy.
Taken together, these findings provide insights as to how we can design interventions to help consumers better assess risks and make safer choices. Traditional interventions in public policy have taken the form of regulations or incentives. For example, a regulatory body can enforce a law banning the processing of customer data without consent, and impose a large penalty for companies that don’t comply; and the government can provide monetary incentives for companies to invest in more secure technology to safeguard customer data.
Another approach—and the one we recommend—is to recognize the limitations of the human mind and design behaviourally-informed solutions that can lead to better choices.
Many of the deficiencies described herein can be thought of as ‘cognitive gaps’, and a simple way to think about these gaps is to treat them like physical deficiencies—say, for instance, a broken ankle. In such a scenario, once corrective surgery has been performed, two sets of things need to be done: the first is to provide the handicapped individual with a mobility device and have her work with a physiotherapist to strengthen the injured area; and the second is to make sure the patient is in a safe environment to minimize further injuries.
These same concepts can be applied to the ‘cognitively-handicapped’ consumers who are over-sharing personal data online with minimal thinking. First, we can equip them to better assess the risks of sharing data online, and second, we can make the environment safer, so that they don’t get severely ‘hurt’ when they stumble. We have also included a third set of solutions that addresses the role of businesses from a Behavioural Economics perspective. Let’s review each solution set in turn.
1) Equip the consumer: The first step in equipping consumers to better assess risks is to sensitize them to the notion that information shared online constitutes a potential risk. Nutrition is similar to consumer privacy in that the benefits and consequences are intangible, hard to assess individually, and delayed in time. Yet the food industry has done a good job of sensitizing people to health risks via nutrition labeling: a standardized format facilitates comparison between food items, standardized language provides common terminology, and the brevity of the labels enables consumers to quickly find what they are looking for. A similarly simple, standardized privacy label could serve as a useful tool to help consumers easily assess and compare risks in the digital space. More generally, we propose educational programs on privacy literacy that include awareness campaigns, curricula in schools and colleges, and standardized disclosure of risks and adverse outcomes.
2) Pad the environment: Padding the environment simply refers to actions that make the environment safe for consumers who might not have the ability or motivation to process information fully. One example of a padding strategy is to set website defaults to the highest level of consumer privacy. Similarly, the default setting on mobile devices might be to turn location services off. A second tactic is the use of reminders or decision points that nudge users about the potential risks of sharing information online.
3) Incentivize businesses suitably: In our opinion, it is important to focus privacy efforts not just on consumers, but also on the providers of online web content. For example, firms could offer products or services that explicitly make consumer privacy a central part of their value proposition. If consumers start recognizing the importance of privacy and have the ability to measure the privacy quality of a given website or company, there would be increased demand for higher levels of security, which in turn might push privacy as a central value proposition for all online businesses.
How might businesses be nudged into improving their privacy practices? Restaurant hygiene quality grade cards are a good example of how this could work. When Los Angeles County introduced these grade cards, to be displayed in restaurant windows, health inspection scores increased, consumers became more sensitive to restaurant hygiene, and the number of hospitalizations due to food-borne illnesses dropped by 13 per cent. These grade cards, which provided consumers with a simple way of evaluating and comparing a complex variable like hygiene, were successful in convincing restaurants to incorporate hygiene as an important value proposition in their business. Likewise, we believe that the use of ‘privacy badges’ or a rating system that evaluates the privacy policies of a given business would nudge businesses into creating a safer environment for their customers.
While the data revolution has opened up countless opportunities for businesses globally, it has also created new challenges for consumers. As indicated herein, at the heart of the problem are the actions of consumers themselves, who are sharing too much of their private information online, exposing themselves to a number of harmful situations.
It would be easy to passively accept that in our interconnected age, ‘privacy is dead’. However, it doesn’t have to be that way. Given that consumers possess cognitive handicaps, the responsibility for ensuring public welfare lies equally with businesses and government agencies. In our opinion, if consumer groups work with governments and businesses to build up the three pillars described herein—equipping consumers, padding the environment and making privacy a core value of business—the data revolution can deliver on all of its promises without compromising the safety of the consumers who are enabling it.
Dilip Soman is the Corus Chair in Communication Strategy, a professor of Marketing and an affiliate of Rotman’s Behavioural Economics in Action (BEA@R) research hub. Melanie Kim (MBA ‘16) is a second-year MBA student at the Rotman School who worked as a summer intern at the BEA@R research hub.
[This article has been reprinted, with permission, from Rotman Management, the magazine of the University of Toronto's Rotman School of Management]