New research will ID cues on Fortune 500 websites that elicit private information

June 28, 2016

By Jinyoung Kim, The Pennsylvania State University

From saving a credit card number on Amazon to entering a Social Security number on PayPal, vast amounts of personal information are revealed on corporate websites. As a result, many Internet users are increasingly wary of potential privacy breaches. In fact, recent surveys have documented growing privacy concerns among the public. For example, one survey showed that 92 percent of American adults are concerned about the security of their personal information online. A poll by the Pew Research Center captured similar fears, reporting that 75 percent of Internet users doubt that their sensitive information is securely managed by corporate and governmental data holders.

Despite these high levels of privacy concern, however, we continue to give up private information on numerous websites and mobile sites on a daily basis. For instance, we readily disclose our geolocation data on Google Maps and consent to Facebook’s privacy terms and conditions without carefully reading the details. Indeed, given the cognitively demanding nature of many websites, we reveal far more information than we admit to in surveys. In other words, we behave in ways that contradict our worried attitudes about online privacy. This epitomizes a phenomenon called the “privacy paradox.”1

What drives this attitude-behavior inconsistency?

Drawing on previous research in social cognition, our study contends that humans are “cognitive misers” who process information expediently at the cost of thoroughness. Rather than effortfully weighing the potential merits and risks of disclosing information, we argue, Internet users often make quick judgments about a website’s security based on privacy-related interface cues that trigger “cognitive heuristics” (i.e., mental shortcuts or rules of thumb).

More specifically, as the Modality-Agency-Interactivity-Navigability (MAIN) model2 theorizes, when we are asked to submit personal information to a site, interface cues on that site trigger several privacy-related cognitive heuristics. These heuristics help us quickly assess the site’s security and decide whether to reveal or withhold our information. For instance, the presence of an authoritative source nearby (e.g., Apple’s or Google’s logo) may reassure us about the site’s safety and encourage us to provide our personal data, an instinctive application of the “authority heuristic” (i.e., “a popular name, brand, or organization can guarantee the security of a website”).

Based on this theoretical premise, our study proposes that privacy-related cues on corporate websites play a key role in the public’s decisions about information disclosure. In particular, because such cues have the potential to deceive the public into believing that organizations manage their information securely, and to solicit more information than necessary, it is imperative to investigate how privacy-related interface cues are currently used on corporate websites.

To accomplish these goals, our study will be carried out in two phases with generous support from the Arthur W. Page Center. It will investigate (a) which interface cues are commonly used on Fortune 500 companies’ websites to elicit private information from the public and (b) to what degree the public perceives those cues as ethical.

Given that corporations have paid little attention to dealing with information privacy, the findings of this study will serve as an empirical basis for developing substantive guidelines on the ethics of digital strategic communication. The findings will be posted on the Arthur W. Page Center’s blog in fall 2016.


References

1Norberg, P. A., Horne, D. R., & Horne, D. A. (2007). The privacy paradox: Personal information disclosure intentions versus behaviors. The Journal of Consumer Affairs, 41(1), 100-127.

2Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 72-100). Cambridge, MA: The MIT Press.