Research in Progress: How design influences reactions to fake news stories in social media

September 4, 2018

By Bart Wojdynski, University of Georgia

Social media and contemporary online misinformation are inextricably linked, because social media serves as the main source of traffic for many misinformation sites.

As a primary funnel of audiences to news stories of all kinds, Facebook has taken action several times in the past year in an attempt to decrease exposure to misinformation.

Facebook uses machine learning to identify potentially false stories. It compares them with catalogued fact-checking work by organizations such as PolitiFact and Snopes.com, and also refers new potentially false stories to the organizations for fact checking.
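As a rough illustration of this kind of triage, here is a minimal Python sketch assuming a hypothetical catalog of already-debunked claims and a simple word-overlap similarity measure; Facebook's actual models, data, and thresholds are not public, so every detail below is an assumption:

```python
# Hypothetical triage sketch: compare a new story's headline against claims
# already rated false by fact-checkers, label close matches, and refer the
# rest for review. Facebook's real pipeline is not public.

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring surrounding punctuation."""
    return {w.strip(".,!?\"'").lower() for w in text.split()}

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two token sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Made-up catalog of claims previously fact-checked as false.
CATALOG = [
    "Scientists confirm chocolate cures the common cold",
    "Senator secretly funded by foreign government, leak shows",
]

def triage(headline: str, threshold: float = 0.5) -> str:
    """Match a story against the catalog or refer it for fact-checking."""
    best = max(jaccard(tokens(headline), tokens(claim)) for claim in CATALOG)
    if best >= threshold:
        return "label as disputed (matches catalogued fact-check)"
    return "refer to third-party fact-checkers"

print(triage("Chocolate cures the common cold, scientists confirm"))
```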

Facebook’s first attempt, rolled out soon after the 2016 election, was to label content that had been flagged by fact-checkers as misleading. The labels appeared under the story and stated “disputed by [names of third-party fact-checkers].”

Facebook’s user log data anecdotally showed that the labels were not curbing traffic to misinformation. Additionally, empirical research showed that applying “disputed” labels to some stories increased the perceived accuracy of fake stories that were not labeled. Facebook discontinued use of these labels in December 2017.

Facebook then modified its news feed to curb traffic to disputed stories by altering visual cues in posts containing links to these questionable articles. Facebook typically presents posts containing news stories in a standard, noticeable format that includes a main image, headline, and summary blurb. As of April 2018, however, posts with links to disputed stories receive no automatic image and have a smaller headline.

An even bigger modification is that these stories are now posted alongside related articles. In making this change, Facebook’s user interface designers cited research finding that corrections appearing alongside misinformation posted on social media reduced consumers’ adoption of the views promoted in the misinformation.

News consumers select news stories from social media feeds in a process that’s heavily emotional: posts that arouse strong emotions, whether positive or negative, are the ones most likely to be clicked or shared. Our study will take a more nuanced look at how design and emotion shape the way consumers view false stories on Facebook.

We are building several versions of a constructed Facebook feed that includes links to a few factual news stories from established news organizations and a few articles containing political and science misinformation. In the main or “control” version of the feed, the “fake” news stories will be presented using Facebook’s current layout for disputed stories, with two related news story links below each story which allow readers to access articles on the same topic from credible third-party fact-checkers.

In three other versions of the Facebook feed, we will examine alternative approaches to flagging online misinformation. One version will utilize the previous “disputed by third-party fact-checkers” label, and two others will use different visual cues in the post as a warning of the post’s disputed status.
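For illustration, here is a minimal sketch of how a four-condition design like this might be implemented; the condition names and the seeded-random assignment scheme are our assumptions, not details from the study:

```python
import random

# The four feed versions described above. Condition names and the seeded
# assignment scheme are our assumptions, for illustration only.
CONDITIONS = [
    "related_articles",  # control: current layout with fact-check links below
    "disputed_label",    # the retired "disputed by third-party fact-checkers" tag
    "visual_cue_a",      # first alternative visual warning cue
    "visual_cue_b",      # second alternative visual warning cue
]

def assign_condition(participant_id: int) -> str:
    """Randomize a participant into one feed version, reproducibly."""
    rng = random.Random(participant_id)  # seed by ID so assignment is stable
    return rng.choice(CONDITIONS)

for pid in range(1, 5):
    print(pid, assign_condition(pid))
```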

We’re interested in several different outcomes of how the variations in design will affect consumers’ experiences. First, we’re interested in how consumers pay attention to the Facebook posts themselves, including how much time they spend looking at the story and adjacent elements. Second, we want to know how consumers react to the story headlines and images, as well as the warnings and linked related stories. Finally, we’re interested in how differences in consumers’ attention and emotion influence whether they select the stories, how much time they spend reading them, and whether they believe the claims made in the stories.
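A hypothetical per-participant record of these outcome measures, with all field names invented for illustration, might look like this:

```python
from dataclasses import dataclass

# Hypothetical per-participant, per-story record of the outcome measures
# described above; all field names are invented for illustration.
@dataclass
class StoryOutcome:
    participant_id: int
    story_id: str
    condition: str          # which feed version the participant saw
    dwell_post_s: float     # seconds spent looking at the post
    dwell_warning_s: float  # seconds on the warning / related-story elements
    peak_emotion: str       # e.g. "anger", "joy", "surprise"
    clicked: bool           # whether the participant selected the story
    reading_time_s: float   # seconds spent reading the article, if opened
    believed_claims: bool   # post-exposure belief in the story's claims

print(StoryOutcome(1, "fake_science_01", "disputed_label",
                   4.2, 0.8, "surprise", True, 35.0, False))
```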

To examine these factors, we’ll be using two different technologies in our lab to help measure participants’ attention and responses. The first is a monitor-based eye-tracker, which allows us to unobtrusively record where participants are looking, and for how long. The second is facial-expression coding software, which uses a webcam and automatic facial detection to track minor movements of the eyes, eyebrows, and muscles around the mouth to gauge the extent to which participants feel discrete emotions such as anger, joy, or surprise.
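As a rough sketch of how gaze data of this kind is often summarized, the snippet below bins gaze samples into rectangular areas of interest (AOIs) such as the headline, image, and warning label, and accumulates dwell time per AOI; the coordinates, boundaries, and samples are made up for illustration and do not come from any particular eye-tracker's API:

```python
# Gaze samples (x, y, timestamp) are binned into rectangular areas of
# interest (AOIs), and time between consecutive samples is credited to
# the AOI of the earlier sample. All values here are made up.

AOIS = {
    "headline": (0, 0, 600, 80),    # (left, top, right, bottom) in pixels
    "image": (0, 80, 600, 400),
    "warning_label": (0, 400, 600, 450),
}

def aoi_for(x: float, y: float) -> str | None:
    """Return the AOI containing the point, if any."""
    for name, (left, top, right, bottom) in AOIS.items():
        if left <= x < right and top <= y < bottom:
            return name
    return None

def dwell_times(samples: list[tuple[float, float, float]]) -> dict[str, float]:
    """Accumulate seconds of gaze per AOI from (x, y, time) samples."""
    totals = {name: 0.0 for name in AOIS}
    for (x, y, t0), (_, _, t1) in zip(samples, samples[1:]):
        name = aoi_for(x, y)
        if name is not None:
            totals[name] += t1 - t0
    return totals

# Three samples 0.1 s apart; the first two fall on the headline.
print(dwell_times([(100, 40, 0.0), (120, 50, 0.1), (300, 200, 0.2)]))
```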

After our study participants finish viewing the stories in the feed, we will also ask them questions about their recall of the stories, their perceptions of the stories’ accuracy, and their reading experience.

We hope that our research will help other platforms and publishers decide how to label potentially inaccurate content in a way that consumers will notice, but that minimizes perceived intrusiveness and emotional reactance.

For further information on this study, email Bart Wojdynski at bartw@uga.edu. This project is supported by a Page/Johnson Legacy Scholar Grant from the Arthur W. Page Center.