Ethically integrating AI into crisis communication – Scholar Q&A with Alice Cheng

July 25, 2024 • Jonathan McVerry

Yang (Alice) Cheng

A collaborative research effort will explore the ethical implications and practical applications of artificial intelligence in crisis communication. The research team – Yang (Alice) Cheng, North Carolina State University; Yan Jin, University of Georgia; Jaekuk Lee, North Carolina State University; Wenqing Zhao and Nicole Cortes, University of Georgia; and practitioner partner Philippe Borremans of the International Association of Risk & Crisis Communication – represents expertise in AI ethics and crisis communication from several academic and professional perspectives. The scholars say this breadth will ensure a comprehensive evaluation of how AI can and should be used when communicating during a crisis. The project is part of the Page Center’s 2024 research call featuring scholar/practitioner collaborations. The researchers are first-time Page Center scholars except Cheng, who has been funded twice, and Jin, who is a five-time scholar. In this Q&A, Cheng talks about her team's plan, why research in this area is important and possibilities for future projects.

Can you tell us about the topic and how your project will address the ethical challenges of AI in crisis communication?

The research project aims to investigate how AI can be ethically integrated into crisis communication during natural disasters and environmental emergencies such as wildfires and flooding. Specifically, we are interested in understanding how crisis and risk messages generated with or by AI are perceived and trusted by various stakeholders, including emergency management practitioners and the general public in the U.S. and European countries.

How do you plan on building that understanding?

The project will employ a multi-phase cross-national survey approach. The first phase involves conducting surveys among crisis communication practitioners to gather insights into current practices and perceptions regarding AI integration. The second phase includes surveys among the general public living in wildfire and flooding high-risk areas, focusing on their trust and perception of AI-generated emergency messages. One set of questions will explore the impact of disclosing whether a message is AI-generated on trust and perception.

How does your project fit into the research literature?

This research contributes to the existing literature by addressing the gap in understanding how AI can be ethically employed in crisis communication. While previous studies have explored AI's role in various domains, there is limited research specifically examining its application in public emergency management and the ethical considerations surrounding it. Our project seeks to fill this gap by providing empirical data and developing practical guidelines for communication professionals.

Can you share an example of how AI could or would be used during a crisis?

Many organizations have already used AI during crises. We are specifically interested in emergencies such as wildfires and earthquakes, or, here in North Carolina, hurricanes. For example, people could send a message to an AI system and get help finding the nearest shelter. So, we want to focus on the situation and think about emergency management. From the ethical standpoint, there are two sides. One is the practitioner side: what do they think about ethical AI? The other is the public side: how does the public perceive AI messages? Are the messages transparent? Are they fair and unbiased?

What practical uses do you hope to discover through the results you find?

The findings of this research will inform the development of practical guidelines for communication professionals, helping them navigate ethical considerations when using AI in crisis communication. These guidelines aim to enhance transparency, privacy, and accountability in AI-generated emergency messages, thereby improving public trust and response during environmental crises.

It’s early, but do you have thoughts on future steps based on findings from this study?

Moving forward, we plan to analyze the survey data to identify key insights and patterns in stakeholder perceptions and attitudes towards AI-generated crisis messages. Additionally, we aim to disseminate our findings through academic publications and industry conferences to foster discussions and further research in this emerging field. Collaborations with communication firms and emergency response agencies will continue to ensure the relevance and applicability of our research outcomes.