An interdisciplinary study on how AI affects CSR - Scholar Q&A with Keonyoung Park

December 5, 2023 • Jonathan McVerry

Keonyoung Park, assistant professor at Hong Kong Baptist University

If you’ve read the Page Center blog over the past couple of months (or any news outlet, really), it’ll be no surprise to hear that artificial intelligence technology has affected every inch of public communication. Corporate social responsibility (CSR) – an expectation that companies engage in responsible activities that contribute to community, the environment, and society – is no different. First-time Page Center scholars Keonyoung Park, Hong Kong Baptist University, and Hoyoung Yoon, Ewha Womans University, are leading a project to uncover how AI technology influences and redefines CSR. Park and Yoon will blend their expertise in public relations and computer science to answer a significant question: “What is the extended boundary of corporate social responsibility in the era of AI-assisted communication?” Park discusses the project below. The study is part of the Page Center’s 2023 call for research proposals on digital analytics.

Please discuss your research topic and share an overview of your Page Center project.

Our research project highlights the significance of algorithm transparency in AI-assisted communications, addressing key ethical issues such as information asymmetry and communication credibility. Drawing on signaling theory, our work seeks to redefine transparency within the context of AI algorithm-based communication, a domain that often eludes complete understanding even among public relations and communication researchers due to its technical complexity. Our empirical studies are designed to explore ways to cultivate public trust in communication technologies through enhanced transparency of AI algorithms, and to understand how this trust can extend to the organizations that utilize these AI systems. We anticipate that our research will provide both a theoretical framework to comprehend AI algorithms and AI-assisted communication from a public relations standpoint, and practical guidance for PR practitioners to effectively demonstrate their organization's commitment to maintaining transparency in an AI-mediated communication environment.

What is your definition of digital analytics? How does it relate to your current research?

Digital analytics is a multifaceted field that involves the collection, analysis, and interpretation of data from digital sources such as websites, social media platforms, and mobile applications. It's primarily used to understand and optimize web usage, gauge the effectiveness of online marketing campaigns, and derive insights into user behavior and preferences. Digital analytics plays a crucial role in our research. First, it helps us assess the impact of AI-assisted communication technologies: by analyzing user interactions with these technologies, we can understand how transparent AI algorithms influence user behavior, trust, and engagement. Second, it provides quantitative evidence for our studies: by collecting and analyzing data on how users respond to different levels of algorithm transparency, we can empirically test our hypotheses about building trust in AI-assisted communication.
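To make that second point concrete, here is a minimal sketch, in Python, of the kind of aggregation a digital-analytics pipeline might perform on chatbot interaction logs. All column names and values are hypothetical placeholders, not the study's actual data or instrument.

```python
# A minimal sketch of comparing user outcomes across transparency conditions,
# assuming a chatbot interaction log. All columns and values are hypothetical.
import pandas as pd

logs = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "condition": ["opaque", "opaque", "partial", "partial",
                  "transparent", "transparent"],   # transparency level shown
    "trust_score": [3.1, 2.8, 3.9, 4.2, 4.6, 4.4],  # e.g., 5-point Likert item
    "session_seconds": [42, 55, 61, 58, 70, 66],    # time spent with the chatbot
})

# Compare attitudinal and behavioral outcomes across transparency conditions
summary = logs.groupby("condition")[["trust_score", "session_seconds"]].mean()
print(summary)
```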

Talk about AI transparency and how it applies to algorithms.

The concept of transparency is fundamental in public relations, as it significantly enhances public trust. This principle extends to AI transparency, especially concerning the algorithms underlying AI-assisted communications. In interactions with AI systems like chatbots, users often wonder whether their communications are handled fairly and securely, and whether the responses they receive are credible. Addressing these concerns requires a degree of openness about how AI algorithms function. However, this poses certain challenges. First, these algorithms often represent a key organizational asset, making full disclosure to the public a complex issue. Second, and more importantly, even if organizations were to fully open their algorithms to the public, the technical complexity of these algorithms often renders them incomprehensible to non-experts. This gap in AI literacy means that simply revealing the algorithms doesn't necessarily contribute to transparency in AI-mediated communication or reduce information asymmetry.

In our research project, we explore effective strategies for maintaining genuine transparency in the disclosure of AI algorithms. Our goal is to build trust not only between users and the AI system but also between users and the organizations implementing these AI technologies. We aim to investigate methods that bridge the gap between the intricate nature of AI algorithms and communication about their functionality and limitations that the general public can actually understand. By doing so, we believe we can enhance the public's trust in both the AI systems and the organizations behind them, fostering a more transparent and reliable AI-mediated communication environment.

What is the plan for your project? Do you have a timeline?

Our project is structured around two key studies that focus on algorithm transparency in AI-assisted communications. The first phase of our project, which we have already completed, involved conducting qualitative in-depth interviews with industry experts. Specifically, we engaged with AI communication practitioners at Kakao in South Korea. The goal of these interviews was to gain insights into how professionals in the field strive to maintain algorithm transparency and their perspectives on its importance. This includes understanding their approaches to reducing reality gaps in AI communications.

The second phase of our project, which is currently in progress, involves a quantitative online experiment. The purpose of this experiment is to gather empirical evidence on the effects of transparency signaling. We aim to assess how the transparency of AI algorithms influences public trust and the relationships between organizations and their publics. This part of the study is designed to examine the extent to which transparency can narrow perception gaps between AI systems and users. Through this experiment, we hope to validate our hypothesis that clear and understandable communication about AI algorithms can significantly enhance trust, not only in the technology itself but also in the organizations that deploy these AI systems.

How will the expected results fit into current literature?

Our research project makes a substantial contribution to the current literature on public relations and AI-assisted communication, particularly regarding the concept of AI-algorithm transparency signaling. It deepens our understanding of how transparency in AI algorithms can build trust in AI systems and, by extension, in the parent companies behind those systems.

The key findings of our research underscore the strong positive association between transparency signaling in AI algorithms and the level of trust attributed to AI systems. Our regression study offers clear evidence that transparency in AI algorithms significantly enhances trust, especially in contexts where AI-assisted communication, such as chatbot interaction, is prevalent. This aligns with previous studies suggesting that providing precise and current information about automated systems augments trust and user satisfaction.
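For readers curious what that regression logic looks like in practice, here is a minimal sketch assuming a between-subjects design with a binary transparency-signaling condition. The data and effect size are simulated purely for illustration; they are not the study's results.

```python
# A minimal sketch of an OLS regression of trust on a binary
# transparency-signaling condition. Data are simulated; the effect is invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
transparency = rng.integers(0, 2, n)  # 0 = no signaling, 1 = transparency signaling
trust = 3.0 + 0.8 * transparency + rng.normal(0, 1, n)  # hypothetical positive effect

df = pd.DataFrame({"transparency": transparency, "trust": trust})

# Does transparency signaling predict trust in the AI system?
model = smf.ols("trust ~ transparency", data=df).fit()
print(model.summary())
```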

Can you share some potential practical uses for this project?

The potential results of our research on AI-algorithm transparency signaling in AI-assisted communications have several practical applications across different sectors:

AI development and design: For developers and designers of AI systems, our findings emphasize the importance of integrating transparency features into AI technologies. Knowing that transparency boosts user trust, developers can design AI systems that not only perform tasks efficiently but also give users understandable insight into how decisions are made and how information is processed. This could involve designing user interfaces that offer explanations in plain language or incorporating features that let users query the AI about its decision-making process (see the sketch after this list).

Public relations and communication strategies: Our research can guide public relations professionals in crafting more effective communication strategies, especially when using AI-assisted tools like chatbots or AI-generated content. By understanding the importance of transparency signaling, PR professionals can better manage how they communicate the use of AI in their operations, ensuring that the public and stakeholders are aware of the AI's role and its limitations, which can enhance credibility and trust.

Policy making and governance: In the realm of policy and governance, the findings can inform the creation of guidelines or standards for AI transparency in public and private sectors. This can be particularly relevant for regulatory bodies or organizations developing ethical frameworks for AI usage, ensuring that AI systems are not just efficient and ethical, but also transparent in their operations.

Consumer education and AI literacy: The results can be used to develop educational materials and programs aimed at enhancing the general public's understanding of AI. By promoting AI literacy, consumers can make more informed decisions about using AI-driven services and products and have realistic expectations about their capabilities and limitations.

Corporate governance and ethics: For corporate leaders and decision-makers, the research highlights the ethical implications of using AI in business operations. It underscores the need for ethical AI usage policies that include transparency as a key component, fostering a culture of trust and accountability within organizations and with their stakeholders.
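To illustrate the "query the AI about its decision-making" feature mentioned under AI development and design above, here is a hypothetical sketch of a chatbot reply that pairs its answer with a plain-language disclosure. The ChatbotReply type and stubbed answer are invented for illustration; this is not any product's actual API.

```python
# A hypothetical sketch of a chatbot reply that carries a plain-language
# disclosure alongside the answer. Illustrative only.
from dataclasses import dataclass

@dataclass
class ChatbotReply:
    answer: str
    disclosure: str  # plain-language note on how the answer was produced

def answer_with_disclosure(question: str) -> ChatbotReply:
    # In a real system the answer would come from the underlying model;
    # here it is stubbed for illustration.
    answer = f"Here is what I found about {question}."
    disclosure = (
        "This reply was generated by an AI language model. It may be "
        "incomplete or out of date, and no human reviewed it before sending."
    )
    return ChatbotReply(answer=answer, disclosure=disclosure)

reply = answer_with_disclosure("our return policy")
print(reply.answer)
print("How this answer was produced:", reply.disclosure)
```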