Navigating the ethics of AI in advertising – Scholar Q&A with Rachel Esther Lim

July 11, 2024 • Jonathan McVerry


With the rapid rise of AI technology and its sweeping effect on nearly every field, many areas of public communication have been slow to provide adequate ethical direction. Oklahoma State University researchers Rachel Esther Lim, Sujin Kim and Skye Cooley are working with marketing firms to learn how the industry is navigating the ethical challenges and leveraging the potential of AI. In their project, the first-time Page Center scholars will also identify available ethics training tools for advertising practitioners. The project is part of the Page Center’s 2024 research call on ethics training in public relations, journalism, advertising and strategic communication. In this Q&A, Lim discusses the origin of the research idea, the need for training and the team’s plan for the project.

How did this project come together?

The story of our project started when ChatGPT came out in the marketplace. It wasn’t just surprising; it prompted us to re-evaluate how we teach our students and how to best equip them with the skills they’ll need in the field as new technologies emerge. We realized that an entire industry of people was also figuring out how generative AI was going to change the world. We started wondering: Is this going to take away our jobs, and what are we going to do? So, when I started studying ChatGPT and generative AI, it made me ask: What is the best way to train my students who are going out into the field? How can we prepare them? It’s so new that we thought it might be an industry problem too. Maybe it’s something practitioners are thinking about. We should investigate it. On our team, Sujin has a very strong background in advertising. Her knowledge will help us understand how generative AI and other technologies are being used by practitioners. Skye is really great at project management, and he will help us manage the project and build strong networks with local advertising agencies.

Is there an example of an ethical issue in advertising that is affected or created by new technology?

Advertising is all about avoiding deceptive messages and not misleading consumers. Deception could easily happen with AI, especially with deepfakes. There is a lot of potential for deceptiveness in that technology. There is also personalized advertising, which I think is a huge issue. That is AI algorithms coming up with messages tailored to individual consumers. A company could argue that it’s collecting data based on consent, but it’s not always clear cut from the consumer perspective. Plus, where is the boundary to all of this, and is it manipulative? We’d like to know how we can make people more aware of when AI technologies are being used. Should we draw boundaries? How transparent should it be?

So, you’re studying something that’s changing and evolving in real time?

Yes, and based on other research I’ve done, sometimes people feel like they have the ability to cope with these advertisements. They don’t think they’d be tricked or fall for advertisements, but that’s not always true.
So, one thing we are doing since talking about it at the Page Center Research Roundtable is focusing on training tools and finding out if there are any out there. We want to know what kind of content we can recommend to industry practitioners.

Can you talk about the marketing firms you’ll be working with on this project? What is your plan for working with them?

We hope to meet with our industry partners this fall. We want to ask them what ethical tools are out there and how they are navigating generative AI. What are the issues? What are the ethical dilemmas? And what content would they want? Our goal is to gather the pieces that emerge and see if there is training content we can apply specifically to advertising practice. The current content seems to focus more on organizational processes and how to manage or create ethical cultures. I haven’t really seen anything that talks about the consequences. We will probably explore the content that’s out there to see who is using it, and what motivates others to take a course if their agency doesn’t offer one.

How important is training, not just for a person, but for an organization?

AI anxiety is a huge thing. Generative AI is here, but people are very hesitant. Our research team at OSU conducted a study with students. We asked them to brainstorm an idea. One group was asked to use ChatGPT and the other group wasn’t. Interestingly, we observed that several students were hesitant to use the AI. They asked if they were required to use it, and those students ended up using it only minimally. We found that students were concerned, or felt it was unethical, to use generative AI. We don’t know how closely this mirrors practitioners, but the findings suggest that clear, smart AI guidelines and training are needed so students and practitioners can use the technology safely.

How has Page Center support helped kickstart this project?

This is a really exciting opportunity for us as scholars. We need funding to delve into these critical issues. Engaging in discussion with the academic community is very meaningful for us. So, we’re really honored and excited to kick off our project with the Page Center.