Helping companies stay transparent about their AI use when hiring – Scholar Q&A

November 16, 2023 • Jonathan McVerry

Ying Xiong and Joon Kyoung Kim, University of Rhode Island

Artificial intelligence is finding a role in nearly every major industry. Human resources is no different. Two scholars are leading a project that will help companies become more transparent about using AI in their hiring practices. The project will not only gauge how people feel about AI, but will also help build trust between companies and stakeholders about how hiring is done. First-time Page Center scholar Ying Xiong and two-time scholar Joon Kyoung Kim, faculty members at the University of Rhode Island, will develop corporate communication strategies based on public perception so companies can be open about AI practices. The study is a part of the Page Center’s 2023 call for research proposals on digital analytics.

Your project comes from a human resources area, which is somewhat new to the Page Center. Can you share how it came together and where it fits with other Page Center work?

Kim: One of the Page Center’s values is transparency, and many companies are already using AI in the hiring process. But not every company is transparent about its AI use. New York City now requires companies to disclose AI use when they hire people, and other cities and states may do the same. We believe it's better to be more transparent because the public should be well informed about how AI is used.

Xiong: Joon and I have analyzed previous research regarding AI use in HR. In the past, companies used AI to screen potential candidates. Nowadays, however, we have found more and more companies using AI to determine who will be interviewed. And during the interviews, they’ll record people’s facial expressions and body language. They’ll record what candidates say and analyze their personalities to see if they are a good fit for the job. We found that not all companies have transparent, ethical practices regarding this information. We really want to explore whether companies will be more transparent and disclose that information to potential applicants.

In terms of ethical use, is it wrong for AI to judge people by some of those things?

Kim: I think it really depends on individual perspectives. Regardless of whether it's good or bad, organizations should disclose that information. These studies provide important information about how humans use technologies to make decisions when hiring people.

Xiong: This is not only about the use of AI but also about the acceptance of AI. AI applications are all around us nowadays. This project is about whether we know we are being screened by human beings or by AI applications. That's all about transparency and acceptance. That's the core issue we want to explore in this study.

How do you envision a company being transparent about AI-powered hiring processes?

Kim: That is where our project comes in. There is no such practice at this time, so we are testing different possibilities for where companies can put this information. It could be in a space like an HR page, a recruitment site, or a job ad. Or it could be part of a general statement from the company. We are exploring different places and platforms.

Can you talk about your plans and your timeline for your study?

Xiong: We are planning to complete the IRB review this fall, and we're planning to finalize our literature review by the end of this year. Then, early in January, we will launch the study to collect public opinions. We will do the data analysis and aim to submit our paper to conferences in April 2024. Then, at AEJMC in August, we’ll present at the Page Center Research Roundtable meeting.

What kind of questions do you ask to gauge and understand people’s perceptions of AI and hiring?

Kim: Our project will include a survey and an online experiment. First, we will conduct an online survey to better understand how the public views AI use in hiring. Our second study will be an online experiment that aims to test companies’ message strategies for communicating with the public about AI use in hiring. We will create and test different versions of messages explaining AI use in the hiring process. As we mentioned before, companies are not really disclosing such information, so we have to spend more time finding good examples of messages that explain AI usage in hiring. And we don’t know what kind of information the public expects from job advertisements regarding AI use. So, it’s not really about what kind of questions we are going to ask, but how we are going to depict AI use in the hiring process. That will be the first step. Then we will ask participants about their attitudes toward the company and whether they would be interested in applying, based on the varying degrees of AI use.

Xiong: We will analyze what companies should say and how they should say it. Even though we know that companies should be transparent, we are eager to identify the communication strategies. We can use them to improve people's acceptance of AI and the reputation of the company.

Can you talk about the timeliness of this research? How relevant is it right now?

Xiong: Right now in particular, this research is very important so we can understand how to become more transparent and ethical in the communication process. We have more and more AI applications in our everyday lives. I think job applicants have the right to know how their data is used. That's crucial for our society.

Kim: I'm excited about this project because this work is new, but it relates to work I’ve done over the past several years and builds upon previous findings. We are very confident we will produce meaningful findings. We want to build a foundation we can give to companies, so we can help them conduct their business and collect and store corporate data responsibly.