
Four lessons on ethical AI use in recruitment

AI in recruitment has been hailed as a solution to human prejudice and dismissed as an unreliable tool built on biased data. So how can HR use it ethically? Millicent Machell reports

The 2023 AI boom, which came to the public’s attention following the release of the generative AI tool ChatGPT, has met with a mixed response from the HR profession.

On the one hand, it has sparked fears about job security, data security and bias in AI-driven decision-making.

On the other, many have been excited by the opportunity to automate repetitive tasks, identify patterns in employee data and make better-informed hiring decisions.

So how can HR teams looking to harness the powerful benefits of AI in recruitment do so safely and ethically?

In our latest HR Lunchtime Debate in partnership with Harver, a panel of experts discussed the options.

Here are four key takeaways from the debate.

 

1. Identify where AI adds value

AI can make the hiring process smoother and more successful for both employers and candidates, according to the experts.

“AI is monumental in all the different ways it can be used,” said Jane Wu, associate partner for talent and transformation at technology corporation IBM.

“It allows you to source candidates, in your own talent pool or on LinkedIn for example, through a requirement for specific skills or education. It can identify those potential candidates quickly and clearly, assist with scheduling interviews, or even screen the candidates.”

It also has the potential to improve the candidate experience, according to Anne-Marie Balfe, financial services talent leader for EMEIA at professional services company EY.

She said: “It’s great for sourcing and screening but could also help candidates ask ‘Is this job right for me? Is this in my skill set?’

“It can also really help with the ease of responding to candidates and supporting them as they go through the interview process, which can be really difficult when you’re recruiting for tens of thousands of roles.”

AI also allows employers to track the points where candidates are not being supported enough and tend to drop out, according to Ben Porr, global vice president of people science at talent management software company Harver.

“By tracking engagement throughout the hiring process, you can see trends in what’s working well and what isn’t, and therefore when people are dropping out and why.

“Particularly with current unemployment rates and candidate engagement concerns, it is helpful to know if people are dropping out due to valid concerns, or if they are regrettable losses due to candidate experience.”
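Porr did not name a specific tool, but the stage-by-stage tracking he describes is simple to sketch. Below is a minimal illustration in Python, assuming a hypothetical record of the furthest stage each candidate reached; the stage names and figures are invented for the example.

```python
from collections import Counter

# Hypothetical hiring funnel, in order; real pipelines will differ.
STAGES = ["applied", "screened", "assessed", "interviewed", "offered"]

# Hypothetical log: the furthest stage each candidate reached.
candidates = {
    "c001": "offered", "c002": "screened", "c003": "applied",
    "c004": "interviewed", "c005": "assessed", "c006": "screened",
}

def funnel_report(furthest_stage):
    """Count how many candidates reached each stage, then print the
    conversion rate between consecutive stages."""
    reached = Counter()
    for stage in furthest_stage.values():
        # Reaching stage i implies passing every earlier stage too.
        for s in STAGES[: STAGES.index(stage) + 1]:
            reached[s] += 1
    for prev, nxt in zip(STAGES, STAGES[1:]):
        rate = reached[nxt] / reached[prev] if reached[prev] else 0.0
        print(f"{prev} -> {nxt}: {reached[nxt]}/{reached[prev]} ({rate:.0%})")

funnel_report(candidates)
```

A step with an unusually low conversion rate is exactly the kind of drop-out trend Porr suggests investigating, for example by pairing it with candidate feedback gathered at that stage.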

 

2. Maintain human monitoring

Ethical AI use requires frequent human review to spot any systemic flaws, according to Balfe.

She said: “We need to constantly review what we like and what needs to change. There have been a lot of examples where we retire technology when it isn’t working.

“For example, a lot of firms used to use technology that judged video interviews with AI. It later came out that people with darker skin tones were not perceived as well as those with lighter skin tones in these videos, and that obviously created bias.

“So it’s crucial to be able to say ‘this is no longer appropriate’ and remain adaptable and alert.”

One way to monitor the effect of AI is to use surveys, Balfe said.

“It’s particularly useful to survey candidates and new colleagues in the first 90 days after hire, so you can see if the tools have improved their experience or had an adverse effect.”

When using AI, it is also important to maintain human connections, according to Wu.

She said: “AI creates efficiencies, but it doesn’t remove human intervention. You still need relationships and contact to ensure people stay with your teams.

“There seems to be a concern that we’ll remove the human from HR. But it’s so vital that when you’re an employee or candidate, you can ask questions to another person and receive empathy and understanding from them.”

 

3. Beware of bias magnification

Porr said caution is required with AI, as bias could impact huge numbers of people.

He said: “It’s smart for people to be cautious, and AI is inherently risky if you’re not aware of the inputs.

“Previously, if a recruiter was biased, they’d impact a handful of people. If you have a bug in your AI model, it could be affecting hundreds of thousands of potential candidates.

“The key is to be able to explain how your AI works, and regularly audit your processes to ensure you have no issues.”
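The panel did not specify an audit method, but one widely used check in recruitment is the ‘four-fifths rule’, which compares selection rates across demographic groups. The sketch below is a minimal, hypothetical illustration in Python; the group names and figures are invented, and a real audit would be far more thorough.

```python
def adverse_impact_check(applied, selected, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    (the four-fifths rule) times the highest group's rate."""
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    for group, rate in sorted(rates.items()):
        ratio = rate / best
        flag = "REVIEW" if ratio < threshold else "ok"
        print(f"{group}: selection rate {rate:.1%}, "
              f"impact ratio {ratio:.2f} [{flag}]")

# Hypothetical screening outcomes from an AI model.
applied = {"group_a": 1000, "group_b": 800}
selected = {"group_a": 200, "group_b": 100}
adverse_impact_check(applied, selected)
```

Run regularly over real screening data, a check like this is one concrete way to act on the ‘regularly audit your processes’ advice above.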

Balfe said AI bias is usually an echo of human bias that shows up in the historical data AI bases its decisions on.

“A lot of initial trials showed inherent bias from the data, or from the language and criteria in job adverts,” she said.

“This is then amplified through machine learning, which looks at who has historically been hired for the position, for example prioritising men or women for stereotypically gendered roles.”

Wu said that even if the AI is not biased in its decision-making, the rapid advancement of the technology could leave some employees behind, creating different ethical concerns.

She said: “Early career employees and older employees may be disadvantaged. Early career employees may not have experience to put on their profiles yet, and it’s difficult for AI to sift based on potential.

“On the other hand, more tenured employees have done a lot before these profiles came about. They may struggle to keep up with the new technology and not have logged their many skills, excluding them from roles that they would actually excel in.” 

 

4. Research international regulations

For international organisations it can be challenging to comply with AI guidance and regulations, as they are still in development and vary widely between countries, according to Porr.

He said: “It’s interesting that the US guidance emphasises fairness and lack of bias, whereas EU guidance has focused more on the data security side.

“However, both are critical so as a global organisation we have to make sure we’re covering all those bases.”

Wu emphasised the importance of the auditing process to ensure compliance.

“We’re doing a massive audit at IBM because we want to comply with all regulations. Working in close concert with our legal team also provides knowledge and reassurance – we like to go above legal and be ethical.”

For smaller organisations that don’t have a legal team, Balfe recommended taking time to research regulations and implementing AI slowly.

She said: “Many smaller companies are looking at the first steps they can take with AI, so HR need to be aware of regulations and the market they’re operating in. It’s good to take baby steps, for example setting policies on the use of ChatGPT.”


This piece appeared in the July/August 2023 print issue.