
HR and AI: How can HR use AI effectively and ethically?

The UK government published a whitepaper on Wednesday (29 March) setting out its approach to regulating artificial intelligence (AI).

AI has been a popular topic of discussion due to new generative AI tools such as ChatGPT.

There are, however, concerns about bias in AI, particularly when it is used in decision-making such as recruitment, because of the flawed data it learns from. This means incorrect use of AI could lead to serious diversity, equity and inclusion (DEI) failings.


More about AI:

AI used as a training tool to help improve employee pitches

Businesses warned employees lack skills to handle AI

UK workers believe AI offers better career support than people


The whitepaper did not establish new regulation but called on regulators to apply existing rules. 

It outlined five key principles that companies should follow in their use of AI: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

So how can HR teams effectively use AI while adhering to these principles?

 

Perry Timms, chief energy officer, PTHR

“Generative AI has captured our attention since the arrival of the widely accessible ChatGPT from OpenAI, Microsoft’s enhanced Bing search and now Bard from Google. I’ve experimented a little – with mixed results – and I know the talent acquisition world in particular has been using ChatGPT in a range of ways (entering CV data to show a match against role profiles, as one example).

“We’ve heard about AI processing power, but now we have our hands on it, and it’s potentially amazing and potentially volatile.

“As people professionals we have to remember this is not just processing. People’s lives and livelihoods are in our hands when we make decisions to hire, develop, review, assess and deploy people’s skills, energy, will and creativity.

“AI cannot do everything. Nor can we. In a time of emerging discovery, let’s not get carried away, but be experimental, with safety and fairness at the top of our minds.

“We cannot opt out totally, so we have to be assured and confident, and that only comes from wisely being ‘in the arena’.”

 

Gosia Adamczyk, director of HR, Verve Group

“AI is here to stay and HR teams should learn to embrace its ability to increase efficiency and save time. 

“There are uses for AI at every stage of the employee lifecycle. For example, we can use generative AI to create simple and specific job descriptions that home in on the competencies needed for a role. We could then go on to generate job interview questions based around those competencies. We can even use it in internal communications, for example to adjust the tone of voice of some information and make it more fun for employees to read.

“However, we must remember that AI is there to support us, not replace our jobs. We can’t blindly communicate anything produced by AI; we have to check everything and make sure it is fully in line with our values.

“No one yet has the answer on how to use AI perfectly, so it is imperative not to trust it completely and to remain aware of its flaws.”

 

Manny Athwal, founder, School of Coding

“One positive aspect of AI is its ability to analyse large amounts of data quickly and objectively, allowing HR professionals to make data-driven decisions. It can also reduce bias when hiring by removing subjective factors like gender or race from resumes, and can assist with identifying qualified candidates who may have been overlooked.

“There are also potential negatives associated with AI. Algorithms used in AI systems can perpetuate bias if they are trained on biased data or programmed without careful consideration of DEI concerns. Additionally, AI may not be able to fully capture the nuances of human behaviour and may miss critical factors in decision-making processes.

“To ensure AI used in HR does not have negative effects on DEI, HR professionals should be aware of potential biases in the data used to train the algorithms and carefully monitor the results to identify any that emerge. It’s also essential to involve diverse perspectives in the development and implementation of AI systems and to regularly evaluate their impact.”
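To make that kind of monitoring concrete, the sketch below is a hypothetical Python example (not a method described by Athwal or the whitepaper): it redacts sensitive fields from candidate records before screening and then checks selection rates across groups using a simple four-fifths-rule test. The field names, data and 0.8 threshold are illustrative assumptions only.

```python
# Hypothetical illustration: field names, data and the 0.8 threshold are assumptions.
from collections import defaultdict

def anonymise(candidate: dict) -> dict:
    """Drop fields that could introduce subjective bias before screening."""
    redacted = {"name", "gender", "ethnicity", "date_of_birth"}
    return {k: v for k, v in candidate.items() if k not in redacted}

def selection_rates(candidates: list, selected_ids: set, group_field: str) -> dict:
    """Share of candidates selected within each demographic group."""
    totals, picks = defaultdict(int), defaultdict(int)
    for c in candidates:
        group = c.get(group_field, "unknown")
        totals[group] += 1
        if c["id"] in selected_ids:
            picks[group] += 1
    return {g: picks[g] / totals[g] for g in totals}

def adverse_impact(rates: dict, threshold: float = 0.8) -> bool:
    """Four-fifths rule: flag if any group's rate falls below 80% of the highest."""
    best = max(rates.values())
    return any(rate < threshold * best for rate in rates.values())

# Toy data to show the flow end to end.
candidates = [
    {"id": "1", "name": "A", "gender": "f", "skills": ["python"]},
    {"id": "2", "name": "B", "gender": "m", "skills": ["sql"]},
    {"id": "3", "name": "C", "gender": "f", "skills": ["python", "sql"]},
]
screened = [anonymise(c) for c in candidates]  # what a screening model would see
rates = selection_rates(candidates, selected_ids={"2", "3"}, group_field="gender")
print(rates, "adverse impact:", adverse_impact(rates))
```

The specific rule matters less than the habit the quote points to: strip from the screening process what it does not need, and keep measuring outcomes by group.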

 

Kelly Thomson, employment partner, RPC

"AI technology can be very useful, including as a potential way to disrupt the impacts of unconscious biases in human decision-making. But there is clear evidence that bias from an underlying data set can become entrenched in an AI algorithm. The danger is creation of an apparently neutral decision-maker which, in reality, has bias built into its DNA.

“The new AI framework doesn't change the legal obligations on AI users not to discriminate against individuals. Nor does it speak to the use of AI to proactively improve equity. Instead, it simply encourages sector regulators to ensure that regulated entities are complying with laws on discrimination by taking steps to avoid bias arising within AI.

“Human understanding and continuous oversight of AI technology and processes will be critical to doing this. But, at the same time, this emphasises the importance of proactively minimising the potential for the biases of those humans to find their way into the technology being created, reviewed and updated.”

 

Listen to the HR Most Influential Podcast's episode (S2, E4) dedicated to the human impact of AI here.