Until now, AI adoption in the workplace has outpaced regulation.
As Renate Samson of the Ada Lovelace Institute said: “Even the experts don't understand what they're actually building. We’ve had 20 years of change in six months.”
However, lawmakers are beginning to catch up.
In the EU, a new AI Act is being drawn up to establish obligations for providers and users depending on the level of risk an AI system poses.
The final form of the law will be published at the end of the year.
In the US, a New York law is already in effect. The landmark legislation regulates the use of AI in hiring: it requires employers to notify candidates about the use of such tools, allows candidates to request what data is used, and requires an annual audit to evaluate the tool for bias.
Rood told HR magazine that more legislation is on the horizon.
He said: “What you’ll see in the future is new regulation from every level of government. Regulators are excited to create guidance around this tech, considering all the ethical questions it throws up.
“This creates a really challenging environment for HR teams, who need to stay on top of this stuff.”
He added that regulation of AI will vary from country to country, meaning international companies face a greater task in remaining compliant. However, there are some actions HR can take to make compliance easier.
He said: “Firstly, they should be keeping an inventory of which HR tools are being used, what for, and how they affect candidates and employees. They should also know exactly when a human is kept in the loop.
“Remember, AI is being incorporated into existing HR technology, so you may be using new AI systems even if you haven’t switched providers. AI is truly being integrated in HR’s basic toolset.”
Secondly, Rood said HR should find out whether AI tools are producing fair results.
“We need to think about how we maintain our standards of fairness. HR have spent the last decade putting processes in place to prevent discrimination, for example. Now we need to channel those efforts into ensuring our AI does not discriminate.
“AI does not create new problems. But it means we have to revisit problems we’ve been working on through a new lens.”
Rood said it can be difficult to ensure AI is fair due to its ‘black box’ nature, meaning results are given without explaining how they are reached.
In order to understand AI systems better, Rood recommends speaking to the technology vendors.
He said: “HR leaders need to ask both potential and current vendors for a list of the factors that go into the decision-making process, as well as the weights given to those factors, which are equally important.
“Good vendors will tell you what you want to know. If they say ‘it’s highly technical’ or ‘it’s proprietary’, that’s not acceptable.”
Another way for HR to get ahead of regulation is to ensure colleagues and candidates know when AI is being used, Rood said.
“We talk a lot about the concept of transparency when it comes to AI compliance. Lots of these new regulations will require that human users should know when their career is being judged or affected by an algorithm.”
AI laws will continue to evolve, and it is unclear what the future of AI law will be in the UK, where the government has so far held off on regulating as part of its “pro-innovation” stance.
Rood added: “This is a highly confusing time for HR using AI but I do think that it’s a world where HR can use common sense principles and get ahead.
“As new laws come into effect, we can start with simple principles of transparency and fairness. If you're doing those things you're in a good spot.”
Three things HR can do to get ahead of regulation:
1. Make an inventory of all AI being used in HR processes
2. Ensure you understand how your AI makes decisions and whether this is fair
3. Be transparent with candidates and staff about when and how AI is being used