AI in the workplace is a diversity issue HR needs to take control of

HR needs to have a seat at the table when it comes to employers’ use of artificial intelligence (AI) and automation, according to David Lorimer, senior associate at Fieldfisher.

He warned that known biases in the way AI systems and algorithms are built could have legal implications when employers apply them in the workplace.

AI is often used in automated recruitment and performance monitoring, but without the right assessments there is a real risk of indirect discrimination.

Yet the impartiality a machine offers could be just the remedy for discrimination that employers are searching for.


In a recent Court of Appeal case, civil liberties campaigner Ed Bridges won an appeal against South Wales Police over its use of facial recognition technology to identify potential crime suspects.

Ruling in favour of Bridges, the court found the use of the technology had breached the public’s right to privacy and the police force had failed to properly investigate race and gender bias in the software.

Though campaigners continue to rail against its use in the public sphere, the lasting impact of such a ruling may not be quite the ‘damning indictment’ for AI at work that some commentators believe it to be.

With some amendments, South Wales Police is planning to continue using the software because of its success rate – none of the 61 arrests facilitated by the technology has been unlawful.

“To be totally honest, I think that quite a lot of the instant reaction [to cases like Bridges’], such as that it’s a death knell for AI in employment, is way over the top,” said Lorimer.

“There’s a definite role for these kinds of platforms which can make things much more efficient, but it’s a question of making sure that the considerations are wrestled with and properly considered.”

Lorimer argued that risk assessments often take a back seat when it comes to technology because they can seem to stunt progress.

He added: “The language of risk assessments and impact assessments has become a byword for halting progress, or ‘red tape’, but actually that’s really the key here. Before anything happens, and before we install and rely on technologies like AI, it is essential that you are both thinking through the issues and documenting that.”

Public bodies are required by law to carry out equality impact assessments and, although private sector organisations are under no such obligation, Lorimer recommended they take the same approach.

“We’re going to see more of this, and I think not limited to the public space. It’s probably a good thing, because [now] is a good chance to think things through, and to have a platform and an agenda for considering what the issues are.”

Lorimer therefore said that a cross-department effort is needed.

“If a business is big enough to have an in-house privacy function then they need to be involved. Legal – again, if the function exists. HR absolutely needs to have a seat at the table, as do broader compliance functions,” he commented.


Having company-wide buy-in is critical because the application of AI and automation in the workplace touches so many parts of the business.