Features

Cover story: AI risks and hazards

AI looks set to be a game-changer for HR, but adoption is fraught with potential risks. Jo Gallacher outlines the pitfalls and assesses how organisations should proceed.

HR is no stranger to trends. Some stick around; some are a flash in the pan. But AI somehow feels different. It represents a seismic shift in how our lives operate, disrupting the way workplaces function. At this point, the technology’s potential appears limitless.

AI can streamline recruitment processes and assist in HR tasks such as payroll management and performance evaluations; analyse complex data sets to identify patterns, trends and correlations; improve employee experience; and transform some jobs into fully automated roles.


With such power, of course, comes great responsibility, and anxiety. In March, more than 1,800 signatories, including Elon Musk and Apple co-founder Steve Wozniak, called for a six-month pause on the development of systems “more powerful” than GPT-4 (the latest version of the AI system that powers ChatGPT).

Engineers from Amazon, DeepMind, Google, Meta and Microsoft also lent their support.

If the tech giants at the forefront of AI are struggling to keep up, what chance does HR have? There’s a lot to unpack, so we’ve compiled some of the many questions HR may have, to help the profession navigate this new world of work.

Is AI in HR’s remit?

From recruitment to job replacement, the direct impact of AI on HR is obvious, but should this mean it automatically becomes HR’s responsibility?

“HR certainly has a role in supporting user adoption, but it is just another digital tool to achieve a business outcome and therefore to some extent it falls into everyone’s remit,” says Julia Mixter, director of transformation at Raven Housing Trust.

Mixter says the responsibility needs to be shared across the business rather than shouldered by HR alone.

She adds: “A digital team may help to find the right tool or licensing; a data protection or information governance person will need to assess the risks associated with GDPR and similar areas; business analysts may need to consider where automation can have the biggest impact, and leadership will need to sponsor adoption consistently.

“When we come together and recognise that it is the interaction of people, process, technology and data, then we can work much more effectively to introduce new technology.”


HR’s to-do list is already much longer than the working day, so overseeing the implementation of AI may be a stretch too far, particularly in organisations with large employee bases.

Katie Obi, chief people officer at Beamery, recommends beginning with a limited scope.

“A good way to carefully evaluate AI is to start small, with prototypes and pilots, and critically assess the results that are coming back,” she says.

“Ensure that the group assessing the outputs are from a diverse population of stakeholders and have the right skillset to evaluate bias and validity of outputs.”

What is a business priority for one organisation may not be for another, so HR needs to consider what role AI could play in meeting its own organisation’s priorities.

Obi adds: “It is also helpful not to take a purely top-down view around opportunities, as often those closest to the work, and more used to utilising innovative technology outside of work, are able to best identify these.

“Companies could use competitions, hackathons, labs, and innovative suggestions to help generate ideas and create prototypes. Some of the most interesting examples of potential use cases for AI include those inside HR processes.”

How can HR prevent biases and inaccuracies?

With all the pros of AI and the promise of smarter ways of working also come warnings of bias and inaccuracy.

Prime Minister Rishi Sunak will host a global AI safety summit in November, and the Trades Union Congress (TUC) has created a taskforce to draft legal protections to ensure AI is regulated fairly at work.

The TUC’s taskforce is calling for a legal duty on employers to consult trade unions on the use of high-risk and intrusive forms of AI in the workplace. It is also seeking a legal right for all workers to have a human review of decisions made by AI so they can challenge decisions that are unfair and discriminatory.

Patrick Brodie, head of employment, engagement and equality at law firm RPC, warns that those who use the technology need a thorough understanding of its limitations and risks.

He says: “The first difficulty is that the open-source data from which [some] models generate answers has no definitive source of truth. It has not been analysed and cleaned; the data will carry errors, inaccuracies or biases and these failures will track across to the answers generated.

“We all have to remind ourselves that the data carries the biases of the original writers and, therefore, risks being amplified and published more widely as truth.”

For example, an AI model looking for the best candidate for a leadership role may be biased towards middle-aged, white males, because the historic data it learned from reflects an era in which corporate leadership was dominated by white men.
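To make the mechanism concrete, here is a minimal, hypothetical sketch. The data, names and scoring rule are all invented for illustration; this is not any real screening product, nor the specific models Brodie describes. It simply shows how a rule fitted to skewed historic hires reproduces that skew when ranking two equally qualified candidates.

```python
from collections import Counter

# Hypothetical toy history: 20 past leadership hires, heavily skewed
# by historic under-representation (invented figures, for illustration).
past_hires = (
    [{"gender": "male", "age_band": "45-55"}] * 18
    + [{"gender": "female", "age_band": "35-45"}] * 2
)

def frequency_score(candidate, history):
    """Deliberately naive 'model': score a candidate by how often their
    attributes appear among past hires. A real learner fitted to the
    same history tends to absorb the same pattern; this rule just makes
    the mechanism explicit."""
    score = 0.0
    for key, value in candidate.items():
        counts = Counter(h[key] for h in history)
        score += counts[value] / len(history)
    return score / len(candidate)

# Two equally qualified candidates, differing only in attributes that
# should be irrelevant to the decision.
candidate_a = {"gender": "male", "age_band": "45-55"}
candidate_b = {"gender": "female", "age_band": "35-45"}

print(frequency_score(candidate_a, past_hires))  # 0.9 (favoured)
print(frequency_score(candidate_b, past_hires))  # 0.1 (penalised)
```

The gap between the two scores comes entirely from the history, not from anything about the candidates themselves, which is why reviewing outputs rather than trusting them matters.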


Brodie adds: “So, garbage in, garbage out. If the information generated by the model is relied on, without review, it risks those biases, errors and inaccuracies becoming embedded in learning and, potentially, future behaviours and decisions. There is the risk of promoting and reinforcing misleading or harmful views or outcomes.”

Interestingly, it’s not just the tech that causes bias issues. Brodie adds: “We, as humans, prefer answers, even if inaccurate, created by AI or automatic models above those created by people.

“There’s then the reinforcement double-whammy: generative language models are, generally, programmed to be plausible and assertive by using confident language. So, not only are we predisposed to believe the automated response, but we are persuaded by the model’s directness, which suggests accuracy and truth, even if that is not the case.”

And it seems that professionals are already getting sucked in. In June, a US judge in Manhattan fined two lawyers and their law firm after fake citations generated by ChatGPT were submitted in a court hearing.

Brodie therefore advises HR to have an effective mitigation process, be attuned to risks, and understand why a particular AI might give rise to certain risks.

John Morgan, principal associate at law firm Eversheds Sutherland, says the best HR teams recognise AI is not a substitute for staff, but a way to augment their efforts.

He says: “We’ve worked with clients… to put in place carefully considered ethical guidelines on the use of generative AI to ensure inappropriate usage is discouraged and improper outputs are caught and screened.

“We’ve seen a number of clients develop specific training modules and supervision processes of their HR teams and management, ensuring that the value of human input into processes is maintained.

"We’ve also spotted real moves towards transparency with employees and candidates about the use of AI, and real investment in the protection of sensitive personal data of employees where this is being inputted into AI.”

This is part one of an article that appears in the September/October 2023 print issue.
