Research by fellow WPP agency Wunderman Thompson found that 77% of businesses in the UK use some form of artificial intelligence, with the human resources (HR) sector being one of the leaders in adoption.
While AI has huge potential for HR, especially in terms of productivity, the way in which it should be implemented and monitored requires careful navigation.
Organisations and their HR teams are in uncharted territory. The technology and its opportunities are developing faster than the rules and legislation, so it’s not easy for HR leaders to keep abreast of them.
HR and AI
This is especially true for organisations that operate across Europe, where the UK government and the EU are developing rapidly diverging regulations. The U.S. and many other countries are also setting their own guidelines.
The clients we speak to want to comply and go beyond the legal standards when using AI in their people-related projects.
They also understand that there must be a commitment to disclosure, explaining to employees that AI is being used. The tricky part is knowing what that means in practical terms.
The type and complexity of people policies required around AI are unique to each organisation, depending on the size of the business, its existing people policies, what it is trying to achieve with AI and what type of tools it is using.
The support organisations require also varies. For some, it’s about building trust among staff and being ethical, which means transparency: ensuring that staff and potential recruits understand why and how AI is being used, and have a form of redress to challenge a decision.
For example, it should be clearly stated when job applications are sifted through by AI so that applicants can make an informed decision on how to engage. Involving staff and letting them have a say in how AI is being used in HR processes will go a long way towards building trust in its use.
I often sense that staff want to feel reassured about AI. They want to understand it at a simple level, but also to know that the nitty-gritty detail is available and being monitored by experts.
I’d compare it to the regulation of food products. People generally feel reassured that what they are eating is safe and adheres to quality standards.
Culturally, it is this type of collective confidence that HR leaders should aim to instil within their workforce.
Other areas, like regulatory compliance in relation to HR, are more complicated. This is still a minefield as the UK government has, so far, only released “principles” which will be turned into actionable proposals by individual regulators in the coming months.
But progress is accelerating. The government recently issued a white paper on AI and has asked interested parties to provide feedback so that it can create legislation.
It’s still early days and there is a degree of uncertainty about what the rules for specific industries will be, but it’s clear that safety and security, transparency, fairness, accountability and contestability will be the guiding principles of the legislation.
Smart business and HR leaders are keeping a close eye on this so that they can prepare and benefit from it through their own HR policies.
They understand that no one has AI fully sussed; it is very nuanced, especially in relation to people policies.
The key is to establish what level of risk is acceptable. Deconstruct the risk and understand it, then build a culture in which people feel empowered to accept, and happily interact with, AI.
Let’s remember that AI is made by people, and it is people who make the decisions about what AI does. As long as ethics, fairness and transparency remain at the forefront of those decisions, leaders and their staff have an exciting future ahead.
Harry Stovin-Bradford is head of BCW Navigate, a global artificial intelligence advisory service