Businesses are constantly looking for ways to streamline their operations and drive efficiencies, and recruitment is a prime example of an area where AI tools are providing solutions.
However, AI is, by its very nature, reliant on data – and vast quantities of it; the more, the better, in fact. The use of AI tools in the workplace often requires the processing of data relating to employees and candidates, including personal data, some of it sensitive. Unsurprisingly, therefore, the use of such AI tools in this space raises multiple data privacy issues.
Read more: Four lessons on ethical AI use in recruitment
On November 6, 2024, the UK Information Commissioner’s Office (ICO) published a report following consensual audit engagements conducted between August 2023 and May 2024 with developers and providers of AI-powered sourcing, screening, and selection tools used in recruitment. The report covers the outcomes of the audits and contains a series of recommendations for recruiters (and for developers and providers of recruitment AI tools) that aim to better protect the data privacy rights of candidates.
The recommendations contained in the report relate to key areas and principles of data protection compliance, such as transparency, data minimisation and purpose limitation. These are often areas HR professionals find particularly challenging when implementing AI tools in the workplace, particularly in the context of recruitment, given the inherent tensions that exist between data privacy and AI. This article sets out some of these tension points and the ICO’s corresponding recommendations.
Transparency
Data privacy laws typically include an obligation to inform relevant individuals about data processing activities. Most employers therefore maintain privacy notices or policies for data processing activities related to employees. Given that the use of AI tools in the workplace is fairly new for most businesses, it is likely that any existing privacy notice will not sufficiently inform employees of the data processing related to the AI tool, meaning an additional notice or update to the existing notice will be required.
Read more: Data compliance: Whose job is it anyway?
The ICO reinforces this requirement in its latest report, confirming that recruiters are responsible for informing candidates of how AI tools will process their personal information. They should do this by providing detailed privacy information, as they do for their employees, or by ensuring that this information is given by the AI provider.
Data minimisation
The GDPR and other modern data protection laws also include the principle of data minimisation: “Personal data must be adequate, relevant and limited to what is necessary in relation to the purposes for which those data are processed”.
In other words, companies should retain data for as short a time as possible, use only the amount and type of data necessary for the model, and use it only for its specified purpose – a requirement that sits in obvious tension with the vast quantities of data on which AI tools rely.
The ICO confirms in the report that AI providers should assess the minimum personal information required to develop, train, test and operate AI tools.
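By way of illustration only, the short Python sketch below shows one way a recruiter's integration code might apply the data minimisation principle in practice: an explicit allowlist of the candidate fields a screening tool genuinely needs, so the full record is never forwarded. The field names and the idea of a downstream screening tool are hypothetical, not drawn from the ICO report or any specific product.

```python
# Illustrative sketch only: field names and the downstream screening
# tool are hypothetical, not taken from any real product's API.

# The full candidate record a recruiter might hold.
candidate_record = {
    "name": "A. Candidate",
    "email": "a.candidate@example.com",
    "date_of_birth": "1990-01-01",       # not needed for screening
    "health_information": "...",          # special category data
    "cv_text": "Experienced engineer with eight years in...",
    "years_experience": 8,
}

# Data minimisation in practice: an explicit allowlist of the fields
# the screening model actually requires for its specified purpose.
SCREENING_FIELDS = {"cv_text", "years_experience"}

def minimise(record: dict) -> dict:
    """Return only the fields needed for the stated screening purpose."""
    return {k: v for k, v in record.items() if k in SCREENING_FIELDS}

# Only the minimised record would be passed to the (hypothetical) AI tool.
screening_input = minimise(candidate_record)
```

The design point is that the allowlist, rather than the full record, defines what leaves the recruiter's systems, which mirrors the ICO's recommendation to assess the minimum personal information required.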
Assessments
Data privacy laws often include an obligation to perform a formal assessment of certain data processing activities. In the EU and UK, for example, the law requires that a data protection impact assessment (DPIA) be conducted when the processing of personal data is likely to result in a high risk to individuals.
Read more: HR analytics and ethical trade-offs – the unconsidered narrative
While the use of an AI tool that processes personal data will not always, by default, result in a high risk to candidates or employees, many uses will meet the threshold, as certain features of deploying AI in the workplace increase the general risk profile. Even where the obligation to perform an assessment such as a DPIA is not strictly triggered, it is best practice to perform and document some form of assessment whenever a new processing activity is implemented, and the ICO has reinforced this approach.
The report confirms that AI providers and recruiters should complete a DPIA early in the development of an AI tool and prior to the relevant data processing activities. Even where a provider acts only as a processor, the report suggests it consider completing a DPIA to fully assess and mitigate risks.
Businesses wishing to implement AI tools in the workplace should assess the relevant AI tool to determine which laws apply, whether its use is lawful, and whether any changes to current data privacy practices are required before the tool is rolled out.
By Sarah Pearce, data privacy and cyber security partner, Hunton Andrews Kurth