
AI could present legal challenges in talent recruitment

Increased use of AI relies on a parallel increase in stored data, which is at odds with the regulatory environment

According to a recent report from PwC, around 30% of UK jobs could be automated in the next 15 years – a projection that chimes with the fact that employers and recruiters are already making increasing use of automation and artificial intelligence (AI) in their recruitment processes.

This makes sense from a cost and efficiency point of view, and holds the additional attraction of reducing the bias that may creep in when applications are assessed by people. But such technology is not the panacea it may at first appear to be: there are warning signs HR teams should heed.

This is illustrated by a recent employment appeal tribunal (EAT) decision concerning the use of automated recruitment processes and a disabled applicant. Under the Equality Act 2010, prospective employers have a duty not to discriminate against disabled candidates and must make reasonable adjustments to 'level the playing field'. An applicant to the Government Legal Service (GLS) who had Asperger syndrome, Ms Brookes, won a claim of disability discrimination after she was unsuccessful in her job application, part of which consisted of multiple-choice questions.

Brookes argued that her disability made it difficult for her to provide answers in a multiple-choice format, and that she would have been better able to cope if the GLS had adjusted the test by allowing her to give short narrative answers. She claimed that the GLS had indirectly discriminated against her on the basis of her disability by applying a practice that placed her at a particular disadvantage and could not be justified. She also claimed that it had failed to make reasonable adjustments to the test.

The EAT ruled in Brookes' favour. This raises the question of where the line should be drawn between keeping recruitment tests effective and retaining a degree of flexibility for those who require it. The EAT did not offer definitive guidance on this point, but felt that the GLS could have assessed the core competencies required while allowing for narrative answers.

The learning point here is that the flexibility to tailor assessments to the specific needs of individual candidates must be built into automated processes.

There are other risks around AI too. For example, technology is developing to filter CVs before interview. Here the legal risks are likely to be low, provided the CVs being scanned are retained only for that purpose and for a limited time. At the more sophisticated end of AI-assisted recruitment, however, there may be a corresponding increase in legal risk. Technology is emerging, for instance, that will automatically scan the internet for newly published material on candidates and alert employers to a change that may make a previously rejected candidate of interest again. Retaining individuals' data for such a general purpose may fall foul of the restrictions on lawful processing in the Data Protection Act 1998.

Increased use of AI analytics inevitably relies on a parallel increase in the use of stored data and 'big data', which sits at odds with an increasingly restrictive regulatory environment. This tension will become even more acute when the General Data Protection Regulation (GDPR) comes into effect on 25 May 2018.

So while AI and automation may appear to illuminate many questions, HR teams and recruiters need to proceed with a degree of caution, applying their human intuition rather than simply rushing towards the light.

Sarah Maddock is an associate solicitor at Bevan Brittan