
What HR needs to know: GDPR and AI


Complete transparency about how your AI technology works, whatever the process, is essential. Without it, employers can run into GDPR challenges and lose the trust and buy-in of colleagues. An impending class action against controversial taxi app developer Uber could have implications for how such technologies are used in the workplace.

The App Drivers & Couriers Union is currently bringing a legal challenge before the courts in the Netherlands over more than 1,000 claims by Uber drivers whose right to work for the company was terminated because of fraudulent activity flagged by the app. The claimants believe they were wrongfully terminated, and Uber has refused to give them any explanation of what they did wrong or of how the app came to its decision to automatically terminate their accounts.

The core of the Uber drivers’ complaint is that there was no human intervention in the termination of their contracts – which is what may be unlawful.

By contrast, HR professionals can legally use algorithms to help them work out who they should be recruiting, as long as the final decision ultimately lies in human hands. Under GDPR and the Data Protection Act (DPA), Excello Law employment specialist Hina Belitz explains: “It’s unlawful to take a ‘significant decision’ solely on an automated basis.”

Using algorithms in recruitment may also fall within an exception under GDPR. According to Belitz: “It’s permissible to use solely automated decision-making where it’s necessary for entering into, or the performance of, a contract.

“EU guidance suggests that recruitment may be one of these situations. After all, the processing is with a view to entering into a contract of employment. This is called a qualifying significant decision.”

So long as there are safeguards in place to protect candidates’ rights and freedoms, automated sorting in the job application process is allowable, Belitz adds. People must be told that their data is being handled in this way, which also means they have the opportunity “to request a reconsideration or the making of a new decision which has a human in the mix”.

An employer is obligated by law to carry out a data protection impact assessment before implementing such tools. Belitz says: “It would have to consider whether it was really necessary, as opposed to convenient or cheaper.

"For instance, if you get a thousand applications for every job opening - but not perhaps if you get 20 or 30. And it ought also to consider discrimination aspects.”

It may also be possible to positively discriminate using an AI tool. Belitz adds: “Any skewed outcome where an algorithm is used is a possibility [for positive discrimination].”

When it comes to social deprivation, though, as it is not a protected characteristic at present, Belitz says: “There is unlikely to be any legal recourse for an individual. The main protection to bear in mind is to ensure there isn’t sole reliance on automated decision-making.”

The full version of this article is published in the 2020 Technology Supplement. Subscribe today to have all our latest articles delivered right to your desk.