
Uber Eats driver wins payout for racially biased AI checks

The claimant was removed from the app after a series of failed facial recognition checks - ©Gorodenkoff/Adobe Stock

A black Uber Eats driver has received a financial settlement after facial recognition checks required to access the app were ruled racially discriminatory at an employment tribunal.

Pa Edrissa Manjang, the claimant, worked as an Uber Eats driver in Oxford from November 2019. 

At the start of his employment the Uber Eats app, which drivers depend on for work, did not require facial recognition verification.

But in March 2020 the Microsoft-powered app introduced verification checks that required drivers to provide a real-time selfie to log on for work.

In 2020, Microsoft admitted that its facial recognition software worked less well for people who belong to ethnic minority groups.

The app repeatedly asked Manjang to resubmit images of his face because its facial recognition software did not recognise him.

This prevented him from getting work and, after a failed recognition check in 2021, he was removed from the platform through an automated process that removes drivers who have failed the verification check multiple times.

The app reported continued mismatches in photos of Manjang's face, though his appearance had not changed.

The chair of the Equality and Human Rights Commission (EHRC) said that Manjang was not given a clear and effective route to challenge the technology. 

The EHRC and the App Drivers and Couriers Union provided funding for the case due to their concerns over the role of artificial intelligence (AI) and automated processes in preventing workers’ access to jobs.


Read more: ICO bans leisure centres from using biometric data to monitor employees


Manjang's lawyers argued not only that he was unfairly dismissed due to racially biased facial recognition technology, but also that the repeated verification checks he was subjected to while working for Uber Eats amounted to racial harassment.

The case is due to continue to a full hearing.

Hannah Wright, partner at Bates Wells and the lawyer who represented Manjang, commented that this case would set a precedent for AI discrimination in the workplace.

She said: “This has been a very important case. It is among the first to consider AI and automated decision making in the context of work and the potential for unfairness and discrimination.

“Sophisticated AI systems are increasingly becoming a part of how people are managed at work.

“This carries substantial risks, particularly where decision making processes are opaque and particularly in terms of equality. The current protections are inadequate and the process for challenging decisions involving AI is fraught with difficulty.”

Kate Palmer, employment services director at Peninsula, told HR magazine that employers should consider these risks before introducing AI to their organisation.

She commented: “With more and more businesses utilising AI, this case serves as a timely reminder for employers that there are many important things to consider thoroughly before introducing AI into business operations.

“When using AI, employers are running a risk of errors. AI is only ever as good as the programming and with the technology behind AI still being relatively new, it’s likely that some issues will arise, as already seen in some cases.”

Niloufar Zarin, head of AI at Thrive Learning, said that such programming flaws mean AI risks causing discrimination in the workplace.


Read more: Cover story: AI risks and hazards


Speaking to HR magazine, she said: “There is no doubt that AI can be extremely positive and help businesses operate more efficiently, but there are a number of risks to consider.

“AI risks discrimination at work due to biased data and algorithms, lack of diversity in development or lack of transparency.”

She suggested that employers could prevent this by carrying out risk assessments prior to introducing AI. 

She continued: “Employers can prevent this by ensuring the presence of humans in the loop, using diverse and representative data, assessing the vendors they work with, fostering diverse development teams, prioritising transparency in AI systems, following ethical guidelines and implementing continuous monitoring and improvement processes.”

Palmer added that employers should set out an AI policy that details what to do following reports of an error.

“Companies should have an AI policy in place clearly setting out and explaining the repercussions for employees who fail to follow the policy.

“It’s vital that if an employer does want to introduce AI into their organisation, they don’t rush it. Carry out a full risk assessment ahead of time to identify what the potential risks are with the proposed use(s) of AI and if there are any controls that can be put in place to mitigate and reduce the risk.”