
Can you get fired by a computer?

Automated systems aren't infallible, and without human oversight mistakes are easily made

This story starts when an employee’s key card won’t let him through the gate of his office building. Nonetheless, the receptionist knows him and waves him through. He arrives at his desk to find that he can’t log in to his computer. Then his phone rings: it’s the recruiter who got him the job. She’s panicking because she has received an email saying her client has been terminated.

As the issue works its way through management, the response is always the same – ‘you’re not fired, but your employment has been terminated on the system and we can’t undo it’. Security, who have been automatically alerted that an ex-employee has tried to enter the building, arrive at his desk and escort him from the premises.

That’s the true story of Ibrahim Diallo, an American programmer who is the first person we know of to be fired by a computer. Diallo's departing manager had failed to indicate that his contract had been renewed, setting in train a series of automatic processes that cancelled Diallo's key card, disabled his accounts, and then instructed security to remove him from the building.

This is an example of automation gone wrong. But what does the law make of all this?

New developments emerge and take years or even decades to be reflected in legislation or judicial decisions. This means new technology is being forced into an inflexible framework of old laws. Employment law has a particular habit of running behind the times.

So could a computer fire an employee in the UK? Unlike some jurisdictions, there’s no need for a 'wet ink' signature on a dismissal notice and there are no strict rules about who is authorised to dismiss. There are also no statutory requirements as to the mechanics of giving notice. The common law simply states that notice must be ‘actually given’ and ‘effectively communicated’. This means an automatically generated letter could be good notice. You might be out of a job because the computer says 'no'.

Luckily for the job security of HR professionals and employment lawyers, we are still a long way off automating a performance improvement process or a disciplinary, so the practical likelihood of an employer handing these decisions to its IT department remains low.

The GDPR also has specific rules about automated decision-making that might create a snag here. It states that employees have the right to be informed about, and object to, automated decision-making. In practice, that means employees could ask for a human review of their dismissal decision. If every adverse decision has to be reviewed by a person, that is going to kill off any potential efficiency savings.

AI also raises interesting questions in the world of discrimination. Let’s say a talent management team programs an AI algorithm to look at certain key performance indicators and finds that it consistently recommends promoting or hiring white males – do you have an inherently racist or misogynistic algorithm, and what is that going to do to your workforce? Without proper oversight you may end up sleepwalking into a discrimination lawsuit.

On the flip side, you can argue that automation levels the playing field when assessing performance by removing managers’ unconscious bias from the process, and potentially protects your company from claims of discrimination.

Some companies are certainly recognising that potential and are attempting to grapple with the growing use of algorithms to make ‘objective’ employment decisions. It really does depend on how the system is designed and the information that is fed into it.

Finally, can attributes such as potential, fresh strategic thinking and adaptability to change get lost when there is an over-reliance on automation? We think so – but perhaps we’re just being typical arrogant humans.

Raoul Parekh is a partner and Dónall Breen is an associate at GQ Littler