Do digital co-workers bully and discriminate?
TUC research has concluded it’s more than likely. Rather than reducing discrimination and unconscious bias in the workplace, the use of supposedly ‘objective’, non-human artificial intelligence (AI) has been found to exacerbate problems and open employers up to complaints and legal challenges.
The study, Technology Managing People - the legal implications, highlights examples from employers such as Amazon and Uber. The online retail giant had opted to use a specialist AI recruitment tool to select the best new candidates. The tool’s reliance on historical data meant it picked up on and magnified an existing bias, leading it to overlook applications from women. Uber, meanwhile, ended up being taken to court by drivers who claimed the decision to sack them had come from an unreliable AI algorithm.
Covid-19 has accelerated interest in the use of AI applications for assessing and ‘managing’ people, supporting remote and digital forms of working and reducing the need for human contact. Employers, the report claims, are using AI to run performance management systems, check on activity, and even monitor facial expressions and tone of voice to assess people’s suitability for roles.
In other words, AI is being used to “make life-changing decisions about people at work – like who gets hired and fired”, says the TUC.
So there’s a need for workplace regulation that takes into account the potential for intrusion and bias by technology that can’t be assumed to be fair or reasonable. Such technology depends on rigid, inhuman algorithms that struggle to reflect the nuances and grey areas of human working life. The result is tech that acts like a bully: imposing unreasonable targets, showing no empathy, with no understanding of changing context.
HR's relationship with AI
Regulation, argues the TUC, needs to include: making sure no decisions are made without human oversight; a legal duty for HR to consult trade unions over potentially discriminatory uses of AI; an update to the UK GDPR to prevent discriminatory algorithms; and a legal right for employees to ‘turn off’ AI co-workers that would otherwise monitor their activity constantly.
The use of AI is certainly a new source of workplace conflict, particularly because employees will find it much easier to complain about an inhuman algorithm than about a colleague or manager. There are none of the usual social barriers to airing grievances: no fear of awkward scenes or recriminations, and no ambiguities, because the logic of the AI is always explicit and open to challenge.
So HR need to be ready for a lopsided human versus technology conflict. That means extending feelings of trust in the organisation to include AI and autonomous tech, and being aware of the impact of AI on relationships between staff and the managers running the tech.
Most of all, HR have to be confident in the culture: that people feel able to talk about concerns and raise problems early without resorting straight to formal action, and that managers have the skills to handle the difficult conversations that will be involved.
That means introducing good processes that build confidence and trust among everyone involved with an organisation: mediation, neutral assessments and dealing with grievances in general. But processes are just one part of the solution.
Deeper and longer-lasting change comes from institutions where the right behaviours and skills have become commonplace. That means building a culture of ‘Conversational Integrity’ that leads to better handling of difficult conversations, helps employees feel able to be open and, most of all, creates a sense of psychological safety in workplaces: people can be themselves and discuss problems early on without fear of recriminations or being ‘gagged’.
In other words, as workplace AI and digital people management become more widespread, the more human qualities come at a premium. HR need to make sure there’s a safety net of people skills and processes.