AI guaranteed to go wrong, says MP

AI is guaranteed to go wrong and could have devastating effects on the workforce, according to Conservative MP David Davis.

Speaking at a Trades Union Congress (TUC) conference on 18 April, Davis said the progress of AI must be slowed down until its dangers are better understood. 

Agreeing with an open letter signed by tech leaders Steve Wozniak and Elon Musk, Davis argued no one fully understood the effects of advancing the technology, calling it ‘a profound risk to society and humanity’.

He said: “If Wozniak, co-founder of Apple, doesn’t understand the scope of AI, what chance does a non-specialist governmental body have to regulate it properly?"


Renate Samson, interim associate director of AI research body the Ada Lovelace Institute, echoed Davis' sentiment.

She said: “Even the experts don't understand what they're actually building. We’ve had 20 years of change in six months. Things are moving so fast that they are going to get broken.” 

Davis said any form of workplace automation must be treated with caution, comparing the unchecked use of AI to the Post Office scandal, which saw more than 700 subpostmasters wrongly prosecuted for theft, false accounting and fraud due to a faulty computer accounting system.

He argued we could face similar wide-scale miscarriages of justice if AI were left unchecked.

Also speaking at the conference, Labour MP Chi Onwurah said AI in the workplace is raising serious ethical concerns due to the flawed data it is based on.

She said: “AI is seen as very convenient because it can automate so many processes...but it can also automate racism, sexism and exploitation. 

"If something is not diverse by design it will not be equal by outcome." 

Onwurah said the use of AI in workplace surveillance was also eroding workers' privacy and autonomy.

She added: “My constituents feel tech is something being done to them by opaque algorithms. 

“Workers need stronger bargaining mechanisms about how AI is implemented.” 

According to the TUC, the percentage of workers reporting workplace surveillance and monitoring increased from 53% in 2020 to 60% in 2021, likely due to the pandemic. 

A report published earlier in April by thinktank the IPPR found young people, women and black people are most likely to be affected by workplace surveillance. 

Christina Colclough, founder of the human rights and technology consultancy Why Not Lab, said methods of AI workplace monitoring can include location tracking, facial recognition, word and voice monitoring (including evaluating tone of voice and the frequency of words used) and keystroke monitoring.

She detailed a number of potential harms from monitoring, including mental health pressure, work intensification and loss of autonomy. 

Speaking at the TUC conference, she said: “Where AI is seen as a money saving measure for administration, workers end up inputting data on top of their usual work. 

“I also hear of a lot of workers ‘double jobbing’. They have to do their jobs both on paper and input data to an AI process, simply because they can’t trust the AI.” 

Colclough said as jobs shift or decline due to automation, there should be a ‘disruption obligation’ on employers. 

She added: “If employers introduce disruptive automation they should have to retrain and upskill workers into new roles, which will inevitably be created by AI.” 

Davis concluded the best way to tackle these issues was solidarity across the workforce.  

He added: “It isn't just the warehouse worker at risk, it's also the architect who built the warehouse. We need solidarity.” 

 

Listen to the HR Most Influential Podcast episode (S2, E4) dedicated to the human impact of AI.