Algorithms on the job: data-driven wages and discrimination drivers

As with all things to do with AI, algorithmic or 'algorithmically personalised' wages are a hot topic.

Developments in AI and tracking have enabled employers to manage their workers through algorithms built on data collected from those very same workers, and the practice now extends to automatically determining workers’ wages.

In the context of ride-share drivers, for example, datasets collected from drivers as they use the app (covering historic routes, times, fares, bonuses and so on) are analysed by the company’s machine learning systems to automate ride allocation and the payments associated with each allocation.
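
As a concrete, simplified illustration of how such a system might set pay, the Python sketch below derives a personalised per-ride offer from a driver’s own history. The features, weights and formula here are all assumptions for illustration; real ride-share pay models are proprietary and far more complex.

```python
# Hypothetical illustration of algorithmically personalised pay.
# All feature names, weights and the formula itself are assumptions;
# no real company's pay model is being reproduced here.

from dataclasses import dataclass

@dataclass
class DriverHistory:
    acceptance_rate: float   # share of offered rides accepted (0-1)
    peak_hour_share: float   # share of rides driven at peak times (0-1)

def personalised_offer(base_fare: float, history: DriverHistory) -> float:
    """Adjust a base fare using the driver's own behavioural data.

    A driver who accepts most offers may be shown a *lower* fare,
    because the model predicts they will accept it anyway -- this is
    the dynamic critics describe as unequal pay for the same work.
    """
    # Assumed adjustment: discount offers to highly accepting drivers,
    # add a small premium to retain drivers who favour peak hours.
    acceptance_discount = 0.15 * history.acceptance_rate
    peak_premium = 0.05 * history.peak_hour_share
    multiplier = 1.0 - acceptance_discount + peak_premium
    return round(base_fare * multiplier, 2)

# Two drivers offered the same ride can be shown different pay:
eager = DriverHistory(acceptance_rate=0.95, peak_hour_share=0.2)
choosy = DriverHistory(acceptance_rate=0.40, peak_hour_share=0.2)
print(personalised_offer(20.0, eager))   # 17.35 -- lower offer
print(personalised_offer(20.0, choosy))  # 19.0  -- higher offer
```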


Algorithmic wage-setting raises emerging legal and societal concerns.

One such concern is 'algorithmic wage discrimination', though the impacts might not necessarily be unlawfully discriminatory (i.e., against protected classes), given that legal standards vary across jurisdictions.

Nonetheless, algorithmic wages can result in variable pay – unequal, inconsistent, and/or unpredictable wages – that can have negative workforce consequences and lead to disputes.

Variable pay exists in workplaces without algorithm-driven wage determinations, and algorithmic wages have so far emerged only in the context of dynamic pricing and independent gig work. Even so, there are steps we can take to address and curb the economic and social inequities emerging from algorithms in the gig economy.

In addition to privacy and transparency concerns, critics of algorithmic wages argue that because these wages lead to unequal and inconsistent pay for the same work (i.e., either by different workers at the same time, or by the same worker at different times), they inject concepts of gambling into earning a living wage and hinder a worker’s overall economic mobility.

Moreover, because workers in the gig economy are disproportionately drawn from ethnic minorities, economic impacts stemming from this practice can be highly racialised.

The complex and secretive nature of algorithmic wages also creates hurdles to obtaining remedies, such as challenging wage determinations, particularly for economically vulnerable workers.

Critics argue algorithmic wages challenge longstanding values of fairness and justice associated with labour and the workplace and could devalue work as we know it. 

Various workers’ groups and courts have already taken action, and the legal landscape is developing, with some legal scholars calling for an outright ban of the practice.

Beginning in 2021, workers across Europe scored victories against Uber and Deliveroo for using automatic algorithmic determinations in terminating and allocating work, respectively.

Lyft and Uber drivers in California have a pending antitrust lawsuit challenging “non-linear compensation systems based on hidden algorithms”. With the recent passage of the California Privacy Rights Act, which is based on the EU’s General Data Protection Regulation, workers in California can now request the data extracted from their work that their employers use in setting algorithmic wages.

While some regulations, such as California’s Proposition 22, allow certain forms of algorithmically personalised wages, using algorithms to determine workers’ wages is not necessarily free from legal consequences.

Recently in the US, the Equal Employment Opportunity Commission and other federal enforcement agencies announced a joint effort to monitor and “protect the public from bias in automated systems”, an effort I expect to become a new source of wage-focused litigation.

As such, algorithm-driven workplace technologies need to be continuously evaluated for legal and regulatory compliance. Further:

  • employers should adequately notify workers of these technologies in their workplace;
  • algorithm-driven workplace technologies should not discriminate against workers based on protected characteristics, and they should be monitored for adverse impact on protected groups (which might vary depending on where workers perform services), as in the first sketch after this list; and
  • worker data should be safeguarded against misuse, through pseudonymisation (see the second sketch below) and other data-protection measures, and algorithms should be continuously evaluated to mitigate unintended consequences.
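
What monitoring for adverse impact could look like in practice is sketched below, applying the US “four-fifths” rule of thumb to average algorithmic pay across groups. The group labels, pay figures and the choice to apply the 0.8 threshold to pay rather than to selection rates are illustrative assumptions, not a compliance standard in themselves.

```python
# Minimal sketch of an adverse-impact check on algorithmic pay,
# loosely modelled on the US EEOC "four-fifths" rule of thumb.
# Group labels, pay figures and applying the 0.8 threshold to pay
# (rather than to selection rates) are illustrative assumptions.

from statistics import mean

def adverse_impact_ratios(pay_by_group: dict[str, list[float]],
                          threshold: float = 0.8) -> dict[str, float]:
    """Return each group's mean pay as a ratio of the best-paid group.

    Ratios below `threshold` flag a disparity worth investigating;
    a flag is a prompt for human review, not proof of discrimination.
    """
    means = {group: mean(pays) for group, pays in pay_by_group.items()}
    best = max(means.values())
    return {group: round(m / best, 3) for group, m in means.items()}

# Hypothetical weekly earnings pulled from the wage algorithm's logs:
sample = {
    "group_a": [612.0, 640.0, 598.0],
    "group_b": [455.0, 470.0, 462.0],
}
for group, ratio in adverse_impact_ratios(sample).items():
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio} -> {status}")
```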
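
And as a minimal sketch of pseudonymisation, the example below replaces a worker’s direct identifier with a keyed hash before the record enters a wage algorithm’s dataset. The salted-HMAC approach and field names are assumptions for illustration, not a complete data-protection programme.

```python
# Minimal sketch of pseudonymising worker identifiers before they
# enter a wage algorithm's training data. The keyed-hash approach
# and field names are illustrative assumptions only.

import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-me-in-a-vault"  # assumption: managed secret

def pseudonymise(worker_id: str) -> str:
    """Replace a direct identifier with a keyed, repeatable pseudonym.

    HMAC keeps the mapping consistent (the same worker always maps to
    the same token) while the key, held separately, is needed to link
    tokens back to people -- the core idea of pseudonymisation.
    """
    return hmac.new(SECRET_KEY, worker_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"worker_id": "driver-4821", "hours": 38.5, "weekly_pay": 612.40}
safe_record = {**record, "worker_id": pseudonymise(record["worker_id"])}
print(safe_record)
```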

These are just a few of the safeguards that society, led by employers, workers and technologists, can develop and implement to prevent inequities in compensating workers in an increasingly automated world.

Judd Grutman is counsel at US law firm The Torrey Firm, part of the IR Global network