Comment

AI isn’t the problem in hiring, outdated algorithms are

If we build better systems, AI can be a driver of a fairer, faster recruitment process, says Matrix's Roger Clements

The Workday lawsuit highlights the danger of outdated algorithms, not AI itself. Smarter, self-regulating systems and clear regulation are the real path to fair, scalable recruitment.

When it comes to AI in hiring, I’ve always believed it can be a force for good, not just for efficiency, but for fairness. But the recent lawsuit against Workday is a timely reminder that how we implement AI is just as important as whether we use it. This isn’t about fearmongering or throwing out the tech. It’s about making sure we’re using the right tools, in the right way, at the right time, and with the right balance of human intervention and oversight.


Read more: Is your AI fair? How HR can tackle algorithmic gender bias


According to the news report, multiple applicants over 40 claim they were filtered out of job applications by Workday’s hiring platform – some within minutes of applying. They argue that algorithms, not humans, were making decisions at the very start of the recruitment funnel. If true, that’s a problem. Not because AI shouldn’t be there, but because outdated and ungoverned models are being used for complex decisions they were never designed to handle.

The issue isn’t AI itself; it’s the architecture. Early systems often relied on narrow, rule-based or shallow-learning algorithms trained on data sets whose size was inappropriate for the use case. These models weren’t deep enough, wide enough or adaptive enough to avoid reinforcing historic bias. They lacked self-regulation and iterative feedback and, as a result, could perpetuate exclusion rather than open doors.

What’s available today is very different. We’re now entering an era of AI systems that are multilayered, self-aware and able to regulate their own learning. These stack-based models can detect adverse impact, flag anomalies and retrain themselves to correct bias. This isn’t science fiction; it’s already being used by forward-thinking employers who understand that fairness isn’t static. It’s evolving.


Read more: HR and AI: How can HR use AI effectively and ethically?


That’s why we shouldn’t be looking to roll back AI in hiring. We should be accelerating the move toward smarter, more transparent models that put candidate experience, equity and accountability at the centre. Because let’s face it, hiring at scale without AI is no longer realistic or practical, given the competing demands for speed of hiring and immediacy of response. Human-led filtering for every applicant at the top of the funnel is inefficient, inconsistent and potentially just as biased. The key is not to remove AI from the process but to be crystal clear about what it’s doing, how it’s doing it and where the lines of responsibility sit.

This case also invites a conversation about regulation. Compare the US with the EU: under the EU AI Act, deployment of the kind described in the Workday lawsuit would not be permissible. The legislation mandates human oversight for any AI-driven decisions that materially impact someone’s future. In the US, by contrast, regulation is piecemeal, devolved to states with limited overarching guidance. That creates a vacuum where the risk of deploying systems without sufficient checks and balances becomes very real.


Read more: Four lessons on ethical AI use in recruitment


We need global alignment on what ethical AI looks like in the hiring context, because talent is global and trust is universal.

Where does this leave employers?

If you’re using AI to support the screening and, ultimately, the ranking of candidate suitability, make sure it’s the latest generation, not a legacy approach built on outdated data models. Ask yourself: does your application self-correct? Can it flag patterns and explain decisions? Are you auditing regularly across demographics? And have you built your processes and operating models with human intervention and oversight at critical selection or pre-selection intervals? This isn’t about abandoning progress. It’s about ensuring progress serves everyone.
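As one illustration of what a regular demographic audit might involve, the widely used "four-fifths rule" from US selection guidelines compares each group's selection rate against the best-performing group's. The sketch below is illustrative only; the group labels and counts are hypothetical, not drawn from the Workday case.

```python
# Illustrative adverse-impact check using the "four-fifths rule":
# a group whose selection rate falls below 80% of the highest
# group's rate is commonly treated as showing adverse impact.
# Group names and numbers here are hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (selected, applied)} -> {group: rate}"""
    return {g: sel / applied for g, (sel, applied) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold`
    times the best-performing group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

outcomes = {
    "under_40": (120, 400),  # 30% selected
    "over_40": (45, 300),    # 15% selected
}
print(adverse_impact_flags(outcomes))
# over_40 is flagged: 0.15 / 0.30 = 0.5, below the 0.8 threshold
```

A check like this is a floor, not a ceiling: it catches gross disparities in outcomes but says nothing about why they arise, which is where human oversight and explainable decisions come in.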

Because technology doesn’t operate in a vacuum. It reflects the values, data and priorities we feed into it. If we build better systems and regulate them with clarity and consistency, AI, operating in sync with human oversight, can absolutely be the engine of a fairer, faster recruitment process.

But if we keep relying on black box systems and outdated algorithms, we’re not building the future. We’re just automating the past.

By Roger Clements, chief growth officer, Matrix Workforce Management Solutions