Levelling the playing field

AI can take bias out of the recruitment process but it needs the care and trust of HR first, finds Beau Jackson

At its best, artificial intelligence (AI) should make tasks simpler and more intuitive. Andi Britt, vice-president of IBM Global Business Services, argues that it has great potential to support human flourishing and serve humanity. Yet as a new technology we don’t wholly understand, it can seemingly make things more complicated.

In 2018, Amazon made the headlines for its so-called ‘sexist AI’ tool that was inadvertently sifting women out of its recruitment process. Problems like this can occur due to biases that exist within the data that the AI tool is processing.

For example, if the majority of your workforce is already made up of white males and that isn’t taken into consideration when creating the tool, there’s a chance it could learn to associate identifiers such as ‘white male’ with success and flag candidates who don’t match that demographic as unsuitable for the role.

Similarly, if people from the same demographic group are the only ones in the room when developing the AI, its creators can subconsciously programme biases into the process that then go unnoticed.

Given the risk of bias, it can seem easier for HR to play it safe during the recruitment process by avoiding AI. Yet this ignores the overwhelming bias that humans already bring to any selection process.

Gareth Jones, CEO of early-talent hiring technology developer Headstart.io, says: “People are saying, rightly so, they’re concerned about AI in recruitment particularly because we’ve got examples where AI algorithms have been constructed in such a way, and they’ve been fed a certain level of data that means they end up being biased towards candidates.

“If you look at that on face value you could be forgiven for thinking that everything we do now is bias free, and that experimenting over here with AI is where the problems are when actually, it’s not. That’s just the natural evolution of technology, what we’re doing now is incredibly biased.”

AI shoulders the blame because it often brings our inescapable, subconscious processes to the fore, which is uncomfortable for many, suggests D&I consultant Toby Mildon. In humans this happens at what Mildon calls the ‘other-than-conscious’ level, so we’re rarely, if ever, aware of it.

He says: “Bias is just a preference for something, but we have preferences that have a negative or a positive impact on others.

“It’s something that’s always happened, it’s something that will always happen, because it’s a product of our neurology, the way that our brains are wired, and it’s the product of social conditioning.”

The difference between humans and AI is that we are able to spot biases in technology much more easily than we are able to spot our own.

Examples like Amazon’s ‘sexist AI’ show us uncomfortable truths about wider society, Mildon adds. “I think what we’re noticing is that because technology is so much more binary, AI is holding up a mirror to society, and it’s basically reflecting our biases.”

It will take decades if not centuries to weed out the unconscious biases that negatively affect our decisions, so it could be much quicker to re-programme a biased AI to help us make better decisions. One straightforward way to do this is through an AI audit.

Mary Kay Evans, chief marketing officer at AI recruitment software provider Pymetrics, says: “While it is impossible to correct human bias, it is demonstrably possible to identify and correct the bias in AI technology. AI can be designed to be audited and the bias found in it to be removed.”

Evans says an AI audit should function like safety tests of a new car before it is allowed on the road. She adds: “If standards are not satisfactorily met, the defective technology can actually be corrected before it is allowed into production.”
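Evans does not detail Pymetrics’ audit methodology here, but one widely used check of this kind is the ‘four-fifths rule’, which compares selection rates across demographic groups. The minimal Python sketch below illustrates the idea; all names, data and the 80% threshold are assumptions for illustration, not Pymetrics’ actual tests:

```python
# Minimal sketch of an adverse-impact audit using the "four-fifths rule":
# the selection rate for every group should be at least 80% of the rate for
# the most-selected group. All names, data and thresholds are hypothetical.

def selection_rates(outcomes):
    """outcomes maps group name -> (number selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def passes_four_fifths(outcomes):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # The tool fails the audit if any group's rate falls below 80% of the best.
    return all(rate >= 0.8 * best for rate in rates.values())

audit_data = {"group_a": (50, 100), "group_b": (30, 100)}  # hypothetical counts
print(selection_rates(audit_data))     # {'group_a': 0.5, 'group_b': 0.3}
print(passes_four_fifths(audit_data))  # False: 0.3 is below 0.8 * 0.5
```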

While technological advancement must always be tempered with ethical implementation, if it is possible to create an AI that can remove bias from the selection process, what’s stopping HR from using it in the same way it does with other technologies?

Where biased, human intuition is holding us back, is it wrong to delegate to a machine?


Widening the pool

Though more work to reduce bias arguably needs to be done at the interview stage itself, and throughout the talent pipeline, AI is currently proving it can have a real impact in sourcing a more diverse pool of people for recruitment.

One of the core elements of Headstart’s technology, according to Jones, is to “try and neutralise the Ivy League and Red Brick university approach”, and to tackle issues of social deprivation.

It does this by creating a ‘fingerprint’ of a job description, containing the core requirements of a role. Data for each candidate is compared to this core fingerprint and each is allocated a match score on how well they meet the criteria.

One of the things taken into account in candidates’ match scores is their answers to a set of questions designed to give an idea of their socioeconomic status, questions like: “Are you the first person from your family to go to university?” and “Where were you when you were 14?”

Answers to these questions are aligned, using AI, with data the company crunches from public sources, such as school reports and free school meal data, which helps to create a picture of local geography and pinpoint where people may have been brought up in more challenging circumstances.

Jones explains: “What we’ll do is we’ll take the data we’ve collected, and we will uplift someone’s match score if they’ve been through or they have indications of social deprivation, so they get an uplift of up to 8%.”
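Headstart’s model is proprietary, so the sketch below only illustrates the mechanics Jones describes: a score against the role’s ‘fingerprint’, uplifted by up to 8% for indicators of social deprivation. Every field name, weight and similarity measure here is an assumption made for illustration:

```python
# Illustrative sketch of the mechanic Jones describes: a match score against
# a role "fingerprint", uplifted by up to 8% for indicators of social
# deprivation. Field names, weights and the similarity measure are
# assumptions, not Headstart's actual model.

ROLE_FINGERPRINT = {"python", "sql", "communication"}  # core requirements of the role

def match_score(candidate_skills):
    """Score 0-100 based on overlap with the role's fingerprint."""
    overlap = candidate_skills & ROLE_FINGERPRINT
    return 100 * len(overlap) / len(ROLE_FINGERPRINT)

def adjusted_score(candidate_skills, deprivation_index):
    """deprivation_index: assumed 0.0-1.0, derived from answers such as
    'first in family to go to university' plus public local data."""
    uplift = 1 + 0.08 * min(max(deprivation_index, 0.0), 1.0)
    return min(match_score(candidate_skills) * uplift, 100.0)

print(adjusted_score({"python", "sql"}, deprivation_index=1.0))  # ~66.7 uplifted to ~72.0
```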

Rather than going in the opposite direction and excluding people’s education from the process entirely, the aim of including socioeconomic information in candidates’ scores is to try to level the playing field in how success is measured. Acing the exams at a prestigious college is different from, for example, getting high grades at a little-known college while also caring for a parent.

With this method, Jones hopes to help accelerate the progress society has started to make on D&I. Working with Accenture, Headstart’s method delivered a 2.5% increase in black and ethnic minority hires for technology-focused roles, and a 5% increase in female hires for the company – making the incoming cohort more than 50% female overall.

Jones adds: “We’ve in theory had a focus on diversity and changing our approach and our attitudes. But little has changed.” This is one of the reasons he says his company focuses on early talent, as it’s at the start of the overall pipeline.

“It’s probably the only area in a business that you can put quite a large number of people into the organisation at a time when they’re much more open minded and are much more diverse – you can move the dial more,” he says.

Another alternative to CVs aimed at reducing bias in selection processes is the use of job-related assessments. In his consultancy work, Mildon describes experimenting with AI screening technology from GapJumpers, a company that seeks to advance objective decision-making in recruitment.

Through GapJumpers, candidates are assessed for their suitability for a role by completing an online challenge related to the work they would be doing.

When tasked with improving the diversity of a recruitment process for one of his clients, Mildon and his team noticed that part of the problem was at the CV screening stage. Implementing GapJumpers’ method instead, he says: “We saw a 130% increase in people from an ethnic minority background getting called in for interview compared to the normal way of screening candidates – so that goes to show how technology actually helps de-bias a process.”

On this occasion, Mildon adds that technology also outperformed other potential solutions for de-biasing recruitment. “There are other good tools out there, but GapJumpers was a tool that was a lot more effective than putting the recruitment team through unconscious bias training for instance, which had no impact whatsoever.

“The technology that we used outperformed them in terms of the impact that it made.”

Mildon argues that when technology works as it should, it is a great solution to combat bias. He adds: “You have to pick the right tools for the right problem space. We noticed that there was something going on in CV screening so we selected a tool to address that particular issue.”

Adding weight to the ‘business case’ for D&I, there is also research suggesting that using AI in this way not only helps to reduce bias, but improves the quality of the talent pool too.

In a study of common CV screening algorithms led by Danielle Li, associate professor at the MIT Sloan School of Management, a standard screening approach – one which selects applicants based on how well their CVs match those of people who already work at a company, who went to the same schools, for example – resulted in a hiring success rate of just 10%.

The team therefore built an AI algorithm that allocated an ‘exploration bonus’ to applications with characteristics outside the norm. They found that this method not only increased the racial diversity of the hiring pool, it also improved the quality of candidates, i.e. those who would be hired following an interview.
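Li’s team’s published approach is a contextual bandit model; the toy sketch below captures only the core idea of an exploration bonus, a boost for candidate profiles the algorithm has rarely evaluated. The formula, weight and names are assumptions for illustration, not the study’s exact method:

```python
import math

# Toy sketch of an "exploration bonus": candidate profiles the firm has
# rarely evaluated receive a UCB-style score boost. The formula, weight and
# names are illustrative assumptions, not the study's exact algorithm.

def score_with_exploration(base_score, times_profile_seen, total_evaluations, weight=10.0):
    # The bonus shrinks as a profile type is seen more often.
    bonus = weight * math.sqrt(math.log(total_evaluations + 1) / (times_profile_seen + 1))
    return base_score + bonus

common = score_with_exploration(70.0, times_profile_seen=500, total_evaluations=1000)
rare = score_with_exploration(70.0, times_profile_seen=3, total_evaluations=1000)
print(f"common profile: {common:.1f}, rare profile: {rare:.1f}")  # the rare profile ranks higher
```

The design intuition is that a small, shrinking bonus keeps the algorithm testing unfamiliar profiles instead of locking onto lookalikes of existing staff.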


Skills: the new currency

For IT services company Tata Consultancy Services’ (TCS) UK & Ireland HR director Ramkumar Chandrasekaran, the biggest argument for the use of AI is in volume recruitment.

TCS usually hires around 30,000 people each year, sourcing talent from top graduate schools. In the past year, though, it introduced a national qualifier test for new recruits in India.

In this process, the company removed candidates’ education data and anonymised applications, which Chandrasekaran suggests is helping the company eliminate at least 50–60% of bias in the shortlisting process. Following the qualifier test, a higher proportion of successful candidates were female.
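TCS has not published the mechanics of this process, but conceptually the anonymisation step works something like the sketch below, with identity and education fields stripped before shortlisting (field names are assumptions, not TCS’s schema):

```python
# Conceptual sketch of anonymised shortlisting: identity and education
# fields are stripped so only role-relevant data is scored. Field names are
# assumptions, not TCS's actual schema.

IDENTIFYING_FIELDS = {"name", "gender", "date_of_birth", "photo", "university"}

def anonymise(application):
    return {k: v for k, v in application.items() if k not in IDENTIFYING_FIELDS}

application = {
    "name": "A. Candidate",
    "gender": "F",
    "university": "Example University",
    "test_score": 87,            # national qualifier test result
    "skills": ["java", "sql"],
}
print(anonymise(application))    # {'test_score': 87, 'skills': ['java', 'sql']}
```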

When it comes to a question of increasing diversity in such volumes of candidates, Chandrasekaran argues that employers have no other choice. He says: “There may be better ways of reducing bias – I do not know, honestly – but I have not thought of a way in which you can reduce bias systemically at scale without using technology.”

He adds: “Training, making people aware of their biases, coaching – those kinds of things still happen. But if you want to have an impact at large scale, we have to depend on technology.”

Questioning whether companies need to hire in such volumes therefore changes the conversation when it comes to AI and D&I. For Headstart’s Gareth Jones, this is one of the big flaws in current hiring processes.

McKinsey & Company’s ‘war for talent’ described how competition for good quality candidates was being intensified by a demographic shift in the US and Europe, meaning there were fewer people to ‘supply’ roles, though ‘demand’ for them remained high.

Coined in the late nineties, the term made employers start to look outwardly for talent. Jones says: “That pushed our lens externally, which was when we started to ignore internals, which is the wrong thing. I think we need to shift back to saying, do we really need to hire 200,000 people a year? Or could we reduce that number?”

It’s a necessary mind shift, he says, but one that technology can help us with: “The quality of data that we’ve got on people in our business and what we do with it is quite poor still. A good quality business has a 360-degree view of the customer […] By comparison, we’ve probably got a 20-degree view of people because we just under invest in it. We don’t understand our talent.”

The way AI proposes to help companies better understand their internal talent will continue to push the focus away from CVs, and towards the skills and personal qualities that make people a good cultural fit for an organisation, as well as able to do the job.

IBM has been using AI in its HR processes for several years thanks to the creation of Watson, a natural language computer system predating Amazon’s Alexa, which can answer questions in a human-like way.

IBM’s Andi Britt says: “We’ve been doing a lot of personalised recommendations for things like learning and careers, where we’re using some smart combinations of AI, data and analytics to make HR much more personalised.”

Rather than just offering mass information through an employee guidebook, AI at IBM uses data and knowledge it has about who’s using it to make informed recommendations. As an example, the company has developed the Watson candidate assistant, which helps prospective job candidates identify the roles that are right for them within the company.

Building on this platform, the next stage for IBM will be using data from its own workforce to build out what Britt calls ‘success profiles’. By collecting data on employees who perform well in their roles, the company aims to analyse exactly what qualities make them so successful. It will store this information in profiles that will help both to recruit new candidates and to move people between roles internally.

None of this data, Britt explains, includes anything relating to individuals’ protected characteristics. Not only does this effectively take bias out of the equation, it also takes ideas about what makes a good candidate out of human hands – where they are most at risk of being influenced by bias.

He says: “Recruitment is going to become more scientific – it’s going to be insights driven and it’s going to be predictive rather than just gut feel.”
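Britt does not detail the data model behind these profiles. As a rough illustration of the idea, the sketch below aggregates the attributes of high performers while excluding protected characteristics; all field names are assumptions, not IBM’s design:

```python
from collections import Counter

# Rough illustration of a "success profile": aggregate the attributes of
# high performers in a role while excluding protected characteristics.
# All field names are assumptions, not IBM's actual data model.

PROTECTED = {"gender", "ethnicity", "age", "disability", "religion"}

def build_success_profile(high_performers):
    profile = Counter()
    for person in high_performers:
        for field, value in person.items():
            if field not in PROTECTED:
                profile[(field, value)] += 1
    return profile

team = [
    {"skill": "cloud", "gender": "M", "trait": "collaborative"},
    {"skill": "cloud", "gender": "F", "trait": "collaborative"},
]
print(build_success_profile(team).most_common(2))
# [(('skill', 'cloud'), 2), (('trait', 'collaborative'), 2)]
```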


The black box

For Britt the future of work is heavily focused around AI. He says: “You only have to look back at the history of technology innovation just to understand how quickly it becomes dominant and mainstream.”

He compares the influence of the technology to the way mobile phones and Wi-Fi have changed working practice over the last three decades. “AI is going to do the same. We’ve only had 20-30 or so years of the mobile phone and look how far we’ve come. We’re at the start of our journey, in the third decade of the 21st century – we’re really at the start of AI being used in HR,” he says.

Yet HR should still proceed with caution to make sure biases are avoided, warns Asad Dhunna, founder and CEO of D&I consultancy business The Unmistakables.

“To truly reduce bias in HR processes, organisations need to take a holistic approach that works from the inside out. That means assessing the structures and behaviours and breaking them down to understand where human bias can play a role.”

“Implementing technology at some or all of those stages can create a more inclusive way of working, but only if that technology is designed and developed with inclusion in mind and not used as a sticking plaster.”

Though a firm advocate for the technology, Rob McCargow, director of AI at PwC UK, admits that there is a real need to thoroughly consider risk before implementing AI. He says HR needs to take a look inside the ‘black box’ of AI to understand how it really works.

“There’s an absolute need to understand how [an AI tool has] reached a decision – and the more powerful they are, the more opaque they can be. So, there are several tools you can apply now to help you to peer into the black box,” he says.

From the technical side, it is essential that AI tools are rigorously tested both before implementation and during use to ensure they aren’t making biased suggestions. McCargow adds that there is also a lot from a governance perspective that teams can do to make sure a model “doesn’t shift and start leading to erroneous outcomes.”
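As a loose illustration of that governance point, a team might periodically re-run a fairness check on recent live decisions and raise an alert when outcomes drift apart between groups, something like the sketch below (the threshold and names are assumptions):

```python
# Loose sketch of ongoing governance: re-run a fairness check on recent live
# decisions and alert if outcomes drift apart between groups. The threshold
# and names are assumptions for illustration.

def audit_batch(decisions):
    """decisions: list of (group, was_selected) pairs from recent live use."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}

def drift_alert(rates, threshold=0.8):
    best = max(rates.values())
    return any(rate < threshold * best for rate in rates.values())

recent = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 20 + [("group_b", False)] * 80)
rates = audit_batch(recent)
print(rates, drift_alert(rates))  # {'group_a': 0.4, 'group_b': 0.2} True
```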

This advice is also upheld by the CIPD. Claire McCartney, resourcing and inclusion adviser at the CIPD, advises employers to add more rigour, consistency and challenge to all aspects of recruitment and selection processes. She says: “In particular, they should be making a conscious effort to reduce bias among hiring panels as, ultimately, they’ll be the ones deciding who gets the job.

“The considered application of appropriate technologies can be hugely beneficial and cost-effective in improving recruitment processes. But employers do need to make sure that it’s implemented carefully, complements an organisation’s brand and values and is optimised with the end-user experience in mind.”

For organisations careful not to simply abdicate responsibility for addressing bias in recruitment to AI, a blended approach is likely to be a good place to start, especially when candidate experience is also a consideration. James Gordanifar, head of student recruitment at EY UK&I, says this is the approach his team is taking.

“It is entirely possible to identify candidates through a purely digital recruitment process managed by automation, online assessments, gamification and AI hiring decisions. However, we believe a blended approach works best for us.

“While at EY we opt for a personal touch during the recruitment process, we also recognise that most candidates expect a technology-enabled, almost consumer-like experience from employers.

“It’s about balance because while technology certainly helps provide this – not to mention support in dealing with a significant volume of applications in an efficient way – having little or no human intervention means candidates don’t always get a sense of an organisation’s culture or a feeling of how they might be treated when they join.”

Gordanifar adds that it is also essential to identify what ‘good’ looks like when it comes to D&I in recruitment, and track that alongside the process. He adds: “This can help organisations to make hiring decisions based on what actually makes someone successful in their role and support the removal of potential bias.”

Above all, McCargow argues that a multidisciplinary approach is what is needed to make sure AI is a positive tool for talent management. He says: “I think it’s critical that the broader workforces are included and participate in the design and deployment of these technologies.

“It’s absolutely essential that we avoid that trap when we’re building trust that it feels like a technology is being done to the workforce, rather than developed in concert with them.

“There are far too few situations where the HR professional is in the room. That needs to improve.”


The full version of this article is published in the 2020 Technology Supplement.
