Stop AI Hiring Bias: 3 Keys to Keeping It Under Control

Following the initial rise of ChatGPT at the end of 2022, AI has been catapulted into mainstream use, and over the past year, companies in nearly every industry have implemented the technology into their business processes. In fact, new data found that 55% of staffing firms are experimenting with AI.

But persistent questions about algorithms’ ability to combat AI hiring bias have left recruiters and job seekers with concerns about fairness and trust. According to the American Staffing Association, 49% of job seekers believe AI tools used in recruiting are more biased than human recruiters.

There is real cause for concern about AI hiring bias, and the onus is on leaders to implement strategies that address this unease and prevent potential discrimination. By focusing on quality data, creating the right level of transparency and balancing human involvement with AI processes, leaders can ensure their models are reducing bias, rather than amplifying it, while increasing trust among customers, recruiters and candidates.

Quality, Diverse Data Counters AI Hiring Bias

In 2022, researchers at Cambridge University found that recruitment tools were not able to eradicate AI hiring bias, causing many companies to abandon these tools. Since then, we’ve come a long way in our understanding of AI algorithms and development of the technology, flipping the narrative on how to integrate AI into the recruiting process.

As a result, there’s a massive opportunity to improve efficiency in the process when AI is used correctly. AI is only as smart as what you train it on, and to ensure AI can reduce or eliminate bias in the hiring process, it’s critical that the model is trained on diverse and inclusive hiring data.

Unintentional bias can arise from homogeneous datasets — data in which a key variable is uniform, for example, candidates who are all the same age — because they don’t provide a representative sample of individuals from which to judge outcomes. If an organization has historically staffed its business with a significant slant toward one gender or one race, the model will lean toward those patterns because that’s the only data it has to work from.

To reduce AI hiring bias, you need a broad, heterogeneous dataset spanning different geographies, backgrounds, genders and more. For example, at Bullhorn, our algorithms are trained on data points from hundreds of millions of candidates, coming from over 10,000 companies, which results in statistically significant training data.
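One simple way to surface the kind of skew described above is to measure each group’s share of the training data before a model is trained. The sketch below is purely illustrative — the field names and records are hypothetical, not any vendor’s actual pipeline:

```python
from collections import Counter

def representation_report(candidates, attribute):
    """Return each group's share of the dataset for one attribute
    (e.g. 'gender' or 'region'), so skews are visible before training."""
    counts = Counter(c[attribute] for c in candidates)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical candidate records for illustration only.
candidates = [
    {"gender": "female", "region": "EMEA"},
    {"gender": "male", "region": "APAC"},
    {"gender": "male", "region": "AMER"},
    {"gender": "female", "region": "AMER"},
]

print(representation_report(candidates, "gender"))
# {'female': 0.5, 'male': 0.5}
```

A report heavily weighted toward one group is a signal to rebalance or broaden the data before training, not proof of bias on its own.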

Test Models and Increase Transparency

According to a Pew Research survey, 41% of Americans are against AI being used to review job applications – suggesting there’s a lack of understanding about how AI is often used. Building transparency into AI hiring technology helps to build trust around the process and increases fairness and accountability. Training your AI models on a heterogeneous dataset is a start, but being open about how those models are tested is what addresses concerns about AI hiring bias.

Frequent disparate impact testing is an essential step in preventing AI hiring bias. Analyzing the outcomes of AI-powered hiring through third-party audits is crucial to understanding whether the models are disproportionately favoring some demographic groups and excluding others. With regular audits, leaders can catch potential AI hiring bias in the model before it leads to discrimination.
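One widely used statistical check in disparate impact testing is the “four-fifths” guideline from the EEOC’s Uniform Guidelines: a group’s selection rate should be at least 80% of the highest group’s rate. This is a minimal sketch of that check — the group names and counts are hypothetical, and a real audit would involve far more than this single ratio:

```python
def selection_rates(outcomes):
    """outcomes maps group -> (selected, total); returns group -> rate."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate (the EEOC 'four-fifths' guideline)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top >= threshold for g, rate in rates.items()}

# Hypothetical audit counts: (candidates advanced, candidates screened).
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))
# {'group_a': True, 'group_b': False}
```

Here group_b advances 30% of the time versus group_a’s 50%, a ratio of 0.6 — below the four-fifths threshold, so the model’s outcomes for that group would warrant a closer look.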

For organizations evaluating outside vendors to implement AI software, it’s crucial to ensure the vendor conducts third-party audits of its models. If a vendor won’t or can’t provide an AI bias audit report when you request one, that’s a red flag that its model may have been built on a homogeneous dataset.

In addition to addressing concerns about bias and transparency, and building trust in the technology, in many cases AI bias audits are becoming a legal requirement. For example, New York City’s recent Local Law 144 requires employers using automated employment decision tools (AEDT) to complete a bias audit before use and requires job candidates in NYC to receive notice that the employer or employment agency uses an AEDT.

Additionally, the recently passed EU AI Act, which is likely to spur additional legislation, impacts some organizations in the U.S. and includes requirements around data governance and testing datasets.

Strike Balance Between AI and Human Involvement

While AI is a fantastic tool to augment the hiring process, it shouldn’t operate in a silo. Instead, leaders should leverage AI for efficiency and automation, not for final decision-making. For example, AI can be used independently for a variety of processes, from scanning resumes, to helping recruiters communicate with candidates more efficiently, to sourcing the best candidates for a role. But in its current state, AI can’t replicate a human’s evaluation of communication style, culture fit or soft skills in an interview. So real people must still be heavily involved in evaluations and final candidate selection.

Beyond striking the right balance of AI and human participation, leaders must actively invest in training for recruiters to arm them with the tools to use AI efficiently. With the right guidelines and tools, recruiters can determine how to ingrain AI into their daily workflows to increase efficiency and focus their work on the human connection.

It’s clear that staffing and recruiting agencies utilizing AI for hiring see better results. Recent data from Bullhorn found that while the adoption of AI is still in its earliest stages, firms that experimented with AI in 2023 were already 31% more likely to see revenue gains than firms that had not begun experimenting with AI.

Moreover, as companies recognize the potential of AI to drive business success and revenue gains, they need to prioritize upskilling efforts for their employees to ensure a seamless integration of AI into their existing recruiting processes. This is critical to increasing both success with and trust in AI, and developing a healthy relationship between AI and human recruiters, which further prevents bias.

With quality, diverse datasets, audited technology and recruiters equipped to use that technology effectively, AI can help fill roles faster with the best candidates while minimizing AI hiring bias. As AI advances, it will become more important than ever for leaders to invest in the technology that will provide a competitive advantage as the talent market evolves.