HR has an AI-powered disability problem

Unless the unintended consequences of AI-powered HR technology are urgently addressed, hundreds of millions worldwide face lifetimes of economic and societal exclusion.

AI recruitment tools have become the first line of defence in high-volume online hiring. A recruiter’s priority is to discard as many applicants as possible, as quickly as possible, narrowing the pool to the talent deemed worthy of human consideration. An increasingly controversial $38bn industry stands ready to help.

Just imagine:

  • You lose your dream job because your stammer caused you to go 15 seconds over the three-minute limit for the video interview.
  • You have a facial disfigurement: the camera doesn’t recognise your face as real.
  • You have significant sight loss but it’s impossible to ask the video assessment to disregard your non-standard eye contact.
  • You usually lip-read …but the interviewer’s a robot.
  • You have used a wheelchair since you were four, but the virtual reality test drops you walking into an ancient tomb. You struggle to even imagine standing up, never mind standing up and solving complex puzzles.
  • And how will you know if your profile, produced by scanning everything you have put online, tells the recruiter you are angry and belong to a disability rights network? Is that why your application got nowhere?

Some thought leaders have begun to address race and gender bias in HR tech, but the world’s more than 1.3bn disabled people are still so excluded from this debate that no one, including HR, has even noticed they aren’t there.

Neither the AI creators nor their HR customers understand disability discrimination.

AI creators often claim they have removed human bias because their process treats everyone the same. But standard processes are inherently discriminatory. Employers must make reasonable adjustments if they want to employ disabled people on an equal, I stress equal, basis. We treat people differently to treat them fairly. Imagine insisting that the next Stephen Hawking climb stairs to the interview because every candidate must do so.

This is not just about the data, which, let’s face it, is always ‘disability biased’. Biased data, while deeply problematic, is quite different from the concrete reality of discriminatory assumptions built into the ‘science’ and then embedded in ways of working, such as refusing to adapt an automated process so a disabled candidate can be accurately assessed.

We see classic ‘market failure’: neither the buyers nor their suppliers understand disability discrimination. Neither party seems to know how to design a fair recruitment process that is both barrier-free for groups with similar access needs (e.g. accessible application forms) and flexible for individuals needing reasonable adjustments so they can demonstrate their potential (e.g. bypassing tests which are not valid for autistic people).

AI creators are not legally obliged, anywhere, to prove their products do not discriminate against marginalised job seekers. Indeed, some argue that it is employers, under existing equality legislation, who will be held accountable. But surely both parties must share liability? Manufacturers and buyers alike are deploying these tools recklessly, having failed to exercise due diligence.

While regulators deliberate liability, HR practitioners, acknowledging the risks, can and should begin to ask their suppliers: “How have disabled people been involved throughout your development and risk assessment process? How does the system adapt for individuals, so that they can be assessed accurately and on an equal basis?”

Then HR and procurement must combine forces to mitigate the risk to disabled people and to the business, and start defining the missing red lines.

Susan Scott-Parker is founder of Business Disability International
