Should artificial intelligence be used to assess potential employees?


By: Lisa Hoover

The Illinois legislature recently passed the Artificial Intelligence Video Interview Act, which requires potential employers to notify interview candidates that artificial intelligence will be used during the interview and to secure their consent.

[Image: An eye peers out from a hole in drywall.]

The legislation, which targets the use of AI to “assess” candidate behavior during an interview, is primarily concerned with hidden biases, according to Workforce. Like so many other services these days, these AI video assessments use algorithms to determine whether a candidate has certain attributes or is likely to do well in a role, based on data about others’ past performance.

While some of these companies claim they can use AI to eliminate bias from the hiring process, anyone familiar with the work of Safiya Noble (Algorithms of Oppression) and others will have some immediate concerns.

If the underlying data is biased, as it so often is, the assessment of the candidates will be biased too. And, troublingly, many employers “can’t explain how the AI assessment works, what criteria they look for or how those criteria align with performance success,” according to Workforce.

In addition to bias, there are potential privacy concerns. The programs will identify “a candidates’ nervousness, mood and behavior patterns” to help determine whether the person is a good “fit for the company,” according to TechCrunch.

As with any other technology, I wonder what happens to this data afterward. Is it destroyed? Is it stored? Who has access? If I reapply to the company five years from now, will they review the report from my previous interview and use it to eliminate me without a new interview, even though I may have undergone significant personal growth? And who decides the answers to these questions – the company itself?

Furthermore, the idea of requiring candidate consent seems inherently flawed to me – is it really possible to have meaningful consent here? 

Is the average potential employee likely to feel they can actually say no? This will probably be especially problematic for the most vulnerable potential employees – those who are desperate and/or new to the field. Regardless of how much you dislike the idea, how many of us will say no if we’re concerned we’ll be rejected for declining? And if we say yes in that circumstance, is it really consent? 

I also wonder how many potentially wonderful candidates will be disadvantaged by this technology. I was particularly struck by a quote from a CNBC article: “‘first impressions’ so carefully emphasized by career coaches are now being outsourced to artificial intelligence.”

If a candidate looks promising enough on paper to warrant an interview, don’t they deserve the chance to make that first impression to the hiring committee itself? Doesn’t the hiring committee deserve the chance to vet that person themselves? What gets lost in translation, hung up on what a computer sees rather than what a human might experience? Nervousness can certainly affect those critical first impressions, and I imagine that for many of us, knowing a computer is analyzing our every muscle twitch or nervous tic will only increase that interview anxiety – especially for those new to interviewing. I can’t help but think that this use of technology will ramp up the pressure inherent in an interview, potentially screening out wonderful candidates before they have a chance to make an impression. I wonder how much that really serves either party.


As someone who went through the interview process within the last few years myself, I tried to imagine knowing that my face and voice were being analyzed by a computer the entire time. It’s hard for me to imagine this ever being the right fit for a library position – or any customer service job, for that matter. It’s hard to imagine a computer ever determining who will mesh best with patrons or customers more accurately than other people can.

I can see the appeal – why not cut down to interviewing only the very best candidates? But until we thoroughly address the potential privacy and bias issues and carefully consider the impact on potential employees, I think this is one use of AI I am not excited to experience.

References: 

Butcher, M. (2019). The robot-recruiter is coming – VCV’s AI will read your face in a job interview. TechCrunch. Retrieved September 13, 2019, from https://techcrunch.com/2019/04/23/the-robot-recruiter-is-coming-vcvs-ai-will-read-your-face-in-a-job-interview/

Gale, S. F. (2019). Could video interviewing land you in court? Workforce. Retrieved September 13, 2019, from https://www.workforce.com/2019/07/01/video-interviewing-land-you-in-court/

Riley, T. (2018). Get ready, this year your next job interview may be with an AI robot. CNBC. Retrieved September 13, 2019, from https://www.cnbc.com/2018/03/13/ai-job-recruiting-tools-offered-by-hirevue-mya-other-start-ups.html


Lisa Hoover

Lisa Hoover is a Public Services Librarian at Clarkson University and an Adjunct Professor in criminal justice at SUNY Canton. In addition to her MLS, Lisa holds a JD and an MA in political science. She began her career as an editor and then manager for a local news organization, adjunct teaching in her “spare time.” She teaches courses in criminal procedure, criminal law and constitutional law. She is passionate about 1st Amendment issues. She recently began her career as a librarian, starting at Clarkson University in June 2017 teaching information literacy sessions and offering reference services. Lisa and her husband Lee live in Norwood, New York with their cats Hercules, Pandora and Nyx and pug-mix Alexstrasza (Alex). Find her on Twitter @LisaHoover01.
