Want a Job? The AI Will See You Now

MIT Technology Review Podcast

MIT Technology Review


What's inside?

AI systems are increasingly taking over the task of hiring – but are they up to the job?



Editorial Rating

9

Qualities

  • Eye Opening
  • Overview

Recommendation

Companies are increasingly employing automated systems in their hiring processes – but are machines up to the job? A podcast from MIT Technology Review’s series on artificial intelligence (AI) in hiring delves into AI-based video interviewing systems, whose use has boomed during the pandemic. Journalists Jennifer Strong and Hilke Schellmann spoke with executives at AI system vendors, with users of the systems, and with experts in psychology, computer science and ethics about how these systems work and why some are raising red flags. Anthony Green, Jennifer Strong, Emma Cillekens and Karen Hao produced the episode.

Take-Aways

  • AI-based video interviewing systems are changing the way companies hire.
  • Controversy about their relevance and ethics surrounds the use of these systems.
  • In an experiment, a journalist posed as a job seeker and received perplexing results.

Summary

AI-based video interviewing systems are changing the way companies hire.

Companies are increasingly using interviewing systems that employ AI to evaluate candidates before their applications ever reach a hiring manager. These systems typically have applicants respond to standard questions via video, and the AI then analyzes and scores their responses – based on what they say, their tone of voice and in some cases their facial expressions. In most cases, vendors recommend combining the AI’s results with the hiring manager’s own judgment, but at least one recommends allowing the AI system to perform screening autonomously.

“You shouldn’t always listen to the tool. But the tool will help you make an informed decision.” (Re:work Training CEO Shelton Banks)

Advocates say these systems can offer more objective, fair evaluations than humans alone can provide, and can help companies fill vacancies quickly at scale. The industry’s leading vendor – HireVue – conducted six million interviews in 2020. In one case, a HireVue customer held 50,000 interviews in 1,500 locations – in a single weekend.

Controversy about their relevance and ethics surrounds the use of these systems.

Critics point out that AI’s ability to understand and score video interviews hasn’t been proved – and vendors have been close-lipped about how their algorithms function. One HireVue customer noticed the algorithm seemed to be giving more weight to candidates’ delivery – whether they sounded convincing – than to the content of their responses. But these same candidates went on to perform well in live interviews, suggesting that hiring managers also pay more attention to delivery than content.

“The underlying assumptions about what is valued and what is not are the key here.” (AI ethicist Suresh Venkatasubramanian)

In particular, the analysis of facial expressions has created controversy because experts question whether AI can “read” tone of voice or facial expressions at all – so it’s unclear what these systems are actually basing their evaluations on. AI ethicist Suresh Venkatasubramanian says companies’ accuracy claims mean only that the AI system’s results match its training data – but questions remain about the accuracy of the training data itself. Venkatasubramanian has raised concerns over the industry’s integrity and companies’ ability to self-regulate.

In an experiment, a journalist posed as a job seeker and received perplexing results.

Journalist Hilke Schellmann tested AI video interviewing systems by first performing an interview in English, responding normally, and then repeating the interview in German. On the basis of her responses in German, the Curious Thing system gave her a “competent” score for her English skills. The myInterview system, which analyzes both content and intonation, first produced a transcript full of gibberish – and then gave her a 73% match for the role. It even generated a complete personality analysis, based solely on intonation.

“Very few people are actually getting access to see how these black-box algorithms work.” (Hilke Schellmann)

A Rice University psychology professor who researches AI and hiring says more research is needed to determine the reliability and relevance of AI interviewing system scores.

About the Podcast

Anthony Green is a podcast producer for MIT Technology Review.
