How do companies use AI-based hiring algorithms?

Algorithmic hiring

This is the full transcript for season 5, episode 8 of the Quartz Obsession podcast on algorithmic hiring.

Listen on: Apple Podcasts | Spotify | Google | Stitcher

Scott: Gabby, welcome to the podcast.

Gabriela: I’m excited to be here!

Scott: I’m excited to have you. We’re gonna talk about work. And the ways in which technology is changing work.

Gabriela: You ever hear of a little word called disruption, Scott?

Scott: Ooh, disruption!

Gabriela: It’s not used very often, but, uh…

Scott: How do we disrupt innovation with thought leadership?

Gabriela: Everyone around the table!

Scott: Yeah, it’s overused, but it’s also true. Artificial intelligence could disrupt every major industry, but, in one important way, it’s already changing how we work… or how we get work. It’s changing hiring. Algorithmic hiring promises to help companies find the best candidates for open jobs, but how those algorithms work is incredibly opaque, and the ethics of handing hiring over to a bot are far from settled.

I’m Scott Nover, the host of the Quartz Obsession, where we’re taking a closer look at the technologies and innovations that may someday change our lives. Today: algorithmic hiring.

Today I am here with Gabriela Riccardi, who’s a deputy editor at Quartz at Work, which covers the modern workplace. Gabby, you’re employed.

Gabriela: Somehow.

Scott: How did you get this job? I said that with such suspicion! “How did you get this job?”

Gabriela: Well, you know, Scott, it’s probably going to sound like a pretty familiar process to you. I saw a job posting online here at Quartz. I dusted off my resume. I wrote out my cover letter and I sent it into a portal, and then just a couple days later, I got an email from my manager (or who would become my manager) saying that I got an interview. So I talked to him, talked to his boss, talked to her boss, did a little assignment to show how I would do the job myself, and then I heard back, and it turns out, I got it!

Scott: That is a very traditional job application process, right?

Gabriela: Mm-hmm.

Scott: It could happen in 2005, and it could happen in 2023. An analog version could certainly have happened before the internet, but the process of applying for a job and getting a job is changing.

Gabriela: Big time. Big time. To, uh, harken back to a word that we love, “disruption,” it’s being disrupted.

Scott: How is the job application process, to use that buzzword, being disrupted?

How is the job application process changing?

Gabriela: So, in the future—and the future is coming fast—your next job application process might involve fewer humans and more computers. And that’s because companies are increasingly experimenting with artificial intelligence and algorithms in their hiring processes.

So I have been looking into this. I’ve been talking to some people who study AI and ethics, and they’ve told me that those AI systems that we’re now using in hiring kind of come in two categories, which I can tell you about.

Scott: Please.

How are AI-based algorithms being used in hiring?

Gabriela: So the first are known as resume screeners, and those are pretty straightforward. Those are systems that help you as a recruiter or a company gather, index, sort, and read job applications instead of humans doing it. So this AI would screen resumes that are coming in through the portal for keywords or for skills or for certain sets of experiences that match your job description. But now there is an emerging field of more subjective tools that are forming their own secondary category. And those measure behavioral attributes to determine if you’re the right, quote unquote, “fit” for a company.
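
To make that first category concrete, here’s a minimal sketch of how a keyword screener might work. It assumes nothing about any particular vendor; the keywords, cutoff, and sample resume are all invented for illustration.

```python
# Hypothetical sketch of a first-category tool: a keyword resume screener.
# The keywords, cutoff, and resume text are invented for illustration.

JOB_KEYWORDS = {"python", "sql", "journalism", "editing", "cms"}
MIN_MATCHES = 3  # arbitrary cutoff: score below this and no human ever reads you

def screen_resume(resume_text: str) -> tuple[int, bool]:
    """Count job-description keywords in a resume and apply a pass/fail cutoff."""
    # Strip punctuation so "Python," still matches "python"
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in resume_text)
    words = set(cleaned.lower().split())
    matches = len(JOB_KEYWORDS & words)
    return matches, matches >= MIN_MATCHES

score, advances = screen_resume("Journalism editor skilled in SQL, Python, and CMS tools.")
print(f"keyword matches: {score}, advances to a recruiter: {advances}")
```

Even at this toy scale the brittleness shows: “editing” is on the keyword list, but a resume that says “editor” gets no credit for it.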

Scott: Mm-hmm.

Gabriela: And that is where things are starting to get pretty weird.

Scott: What problem is this technology trying to solve? What’s the sales pitch to HR teams and C-suite executives?

Gabriela: This was initially a pitch for efficiencies. Companies can get hundreds, maybe even thousands of applications for one single open role. So by bringing in automated AI tools, you can cut down the work of recruiters, and they can spend more valuable time with the right candidates. But increasingly, we’ve also seen a sort of secondary promise pop up with this AI software.

Scott: Mm-hmm.

Gabriela: We’ve seen software companies promise that they can make hiring not only more efficient, but also more equitable. They’re basically promising that AI can make more science-based decisions without human bias.

Scott: What does that mean? How can companies promise that?

How can AI hiring tools reduce bias in recruiting?

Gabriela: They’re saying that we can move our decision-making over to computers and basically create a future where our human preferences, our human errors, our human bias can be taken out of the equation, and that somehow computers can be beyond bias.

Scott: Right. It’s that search for objective evaluation of candidates—it sounds like a noble cause. Is that possible in practice?

Gabriela: Well, according to the experts that I’ve talked to—I have talked to people who study human rights from a legal perspective. I’ve talked to people who research ethics and AI and the possibilities of responsible AI, and for the most part, they’re telling me, “This is all snake oil,” at least where we are right now, it’s snake oil AI. And that’s because as we track them, these automated systems, these algorithmic systems have been found to make arbitrary and sometimes discriminatory decisions.

Scott: So the technology that we are using to eliminate bias can actually perpetuate it.

Gabriela: That is exactly what can happen.

Scott: So gimme a few examples of how this could all go wrong.

Gabriela: It’s all over the map. So, for one example, a German public broadcaster found that an AI analyzing your appearance on video flagged people for wearing glasses or a headscarf, or even just for having a bookshelf in the background.

Separately, a BBC Three documentary found that a vocal-analysis tool ranked people differently depending on whether they had regional accents. People have been downranked for having a women’s college or a women’s sport on their resume, or for having a name that sounds Black or Latinx.

And there’s also plenty of potential for violations of the Americans with Disabilities Act (ADA). So say you’re dealing with a tool that’s having you record interview answers on camera, and it’s conducting one of those vocal analyses. But you have a speech impediment, like a stutter. Is the AI trained to be able to handle that equitably? We don’t know about that.

Or say you’re talking with a chatbot, but you have a physical disability that limits your ability to use a keyboard for prolonged periods of time. Is the chatbot going to be able to sense that, feel that, accommodate that? Not so sure about that either.

Scott: So it sounds like you’re talking about a lot of different types of hiring software.

Can you walk me through some of the ones that job candidates might encounter if they’re applying to a job at a big company in 2023?

Which AI tools might job candidates encounter in 2023?

Gabriela: Absolutely. So I’ll start off by saying, one of the experts that I was talking to about this, his name is Ben Winters. He leads the AI and Human Rights Project at the Electronic Privacy Information Center.

And when I said, “Can you give me a comprehensive list of all these types of AI tools that are being deployed in the hiring process?” he said to me, “It’s hard to paint a complete picture of what these subjective tools do and what their impact is on people, because there are so many different vendors that offer these tools to companies big and small, you just can’t track them all.”

Scott: Hmm.

Gabriela: So it touches on an important point, which is that this is an emerging and rapidly expanding field where we’ve been able to sort of surface and find some of these tools, but there’s also a lot of transparency lacking in the field. And more and more tools are being developed.

And we’re not… we don’t have an encompassing idea of all of them.

Scott: It sounds stressful for job searchers to think that, “Oh, they’re up against a litany of new technologies that they’re not even aware of.”

Gabriela: Yep.

Scott: How do you even put your best foot forward when you’re faced with that?

Gabriela: That’s a really good question, because, you know, you’ll know when you face some of them, but you won’t know with others.

Scott: What are some of the more common new technologies that a job-seeker might encounter as they’re going through the interview process?

Gabriela: You may enter a job application process with a company that maybe deploys a chatbot that asks you about your qualifications and rejects you if you don’t meet some preset requirements.

You might be asked to record a video interview, where you answer some computer-provided questions. And then the computer will analyze your speech patterns or facial expressions or body language for subjective traits.

You might be made to play logic games that are stand-ins for personality or IQ tests. Those are said to be informed by neuroscience, but we’re not entirely sure exactly how much that checks out.

You may also have your social media scraped. And this one’s a really interesting one because companies will use that not just to background check you if you are a candidate, but they’ll also deploy it to go find ideal candidates. So just think about this, Scott, how many companies have done a deep dive on you? Just because they want you to work for them, not because you’ve even applied.

Scott: It is terrifying to think of that, given how much I post on the internet… and dumb things on the internet, so I kind of know what I’ve signed up for. But I can see it being a problem for someone who maybe forgot to set a few pictures to private on their Facebook account from 10 years ago, or had a Tumblr account that they’ve been locked out of and can’t even get into anymore… or a Pinterest board that they didn’t realize was public about their hopes and dreams and aspirations. That’s all fodder for companies to scrape and potentially discriminate against someone for.

Gabriela: Absolutely. So you should probably rethink your ability to be a reply guy these days.

Scott: My Twitter presence is what it is, but I should probably make sure that my Pinterest searches for “home bar setups” for the last 10 years are private, because that could be really embarrassing for me.

Gabriela: You know what? It really does make me think of a future where everybody is going to have to eventually pay for a service that audits and scrapes and, like, shuts down their social media presence from the past.

Scott: Right.

Gabriela: It’s just marching towards this future where we’ve got computer tools being unleashed to figure out who we are, rather than just humans.

Scott: So social media scraping sounds like something I’ve heard about before. It’s kind of been in the news since social media has existed that companies could see what’s on your MySpace or Facebook account and maybe not hire you because of that.

But some of these other tools that you mentioned are newer. The idea that a chatbot might ding you for using a period instead of an exclamation point, or that a video interview might critique your posture and dock you points because you’re not sitting up straight.

Gabriela: Exactly.

Scott: Is that the kind of opacity that we are struggling with when we think about these tools? We just don’t know what they’re using to evaluate us?

Job-seekers need transparency in AI hiring process

Gabriela: Yeah, that’s the truth. And to one of the points that you just made before, Scott, I think something that’s so compelling about this is that these subjective tools are wild because not only are they automating parts of the hiring process, they’re creating parts of the hiring process that never existed before.

And there is a total opacity about it. I spoke to Mona Sloane, who is, among other things, a senior researcher at NYU Tandon’s Center for Responsible AI. And the way that she characterizes these emerging tools is as a total black box. She really works on transparency for these tools, and the problem, she says, is that companies don’t want to admit which tools they’re using, and the software providers don’t wanna give up, you know, all of the keys to their algorithms and their intellectual property.

So what it means for us, the job candidates, just the regular people out here, is that we don’t have any kind of insight into the ways that we’re being measured and tracked. And there isn’t meaningful regulation yet on the books, though there are some agencies and some governments starting to get involved.

Scott: Right, I want to hear all about those efforts to regulate and rein in this technology, but do you think that job-seekers should have more information about what they’re up against? Should companies need to disclose what vendors they’re using for the job application process? And should those vendors give a better idea of what markers they’re using to evaluate candidates, or is that all proprietary information and we should just move on?

Gabriela: I think it’s a good ethical question!

Scott: That’s a very leading question.

Gabriela: Well, I can tell you that everyone who studies ethics in this space that I’ve spoken to says, “We need transparency.” There is too much power concentrated in the hands of these large corporations, and not enough insight for just regular people.

Scott: Coming up, we’ll talk about which companies are using algorithmic hiring tools. But first, a quick break.

We are back with Quartz’s Gabriela Riccardi, talking about algorithmic hiring. And Gabby, when we talk about who’s using these tools, are we talking about businesses of all sizes and across sectors, or is it really just giant corporations, like your Boeings and Disneys?

Which companies are using AI hiring tools?

Gabriela: You know what’s really interesting is that the highest-profile examples that you’ll find are coming out of these bigger companies, because they are developing their own in-house technologies. Amazon is a really interesting one, and I will take you through that. But increasingly, the software developers behind these AI tools are creating off-the-shelf products that any company can buy.

Scott: Mm-hmm.

Gabriela: So it is not limited to just the big, powerful ones that have a lot of money or have the wherewithal to develop their own tools. There is off-the-shelf, licensable stuff that is accessible to any business.

Scott: So tell me what happened with Amazon.

What happened with Amazon’s hiring algorithm?

Gabriela: It’s really a good cautionary example of how, rather than eliminating bias, these hiring tools can often perpetuate it. So between 2014 and 2017, Amazon tried to build its own algorithmic system to analyze resumes, like the screeners that we use today, and suggest the best hires for open roles.

Amazon trained it on 10 years of its own hiring data, and an anonymous Amazon employee called it “the Holy Grail if it actually worked.” But it didn’t, and that is probably because those 10 years of hiring data came from an existing staff that was male-dominated. So the algorithm that was spit out reportedly showed a deep gender bias.

So the word “women,” as in “women’s sports” or “women’s college,” would cause the algorithm to rank applicants lower. They couldn’t debug that, and so they dropped the project. That’s a big yikes.
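
The mechanism is easy to reproduce in miniature. In this hypothetical sketch, a simple text classifier is trained on invented “historical” hiring labels in which every resume containing “womens” was rejected, and the model dutifully learns a negative weight for that token. The data and model are toys, nothing like Amazon’s actual system.

```python
# Hypothetical sketch of the Amazon failure mode: a model trained on biased
# historical decisions reproduces the bias. All data here is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical hiring data": label 1 means a human chose to hire.
resumes = [
    "chess club captain software engineer",         # hired
    "hackathon winner software engineer",           # hired
    "womens chess club captain software engineer",  # rejected
    "womens college graduate software engineer",    # rejected
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The token "womens" appears only on rejected resumes, so it gets a strongly
# negative weight: the model has faithfully learned the bias in its labels.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```

The point generalizes: nobody codes a feature called “bias” into these systems; the skew lives entirely in the training labels.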

Scott: Yeah.

Gabriela: But there’s two interesting things to me that come out of that. First of all, that Amazon story is from 2018, but it’s still probably one of the most high-profile examples that’s cited when we talk about bias in algorithms and algorithmic hiring.

That’s made me wonder: Why don’t we have other high-profile case studies on the dangers of these tools from the last five years? And so I asked experts, you know, “Are companies just getting better at keeping these missteps quiet?” And they said, “Yes, absolutely.”

Scott: Right.

Gabriela: There’s building scrutiny on this stuff, and companies are way less willing to be transparent about the tools that they’re building now because it’s going to take a hit to their reputations if these tools turn out to be arbitrary or discriminatory.

Scott: Right, and Amazon has had big snafus in the press and a lot of criticism about its treatment of warehouse employees.

Gabriela: And there’s actually two interesting points on that. First of all, I’ve recently discovered that Amazon may now be back in the game trying to build its own algorithm a second time over.

Scott: Ooh.

Gabriela: Yeah, so Vox gained access to an October 2021 internal paper. It was labeled “Amazon Confidential.”

Scott: Of course.

Gabriela: And apparently, for a year before that paper, Amazon had been working to hand over some of its recruiters’ tasks to AI and replace people in recruiting with algorithms instead.

Scott: First of all, I totally understand why Amazon would want some software to help them with hiring. They’re an enormous company, and it takes a ton of time and money to do the hiring that they do. But was the problem with the original Amazon experiment that they fed the algorithm their longstanding, biased, very human hiring results, and it kind of learned what human managers at Amazon wanted in a candidate and what they didn’t?

Gabriela: You got it exactly right. I think the point to underscore here is that whenever we talk about computers being “beyond bias,” we have to remember that these machines are trained on human data, and humans themselves are biased. So you can’t necessarily build a machine that’s created by humans without human attributes.

Scott: Right. We are imbuing our own biases in some ways into the algorithms that we create.

Gabriela: Exactly. And people I’ve talked to who do data day in and day out and are fighting for equity around these emerging tools, emerging algorithms, emerging artificial intelligence, they emphasize that it’s all about the data. The data has human bias built in.

Scott: Right. We can’t code our way out of a bias problem. That is exactly it. We could just code our way into another one. We’re talking in broad strokes about different software that companies are using to make hiring decisions easier and more efficient and smarter. So what companies are we actually talking about that provide that software? Who are the main players?

Who are the main software providers in AI-based hiring?

Gabriela: So there’s a whole slate of vendors with, like I said, these off-the-shelf versions of products. And again, they’re available to companies of all sizes. I’ll take you through two big ones that I encountered quite a few times over. So one is called Pymetrics. That is the kind of software that makes you play logic games to determine if you’re an organizational fit.

They say that they’re grounded in neuroscience. They put out a lot of reports about their data. They’re actually quite good at talking to the public. They’ll appear at hearings, they’ll talk to journalists. They actually came to the Quartz offices in 2019, and we wrote a profile on them questioning whether this is the future of hiring.

But they’re really interesting with these logic games. They have high profile clients, and many of these software providers do. So Pymetrics… just on the list, they’ve had clients like Unilever, Nielsen, LinkedIn, Accenture, Mastercard, Boston Consulting Group…

Scott: So what does a Pymetrics hiring game look like?

Gabriela: So I haven’t gone in myself to play, but I have been told they can seem pretty arbitrary from the outside, or at least the ones that I have read about seem pretty arbitrary. For example, one of them has you hit the space bar when a circle turns green instead of red.

Scott: That seems completely useless to me.

Gabriela: I don’t know what that’s going to tell you about anybody.

Scott: I think if I was in a job application and I had to play, like, games to test my fine motor skills, I would just remove myself from the process. It also doesn’t sound very friendly to people of all types of abilities.

Gabriela: Truly. I mean, imagine you’re color blind. Imagine you have hands that shake. Imagine you can’t use the keyboard for an extended period of time. Imagine, imagine, imagine, imagine.

Scott: Yeah. It seems like there are more aspects of the job process that eliminate you from contention for arbitrary reasons and without, kind of, a way to explain yourself.

Gabriela: That’s how I feel about it, because when you’re dealing with a human, you can ask for accommodations, but what if you get this far in the hiring process and you haven’t even interacted with a human yet? How do you connect with an unfeeling computer? I don’t think there’s a way.

Scott: Maybe the jobs that they’re hiring for are space bar pressing engineers.

Gabriela: Indeed. That’s how they drop all of the packages in the warehouse at Amazon.

Scott: Right, it’s just a big crane game.

Gabriela: So in another Pymetrics game, you have to hit a key on your keyboard as fast as you can until the screen tells you to stop. So the game constantly alternates between start, stop, start, stop. And supposedly this is going to assess your ability to follow instructions and whether you’re impulsive or not.
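
How a game like that gets turned into a score is exactly what the vendors don’t show us, but here is one purely hypothetical way it could work: log each keypress against whether the screen said “start” or “stop” at that moment, then reduce the log to an “impulsivity” number. Both the log and the formula below are invented.

```python
# Hypothetical scoring sketch for a start/stop keypress game. The event log
# and the "impulsivity" formula are invented; real vendors don't publish theirs.

# Screen state at the moment of each recorded keypress.
keypresses = ["start", "start", "stop", "start", "stop", "start", "start"]

total_presses = len(keypresses)
false_presses = sum(1 for state in keypresses if state == "stop")

# One arbitrary way to turn behavior into a "trait": the fraction of presses
# made while the screen said stop. A candidate has no way to know this rule.
impulsivity_score = false_presses / total_presses
print(f"presses: {total_presses}, during 'stop': {false_presses}, "
      f"impulsivity score: {impulsivity_score:.2f}")
```

The leap from a keypress rate to a personality trait is exactly the kind of unvalidated inference the experts quoted here are skeptical of.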

Scott: I am very impulsive. I would just fail all of these games, Gabby. You said there were two major vendors. Pymetrics is one. What’s the other one?

Gabriela: Another big one in the game is called HireVue. Those are the ones that do video assessments, right? So say you go click, click, click onto the computer, you get put into a portal where the computer will feed you an interview question, and then you’ve got a minute to film your answer.

Scott: Mm-hmm.

Gabriela: HireVue once used facial analysis as part of its tool, and I have seen it compared to modern-day phrenology. It did drop the facial analysis in 2020, but as recently as March 2021, it said that its platform had hosted more than 20 million video interviews. So that just gives you a sense of scale here, right?

Scott: If I have a product, I don’t want it to be compared to modern-day phrenology.

Gabriela: You know, the other thing that’s wild is that some of these video assessments… we’re talking subjectivity. They’ll look at your body language, maybe your facial patterns, and they’ll also analyze your voice. And they’ll rank you on a scale of how open you are or how agreeable you are, maybe how neurotic you are.

And it’s sort of like, how do even humans rank these things? In what universe can a computer just assess this from you, from looking through your grainy webcam, um, in a one-minute video? It’s just kind of preposterous.

Scott: Right. Testing personality traits sounds pseudoscientific and might lend itself to discrimination. It sounds like some smart regulation could be helpful to make sure that everyone’s got a fair shot at a job.

The role of regulation around AI tools

Gabriela: Yeah, absolutely. And thankfully, there is some regulation on the way. Last spring, the US Department of Justice and the Equal Employment Opportunity Commission, or the EEOC, issued guidance on what businesses and government agencies need to do to ensure that their use of AI in hiring complies with the Americans with Disabilities Act. What I thought was really interesting was that they characterized this as “sounding an alarm,” so they’re alarmed about it. This is really high on their radars. EEOC chair Charlotte Burrows said, “We cannot let these tools become a high-tech pathway to discrimination.”

And then in other pieces of news, there are also two major pieces of legislation being enacted right now. I mean, they’re specifically aimed at AI tools in hiring. So one in New York City prohibits employers from using AI to screen candidates unless the employers first conduct an audit to determine whether there’s bias present in the tool.
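
The audits that law requires revolve around comparing selection rates across demographic groups. As a hypothetical sketch of the arithmetic at the heart of such an audit, here is the classic four-fifths-rule check drawn from longstanding EEOC guidance: a group selected at less than 80% of the top group’s rate is a red flag. The counts are invented.

```python
# Hypothetical sketch of the core arithmetic in a disparate-impact bias audit.
# Applicant and selection counts are invented for illustration.

applicants = {"group_a": 200, "group_b": 180}
selected   = {"group_a": 50,  "group_b": 18}

rates = {g: selected[g] / applicants[g] for g in applicants}
top_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / top_rate
    # EEOC rule of thumb: an impact ratio below 0.8 flags potential adverse impact
    status = "flag: potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {status}")
```

New York City’s law leaves the exact methodology to auditors, so treat this as an illustration of the concept, not the statute’s formula.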

Scott: OK.

Gabriela: And then secondly, overseas in the EU, the EU AI Act is expected to pass later this year. I mean, that one’s interesting because it’ll rank tech tools in various categories of risk. So if you enter the highest risk category, you get immediately pulled from the market and you can’t go back on the market until you de-escalate your risk.

So these are interesting measures. Critics are split on whether they go far enough. Regulation around AI tools is just one solution, and legal experts also point to broader privacy laws, like the ones on the books in Colorado and being written in California, as more likely to effect meaningful protections.

So, there are other tactics, too. The optimists I’ve talked to, like Mona Sloane, that senior researcher on AI ethics, say there can be smart and ethical ways to use hiring algorithms, but that involves eliminating the “black box” she describes. We gotta increase transparency around this, and we need to put effective regulation in place.

And secondly, she says, AI developers should also be working collaboratively with social scientists and with the communities affected by these applications. Lastly, if regulation is going to be put in place, agencies need to fund work that measures its efficacy and, in turn, enforces the regulations. So Mona Sloane said there need to be independent bias audits.

Those could be professional organizations, those could be think tanks. Those could be active researchers who are already doing this work. But if we’re not funding independent auditors, she says, there is no accountability here.

Scott: Right. And that’s really what we need: accountability for these black box decisions.

Gabriela: Exactly.

Scott: Is there a smart and ethical way to use hiring technology, or are we really just letting computer code run wild with our worst impulses?

Gabriela: You know, this is really the fundamental tension between tech optimists and tech pessimists in this space. The pessimists are really underlining that we can never totally eliminate human bias, and I agree with that.

But the optimists say, “You know, with more meaningful transparency and more meaningful regulation, we can use it in a way that’s more responsible, as long as everybody is in on this information and we are actively working towards equitable outcomes.”

It is a powerful technology, and we’ve seen the ways in which AI can augment our human work. We’ve seen the ways in which AI can free up our time so that we can, you know, commit ourselves to more meaningful tasks. So that’s really kind of the balance that we’re walking here. We are ultimately always going to be talking about bias and the risk of discrimination. Some people see pathways out that we can use it more responsibly.

Scott: There’s going to be bias in any human interviewing that doesn’t rise to the level of, you know, full-blown discrimination. But should we be holding technology to a higher standard than we hold humans?

Gabriela: I think that’s the great opportunity of the situation, right? We can regulate technology in a way that is more rigorous than regulating humans behind closed doors.

Scott: Right.

Gabriela: So that’s maybe my optimistic take, that if we’re looking at these tools and we’re studying them and we’re holding them to higher degrees of accountability, maybe we get more equitable outcomes in the ways that we get our jobs and do our work.

Scott: Right. These are also business products. They’re revenue makers. They promise something.

Gabriela: Mm-hmm.

Scott: We as a society regulate business practices in a way that is more stringent than we regulate human impulses. So perhaps we should have a higher standard for software that claims to eliminate bias or hire equitably.

Gabriela: I would co-sign that. We should all be held to higher standards, right? But, if it’s easier to do it on technology, then I’m all for it.

Scott: Gabby, thank you so much for doing this. I learned so much.

Gabriela: It was great talking, and I hope that you, Scott, never have to apply to another job again.

Scott: I promise you I’ll be at Quartz until my job is automated and they can train the AI to host the Quartz Obsession podcast.

Gabriela Riccardi covers work for Quartz.

The Quartz Obsession is produced by Rachel Ward with additional support from Executive Editor Susan Howson and platform strategist Shivank Taksali. Our theme music is by Taka Yasuzawa and Alex Suguira. This episode was recorded by Eric Wojohn at Solid Sound in Ann Arbor, Michigan, and at G/O Media’s headquarters in New York City.

If you like what you heard, leave us a review. We love hearing what you think about the show. Tell your friends about us. Then head to qz.com/obsession to sign up for Quartz’s Weekly Obsession email and browse hundreds of stories about everything from crossword puzzles to confetti to that bear that never had to work a day in his life, Winnie the Pooh.

Quartz is a guide to the new global economy for people in business who are excited about change. I’m Scott Nover. Thanks for listening.

Rachel, Susan, are you missing something?

Rachel (producer): Let’s just get a goodbye.

Scott: [sings] Goodbye…

Gabriela: [sings] Goodbye. [laughs] I don’t remember what it sounds like. If you put me singing on this final cut, Rachel…

Scott: [singing “So Long, Farewell” from The Sound of Music, sort of] Do, do, do do, do, do, do, do, do do do. Goodbyyye.
