Hiring is at a turning point. AI is becoming part of more hiring workflows, but people leaders remain divided, not because they oppose technology, but because they want to ensure fairness, transparency, and sound judgment remain central to the process.
To understand how AI can support better, more equitable hiring, we spoke with Olive Turon, Head of People and Culture at TestGorilla. Olive shared her perspective on how AI is misunderstood, where it genuinely adds value, and why human judgment must remain at the centre of hiring decisions.
Catch our highlights from the conversation below.
Q: Why do so many conversations about AI in hiring get stuck in a “humans vs machines” mindset?
Olive: “I think it’s because hiring has always been a deeply human process. From the candidate’s side, applying for a job is an act of vulnerability.
"Whether you’re taking an assessment, sending in your resume, or jumping on a screening call, you’re essentially saying, ‘I believe I’m the right person for this. Here’s who I am. What do you think?’ That’s an emotionally loaded moment, and people want to feel seen, understood, and evaluated fairly.
“From the employer’s side, the human layer matters just as much. Yes, technical skills are non-negotiable, but the questions hiring teams ask most often are far more nuanced.
"Why does this person want this role? Will they thrive in our environment? Will they add to our culture? Even the most skilled hire can struggle if they don’t feel that sense of purpose or belonging.

“These are hard things to measure, and for decades we’ve relied on experience, instinct, and conversation to figure them out. So when a powerful and relatively opaque technology like AI enters the process, it’s not surprising that people default to a ‘humans vs machines’ narrative.
"Candidates worry a faceless algorithm will make decisions without understanding them. Recruiters worry that their judgment, their craft, and even their role in the process might be diminished.”
Q: What fears or misconceptions do you hear most often from HR and people leaders about AI?
Olive: “When I talk to HR and people leaders about AI, the fears they bring up are almost always rooted in the same thing: a huge sense of responsibility. Hiring affects people’s lives, and HR is on the front line of those decisions.
“So when you introduce a powerful technology into that space, it’s natural for people to worry about what they might lose control of.
“The first thing I hear is uncertainty around regulation and fairness. Leaders want to do the right thing, but the rules are shifting so quickly that they worry they’ll get it wrong.
“Nobody wants to be the organization that accidentally introduces bias or misuses candidate data because they didn’t have the right guardrails in place.
“And then there’s the really human one: job security. Recruiters and HR teams sometimes ask, ‘If AI is doing resume screening or scoring applications, what’s left for me?’ It’s a fair question, especially when the hype cycle has been loud about automation.”
Q: If AI isn’t there to make the final hiring decision, what is its ideal role?
Olive: “I think AI’s real value is in helping you collect and clarify the evidence, not in making the judgment call itself. It’s there to surface potential, not to gatekeep.
“The way we think about it at TestGorilla is that AI should do the heavy lifting on the parts of hiring that are, honestly, quite mechanical. Building a strong job description. Helping you choose the right evaluation tools and questions for the role. Scoring responses consistently. Generating explanations for why a score came out the way it did.
“But the criteria? The final decision? The moment when you say, ‘Yes, this person is right for us’? That stays human. Always.
“And crucially, humans should be able to override AI-generated scores when their judgment or intuition tells them something the algorithm missed. That’s not a failure of the system. That’s the system working exactly as it should.
“In practice, this means AI can help you see patterns you might have missed. It can parse a resume and flag not just job titles or years of experience, but actual evidence of impact.
"Did this person implement new tools in their workflow? Did they lead an initiative that produced measurable results? Those are the kinds of signals that matter, and AI can surface them quickly across hundreds of applications."
Q: How can AI help hiring managers challenge their biases without losing the human element?
Olive: “I think we have to acknowledge that the traditional hiring process has been built on filters. Whether it’s explicit qualifications like four-year degrees, subtler signals like which college you graduated from or who you worked for, or, let’s be honest, even your sex and ethnicity.
“These are all filters that can bias the decision-making process and take us further away from answering the question that actually matters: who is the best fit for the job?
“What we call ‘gut feel’ is often just bias reacting to surface-level signals. A polished resume. Work experience at a recognizable company. It’s why the worst hire you ever made can look great on paper.
"Research shows we're charmed by first impressions, we prefer people who look and sound like us, and we're swayed by factors like attractiveness, weight, gender, even pregnancy, that have zero bearing on job performance.
"One landmark study from 2004 found that resumes with white-sounding names received 50% more callbacks than identical copies with Black-sounding names.
"A 2021 National Bureau of Economic Research paper confirmed these biases are alive and well, finding persistent racial and ethnic gaps in callback rates. We are victims of many biases, both conscious and unconscious, that shape our decisions.
"When you use AI and feed it the right data, data that's been stripped of the demographic markers that can lead to the kind of bias problems Amazon ran into with their hiring algorithm, you can start to see candidates differently. You're looking at evidence of impact, not just credentials.
"The key is clean data and regular bias audits. If you train AI on historically biased hiring data, you'll just automate the bias. But if you design the system to focus on job-relevant evidence and actively strip out demographic proxies, AI can actually help you dig past those biases and find the true diamonds in the rough.

"When a hiring manager's "gut feel" conflicts with objective data, it forces a conversation. It allows the human to stop being a biased data processor and instead become a data interpreter.
"They're no longer stuck making snap judgments based on incomplete or surface-level information. They're seeing the full picture of a candidate and making more informed decisions.
“At TestGorilla, we believe AI can help usher in an era of true talent discovery. Instead of filtering out talent based on one or two datapoints, the AI focuses on collecting as much evidence as possible to surface matches that might not have been apparent if you went on a filter basis.”
Q: What’s the biggest lesson you’ve learned about helping people leaders use AI effectively?
Olive: “The biggest lesson I’ve learned is that this is fundamentally a change management challenge, not a technology one. You can have the most sophisticated AI tool in the world, but if you hand it to a hiring manager who’s been making decisions based on gut feel for twenty years, they’re not going to trust it.
“What actually works is getting specific. I almost think of it like writing a job description for the AI itself. What tasks is it responsible for? What decisions does it support versus make? Where is human judgment still essential? When leaders understand those boundaries clearly, their comfort level shifts.
“Transparency is also critical. If people don’t understand how the AI is generating recommendations, skepticism sets in fast.”
Q: How do you train teams to interpret AI insights critically rather than treat them as absolute truth?
Olive: “You train them by embedding a multi-measure philosophy into your process. AI is never a magic 8-ball; it’s just one data point in a whole-person picture.
“You upskill teams by training them to be data interpreters, not data followers. We teach them to look for patterns and context. How does the AI score compare to a cognitive ability test? What’s your view on the candidate’s strengths? Do they match the picture the AI paints?
“By using multiple, science-backed evaluation tools, you’re training your team to look at a rich, holistic profile, which prevents them from treating any single AI score as absolute truth.”
Q: When is it appropriate for a human to override an AI recommendation?
Olive: “More often than we expect, as long as there’s a clear reason behind it. The hiring manager is the subject matter expert. They understand the role, the team, and the environment in a way AI can’t.
“What AI can do is help structure evaluation, surface patterns, and apply standards consistently. It’s a support tool, not the final authority.
“At TestGorilla, we’ve designed our system so that every score comes with explained reasoning tied to role-relevant criteria. That way, hiring managers can see how the score was generated and compare it to their own judgment.”
Q: How is AI changing what it means to be an effective people leader?
Olive: “It reinforces their position as a strategic leader. A tactical leader is buried in resumes, refereeing subjective debates, and managing a process.
“The AI-augmented strategic leader is a talent intelligence expert. They use the time AI gives them back to have strategic conversations with the C-suite, backed by objective data.
"They can prove their hiring decisions are fair and effective. Whether it be through reduced mis-hires, faster time to hire, or other objective metrics.

“Furthermore, as AI evolves, we will see different roles emerging. A few years from now leaders will no longer be recruiting for the same set of technical skills. AI-based skills-first hiring gives you the tools to hire for those new roles.
"Because it allows you to test for underlying, universal skills that are important in an AI-first job landscape: such as AI fluency and literacy, ethical use of AI, digital agility, systems thinking, and Human-AI collaboration.”
Q: If HR leaders feel overwhelmed by AI, where should they begin?
Olive: “If I had to give one piece of advice, it would be to start with the pain. Not the tech. Not the hype. Look at your hiring process and ask: Where’s the friction? What’s breaking down?
“Maybe you’re drowning in resumes and struggling to spot who actually fits the role. Maybe strong candidates are getting filtered out because their experience doesn’t look traditional. Maybe you keep mis-hiring because early screening is too surface-level.
“Those are problems AI can help with, if you pick the right kind. If the issue is resume overload, maybe it’s using AI to score resumes against structured, role-specific criteria instead of keyword matching.
“If the problem is missing non-obvious talent, maybe it’s using AI to surface candidates based on demonstrated skills, not just past job titles. If it’s mis-hires, maybe it’s AI summarizing assessment results or interviews so you’re not relying on first impressions.
“But the pain has to come first. That’s what tells you what kind of AI to look for, what problem it needs to solve, and how to know if it’s actually helping. Otherwise, you’re just layering tech on top of noise.”
The future
AI is reshaping the hiring landscape, but not by replacing human judgment. As Olive highlights, AI works best when it:
- supports fairness
- strengthens evidence
- and frees hiring teams to focus on context, motivation, and potential
With thoughtful adoption and clear boundaries, AI can help create hiring processes that are more equitable, more efficient, and more human.
Want to connect with top brands like TestGorilla? Secure your spot at our upcoming Chief People Officer Summit (December 2nd).👇