AI in Job Hiring Backfires: Study Shows Lower Chances of Selection

Have you turned to artificial intelligence to polish your resume and cover letter, hoping to stand out in a competitive job market, only to face rejection in the initial rounds? Or are you an employer who used AI to sift through mountains of applications, but ended up with candidates who didn't match expectations? The problem, experts suggest, might lie in the growing over-reliance on the technology itself.

The AI Hiring Paradox: More Polish, Less Distinction

The widespread adoption of artificial intelligence is transforming how Americans look for jobs, even as the labour market shows signs of cooling down. From AI-conducted interviews to chatbot-written application materials, technology now touches nearly every phase of recruitment. But is it delivering better results? The evidence points to a surprising contradiction.

In 2025, a survey by the Society for Human Resource Management found that over half of organisations use AI tools for hiring. Meanwhile, about one-third of ChatGPT users have sought the OpenAI chatbot's help for job applications. However, recent research indicates that candidates who rely on AI during the application process are actually less likely to secure a position. This is happening while companies are overwhelmed by a deluge of applications.

"The ability for companies to select the best worker today may be worse due to AI," researcher Anais Galdin from Dartmouth told CNN Business. Galdin, along with Jesse Silbert of Princeton University, analysed tens of thousands of cover letters on the platform Freelancer.com. They discovered that after ChatGPT's launch in 2022, cover letters became longer and more refined. Ironically, employers began to place less importance on them, finding it harder to identify top talent from a homogenised pool. This led to a drop in hiring rates and even lowered average starting wages.

"If we do nothing to make information flow better between workers and firms, then we might have an outcome that looks something like this," Silbert remarked about their findings.

A Vicious Cycle of Automation and Bias

As application numbers soar, companies are automating interviews at a rapid pace. An October survey by recruitment software firm Greenhouse revealed that 54% of US job seekers have participated in an AI-led interview. While virtual interviews became normal during the 2020 pandemic, many employers now use AI systems to conduct these sessions, even though human subjectivity has not necessarily been removed from the final decision.

"Algorithms can copy and even magnify human biases," warned researcher Djurre Holtrop, who studies asynchronous video interviews and AI in hiring. "Every developer needs to be wary of that."

Daniel Chait, CEO of Greenhouse, believes the escalating use of AI by both job applicants and employers has created a damaging loop. "Both sides are saying, 'This is impossible, it’s not working, it’s getting worse,'" Chait told CNN.

Regulatory Pushback and the Human Cost

Despite these issues, the adoption of recruitment technology continues unabated, with one projection estimating the market will grow to $3.1 billion by year's end. However, resistance is building. Lawmakers, labour groups, and workers are increasingly concerned about potential discrimination.

Liz Shuler, president of the AFL-CIO labour union, called AI-driven hiring "unacceptable." "AI systems rob workers of opportunities they’re qualified for based on criteria as arbitrary as names, zip codes, or even how often they smile," she stated.

States like California, Colorado, and Illinois are enacting laws to set standards for AI use in hiring. However, a recent executive order signed by US President Donald Trump has cast doubt on the future of state-level oversight. Samuel Mitchell, a Chicago employment lawyer, noted the order adds to "ongoing uncertainty" but doesn't override state laws. He emphasised that existing anti-discrimination laws still apply to companies using AI, and legal challenges are already arising.

In one such case, supported by the American Civil Liberties Union, a deaf woman is suing AI recruitment company HireVue, alleging its automated interview failed legal accessibility standards. HireVue denied the claim, stating its technology reduces bias through a "foundation of validated behavioral science."

For job seekers who cherish human connection, this shift is disconcerting. Jared Looper, an IT project manager and former recruiter in Salt Lake City, described his experience with an AI-led interview as "cold"—so much so that he initially hung up on the automated system. He expressed concern for those struggling to adapt to an environment where pleasing an algorithm is key. "Some great people are going to be left behind," he said.