Tim Lesnick’s Featured Article, GOSHRM Newsletter Mar/Apr Edition 2024
As a headhunter with over 20 years in the staffing industry, I’m often asked to share my opinion on how AI tools influence resume screening. Many candidates fear that these tools will rule them out before their resumes are ever seen by human eyes. As immediate automated “no” responses grow nearly as common as radio silence as a form of rejection, many folks are blaming AI. And as companies pursue more inclusive hiring programs, the question arises: how might the use of these tools negatively impact DEI programs?
In today’s recruitment landscape, AI tools are increasingly utilized for resume screening, sparking concerns about their impact on diversity, equity, and inclusion (DEI) initiatives. Despite their potential benefits, AI algorithms pose significant challenges that organizations must address to foster fair and inclusive hiring practices.
One of the primary concerns surrounding AI-powered resume screening is the perpetuation of bias. Algorithms trained on historical hiring data may inadvertently replicate, and even amplify, biases present in society, leading to discriminatory outcomes. Candidates from underrepresented groups are particularly vulnerable to being overlooked or unfairly judged by these biased algorithms.
AI tools may also struggle to accurately interpret and evaluate resumes that deviate from traditional formats or languages. Candidates with diverse educational or career backgrounds, as well as those with non-standard resume structures, risk being disadvantaged by systems that prioritize conformity over inclusivity. Consider, for example, how an algorithm might misjudge an employment gap taken by a woman becoming a mother for the first time, or by someone recovering from a severe accident or an extended health absence such as we saw in 2020 due to Covid. Additionally, the lack of transparency in AI algorithms further exacerbates these concerns, as opaque decision-making processes make it difficult to identify and address biases effectively.
Language and cultural biases further compound the challenges of AI-driven resume screening. Candidates whose resumes are written in languages or formats different from the majority may face misinterpretation or exclusion, as words, phrases, and grammar can get ‘lost in translation’ and cultural differences come into play. The very cultural and linguistic distinctions that might otherwise mark an excellent applicant become more likely grounds for rejection by AI, hindering efforts to foster diversity and equity in the workforce.
Despite these challenges, organizations can take proactive steps to promote fairness, transparency, and inclusivity in AI-driven hiring. Regular auditing and refinement of AI algorithms can help minimize biases, while providing opportunities for candidates to showcase diverse experiences beyond traditional resumes can enrich the applicant pool. Human oversight and intervention are essential in mitigating biases and ensuring equitable outcomes, underscoring the importance of integrating AI tools with human judgment and empathy. One question to ask yourself: how consistently is your organization truly implementing such safeguards?
As humans, we too bring our own biases to the hiring process. Hiring managers frequently favor candidates who look, speak, and/or identify like them, a tendency commonly referred to as “hiring in your own mirror image.” Sometimes the opposite is also true, where a hiring manager discriminates against applicants who share the same professional, personal, or ethnic background as they do. Another common form of discrimination is simply discarding resumes with names that cannot be easily pronounced. In recent years, many applicants born outside the U.S. have tried to combat this by adopting a more culturally familiar, yet fake, name on their resumes. While other biases certainly exist, some easier to spot than others, it would seem that relying on AI for a more objective screening process could be beneficial. That may be true in some cases; however, when I asked ChatGPT for input on this topic, it gave me the very limitations listed in this article. Clearly, AI hasn’t learned to plead the fifth!
I do not believe we could ever truly replace the core human functions in the hiring process, even for something as routine as resume screening. As efforts around DEI programs increase, leaning on AI tools that lack genuinely human characteristics and behaviors, or a keen eye for the story behind a resume, is a real risk. While we are becoming more aware of the discriminatory practices embedded in our own biases, we may be less likely to spot the same weaknesses in AI, perhaps assuming its objectivity. If you must consistently audit the tool, ensure all applicants are being equally considered, and account for DEI requirements…is it possible that trying to use AI in this way is more trouble than it’s worth? With all new technologies, we must remain skeptical of their accuracy and weigh critical company objectives before relying too blindly on the results. The AI was able to call out its own limitations and bias, so the question remains: how well are you and your organization able to do the same?
Check out the full newsletter here: https://issuu.com/goshrm/docs/goshrm_2024_mar-apr_newsletter