By Swapnil Mishra - March 15, 2022 4 Mins Read
AI systems can significantly reduce the unconscious bias that exists in employment and recruiting procedures while also improving efficiency and freeing human recruiters to focus on higher-value work. But it is critical to ensure AI is used fairly, responsibly, and ethically.
Artificial intelligence (AI) has transformed talent acquisition, resolving some of the industry's most stubborn problems, such as managing high application volumes, and significantly levelling the playing field for candidates. Many companies now screen and assess candidates using AI, and this trend will only accelerate in the aftermath of the pandemic as remote hiring becomes the norm.
However, AI-assisted hiring systems present their own set of challenges and limitations. Every technology is susceptible to occasional errors, but when an error occurs repeatedly and affects a particular group of candidates adversely, it is a strong indication that systemic bias has developed.
So how do business leaders ensure that the AI technology enhances, rather than impedes, the ethics and equity of the hiring process?
An ethical AI model is never put into production without undergoing rigorous testing designed to detect and eliminate hiring bias. Well-documented instances of bias in major companies' hiring technology serve as cautionary tales of what happens when the proper safeguards are not in place from the start.
The ideal scenario would be to have two versions of the same model: one that has already been subjected to extensive equity testing and is in production, and another that is not in production but is designed to learn from new data continuously.
This learning model should be evaluated continuously, with the new version pushed to production only after all factors contributing to equity issues have been eliminated. This is frequently accomplished by examining the training data over time and retraining the model on a newly scrubbed dataset until the issue is resolved.
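The promotion gate described above can be sketched in code. This is a minimal, hypothetical illustration, not a description of any vendor's system: the group labels, the `audit` data, and the use of the four-fifths adverse-impact rule as the equity check are all assumptions chosen for the example.

```python
# Hypothetical sketch: gate promotion of a retrained "shadow" model on an
# equity check before it replaces the production model.

def selection_rates(decisions):
    """decisions: list of (group, passed) tuples -> selection rate per group."""
    totals, passes = {}, {}
    for group, passed in decisions:
        totals[group] = totals.get(group, 0) + 1
        passes[group] = passes.get(group, 0) + int(passed)
    return {g: passes[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """Adverse-impact check: every group's selection rate must be at
    least `threshold` times the highest group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Illustrative audit set: the shadow model's screening decisions
# on held-out applications, tagged by demographic group.
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]

if passes_four_fifths(audit):
    print("promote shadow model to production")
else:
    print("keep current model; retrain on scrubbed data")
```

In this toy audit set, group B is selected at one third the rate of group A, so the check fails and the shadow model stays out of production until retraining resolves the disparity.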
Recruiters are often unable to review a significant percentage of applications, but AI can assist them in identifying best-fit candidates they would have missed otherwise. While AI can streamline and improve hiring processes significantly, it should never completely replace human decision-making. For instance, AI-based recommendations can prioritize candidates for outreach but not eliminate their applications completely.
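The "prioritize, don't eliminate" principle can be made concrete with a short sketch. The candidate names and `fit` scores below are invented for illustration; the point is only that the model's score reorders the list without dropping anyone from it.

```python
# Hypothetical sketch: an AI fit score orders candidates for recruiter
# review but never removes anyone from the applicant pool.

def prioritize(candidates, score):
    """Return every candidate, sorted by predicted fit (highest first).
    Nobody is filtered out; the recruiter still sees the full list."""
    return sorted(candidates, key=score, reverse=True)

applicants = ["ana", "ben", "chen"]
fit = {"ana": 0.4, "ben": 0.9, "chen": 0.7}.get  # illustrative scores

ordered = prioritize(applicants, fit)
print(ordered)  # ['ben', 'chen', 'ana'], same people, new order
```

A filtering version would instead drop low scorers entirely, which is exactly the design the article warns against.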
Human judgment and intuition will always be critical in the hiring process, as algorithms can occasionally overlook atypical candidates who would be ideal for the job. Certain candidates may have incomplete resumes or a smaller digital footprint in terms of professional accomplishments. AI-based models may overlook them because the keywords used to screen for candidates may not appear.
The solution is to interview candidates for the same amount of time, with the same set of questions and the same opportunities to respond, a process referred to as a constrained environment. This helps level the playing field and mitigate biases in the process.
It is becoming increasingly easy for candidates to game hiring processes by weaving keywords into their resumes to fool the algorithm into believing they are the most qualified for the role. Similarly, popular psychometric tests are becoming increasingly vulnerable to manipulation. This is why it is always preferable to use a variety of assessments and focus on potential rather than on hard skills and experience.
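A tiny sketch shows why naive keyword screening is so easy to game. The keyword set and resume snippets are invented for illustration: stuffing the right terms inflates the score regardless of real experience.

```python
# Hypothetical sketch: a naive keyword screener scores a resume by how
# many target terms appear, so keyword stuffing beats genuine experience.

KEYWORDS = {"python", "kubernetes", "leadership"}

def keyword_score(resume_text):
    """Count how many target keywords appear in the resume."""
    words = set(resume_text.lower().split())
    return len(words & KEYWORDS)

honest = "Five years building services in Python"
stuffed = "python kubernetes leadership python kubernetes leadership"

print(keyword_score(honest), keyword_score(stuffed))  # 1 3
```

The stuffed resume outscores the honest one three to one, which is why the article recommends combining multiple assessment types rather than relying on keyword matches alone.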
Artificial intelligence and natural language processing-driven behavioural assessments conducted during interviews, or via video and audio recordings, can help determine whether a candidate is suitable for the role. These technologies use linguistics-based techniques to gauge behavioural competencies such as emotional and social intelligence, openness to new ideas, and leadership. This can increase hiring equity, particularly regarding factors such as age and economic background.
At their best, AI-driven hiring tools are invaluable for recruiters and human resource departments looking to improve the efficiency and equity of their hiring processes. However, this technology must be continuously tested and safeguarded throughout its development, production, and deployment to avoid introducing new biases rather than eliminating existing ones.
While AI assists in the development of the most ethical hiring systems, it ultimately comes down to human decision-making to uncover human potential.
Swapnil Mishra is a Business News Reporter with OnDot Media. She is a journalism graduate with 5+ years of experience in journalism and mass communication. Previously Swapnil has worked with media outlets like NewsX, MSN, and News24.