Hiring is time-consuming, costly, and highly consequential for both employees and companies. To improve this process, employers have begun leveraging algorithmic techniques to identify quality candidates. But a question often comes up: do hiring algorithms prevent bias, or amplify it?
On the surface, algorithmic screening tools seem like an appealing replacement for biased human evaluations. However, experts have begun to realize that these tools reproduce, and sometimes magnify, the human biases found in the datasets they are trained on.
Algorithms do not question the human decisions underlying a dataset – they attempt to replicate past decisions, and this can lead them to reproduce the very human biases they were intended to remove in the first place.
For instance, Amazon abandoned its experimental AI recruiting system after it began penalizing resumes that contained the term “women’s” or the names of women’s colleges. The software had taught itself to favor resumes from male candidates over those of their female counterparts.
Furthermore, a study by the National Bureau of Economic Research found that resumes with white-sounding names like ‘Greg’ and ‘Emily’ received more interview callbacks than otherwise identical resumes with Black-sounding names like ‘Jamal’ and ‘Lakisha’.
Removing Bias from Hiring Algorithms
The first step towards removing bias introduced or magnified by hiring algorithms is to let go of the idea that algorithms are perfect. Employers need to look at both the positive and the negative outcomes of hiring algorithms to build a better process going forward.
So, here are a few steps that recruiters can follow to make fair and unbiased decisions while relying on hiring algorithms:
Feed a diverse dataset to your hiring algorithm
Hiring algorithms do not act alone; they hone in on patterns from the dominant group within their input datasets. If their results show bias, the data that was fed to them contains the same bias.
Hence, it is important to remove bias from the data fed to hiring algorithms in order to remove it from the outcomes. A more diverse dataset needs to be used to train the AI-based hiring algorithm.
If a diverse dataset is not available, the AI can be trained to optimize for factors representing underrepresented groups. When guarding against racial discrimination, for example, the model can be trained to give equal weight to non-white names as well.
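One common way to reduce dataset skew before training is oversampling underrepresented groups so each appears as often as the largest one. The sketch below is a minimal illustration, not a production technique; the `gender` field and the records are hypothetical:

```python
from collections import Counter
import random

def rebalance(records, group_key, seed=0):
    """Oversample smaller groups (with replacement) until every group
    appears as often as the largest one. `records` is a list of dicts;
    `group_key` names the demographic field (illustrative only)."""
    rng = random.Random(seed)
    groups = {}
    for record in records:
        groups.setdefault(record[group_key], []).append(record)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Draw extra copies for groups smaller than the largest one.
        balanced.extend(rng.choice(members)
                        for _ in range(target - len(members)))
    return balanced

# Hypothetical skewed hiring history: 8 male records, 2 female records.
history = [{"gender": "M", "hired": 1}] * 8 + [{"gender": "F", "hired": 1}] * 2
balanced = rebalance(history, "gender")
counts = Counter(record["gender"] for record in balanced)
print(counts)  # each group now has 8 records
```

Simple oversampling like this only balances group counts; it does not fix biased labels (e.g., past hiring decisions that were themselves discriminatory), which is why the later auditing step still matters.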
The Hiring Team Needs to Be Diverse
Organizations must make their AI and hiring teams more diverse – not just in race and gender, but also by including people from non-traditional professional backgrounds.
However, even if organizations focus on building a diverse team of professionals, they risk overlooking the best hires if they rely on obsolete success markers like compulsory degree qualifications. Therefore, to build a truly diverse team, organizations should open hiring to candidates with non-academic experience by focusing on a more extensive and inclusive set of skills.
For the longevity of such initiatives, organizations should also develop training programs to help diverse candidates blend into their AI and hiring teams.
Monitor the Outcomes at all Stages
Before jumping on the AI-based hiring bandwagon, organizations should communicate with the vendors of the hiring software and fully understand how it works. It is a good idea to request a test run as well.
It is also important to be aware of the existing biases in the sample dataset before feeding it to the tool. If organizations are already using such hiring tools, then they should hire an auditor to test the software. They can audit the outcomes themselves by looking for previously overlooked trends in the selected and rejected candidate pools.
These audits will help organizations understand the depth, scope, and frequency of retraining required to eliminate potential biases.
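A concrete starting point for auditing outcomes is comparing selection rates across groups. The sketch below, using hypothetical data and group labels, computes per-group selection rates and the adverse-impact ratio that the "four-fifths rule" from the US Uniform Guidelines on Employee Selection Procedures compares against 0.8:

```python
def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs. Returns per-group
    selection rates and the adverse-impact ratio (lowest rate divided
    by highest); a ratio below 0.8 is a common red flag."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    rates = {group: selected[group] / totals[group] for group in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical audit data: group A selected 40 of 100, group B 20 of 100.
data = ([("A", True)] * 40 + [("A", False)] * 60 +
        [("B", True)] * 20 + [("B", False)] * 80)
rates, ratio = selection_rates(data)
print(rates)   # {'A': 0.4, 'B': 0.2}
print(ratio)   # 0.5 -> below 0.8, flags potential adverse impact
```

A check like this can be run at every stage of the funnel (resume screen, interview, offer), which is exactly where previously overlooked trends in the selected and rejected candidate pools tend to surface.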