
Hiring is often cited as a prime example of algorithmic bias, where a tendency to favor certain groups over others becomes accidentally baked into an AI system designed to perform a specific task.
There are countless stories about it. Perhaps the best-known example is Amazon's attempt to use AI in recruiting. In that case, CVs were used as the data to train, or improve, the AI.
Since most of the CVs came from men, the AI learned to filter out anything related to women, such as being president of a women's chess club or a graduate of a women's college. Needless to say, Amazon did not end up using the system more widely.
Similarly, the practice of filming interviews and then using an AI to analyze them for a candidate's suitability is often criticized for its potential to produce biased results. Yet proponents of AI in hiring suggest that it makes hiring processes fairer and more transparent by reducing human biases. This raises a question: does the use of AI in hiring inevitably reproduce bias, or can it make hiring fairer?
From a technical perspective, algorithmic bias refers to errors that lead to unequal outcomes for different groups. Rather than a flaw, however, algorithmic bias can also be seen as a reflection of society: AI is mostly trained on data taken from the real world, and these datasets mirror society.
For example, when women of color are underrepresented in datasets, facial recognition software fails more often at recognizing women with darker skin. Similarly, for video interviews, there is concern that tone of voice, accent or gender- and race-specific language patterns may influence assessments.
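To make that idea concrete, here is a minimal sketch of what "unequal results for different groups" looks like in practice. The data, group labels and numbers are invented for illustration; the point is simply that the same model's error rate is computed separately per group, and the gap between groups is the bias.

```python
# Minimal sketch with invented data: measure a model's failure rate per
# demographic group; a large gap between groups indicates algorithmic bias.
import pandas as pd

# Hypothetical evaluation results for some recognition or screening model.
results = pd.DataFrame({
    "group":   ["group A"] * 5 + ["group B"] * 5,
    "correct": [1, 1, 1, 1, 0,   1, 0, 0, 1, 0],
})

# Failure rate per group: 20% for group A, 60% for group B in this toy data.
failure_rate = 1 - results.groupby("group")["correct"].mean()
print(failure_rate)
```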
Lots of bias
Another example: based on its data, an AI can learn that people named "Mark" perform better than people named "Maria" and therefore rank them higher. Existing societal biases are reflected and amplified through data.
Of course, data isn't the only way in which AI-supported hiring can be biased. Designing AI draws on the expertise of many different people: data scientists, machine learning experts (who train an AI system to improve what it can do), programmers, HR professionals, recruiters, industrial and organizational psychologists and hiring managers. Yet it is often claimed that only 12% of machine learning researchers are women. This raises concerns that the pool of people designing these technologies is relatively narrow.
Machine learning processes themselves can also be biased. For example, a company that uses data to help firms hire programmers found that a strong predictor of good coding skills was frequently visiting a particular Japanese cartoon website. Hypothetically, if you want to hire programmers and rely on such machine learning findings, an AI might suggest targeting individuals who studied programming at university, have "programmer" in their current job title and visit Japanese cartoon websites. While the first two criteria are job requirements, the last one is not required to do the job and therefore should not be used. The design of AI hiring technologies therefore requires careful consideration if we want algorithms that support inclusion.
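One safeguard this example suggests, sketched below with hypothetical column names and toy data, is to restrict a screening model's inputs to criteria that humans have reviewed as job-related, rather than letting it use whatever happens to correlate with past hires.

```python
# Sketch with hypothetical column names: drop spurious correlates (such as
# visiting a particular cartoon website) before training a screening model.
import pandas as pd
from sklearn.linear_model import LogisticRegression

candidates = pd.DataFrame({
    "studied_programming":  [1, 0, 1, 1, 0, 1],
    "programmer_job_title": [1, 1, 0, 1, 0, 0],
    "visits_cartoon_site":  [1, 0, 0, 1, 0, 1],  # correlates with past hires, but not job-related
    "hired":                [1, 0, 0, 1, 0, 1],
})

# Only features that humans have judged to be job requirements are used.
JOB_RELATED = ["studied_programming", "programmer_job_title"]

model = LogisticRegression().fit(candidates[JOB_RELATED], candidates["hired"])
```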
Impact assessments and audits that systematically examine AI systems for discriminatory effects are essential to ensure that AI in hiring does not perpetuate biases. The findings can then be used to tweak and adapt the technology so that such biases do not occur again.
Careful consideration
Providers of hiring technologies have developed various tools, such as audits that check results against protected characteristics and monitoring for discrimination by flagging gendered words such as "male" and "female." As such, audits can be a useful tool for assessing whether hiring technologies are producing biased results, and for correcting them.
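As an illustration of what such an audit could check, the sketch below (with invented numbers) compares selection rates across a protected characteristic and flags large gaps. The 0.8 threshold is the common "four-fifths rule," used here only as a yardstick, not necessarily the method any particular provider applies.

```python
# Outcome audit sketch with invented numbers: compare selection rates across
# a protected characteristic and flag a potential adverse impact.
import pandas as pd

outcomes = pd.DataFrame({
    "gender":   ["woman"] * 50 + ["man"] * 50,
    "advanced": [1] * 15 + [0] * 35 + [1] * 30 + [0] * 20,
})

rates = outcomes.groupby("gender")["advanced"].mean()
impact_ratio = rates.min() / rates.max()  # 0.30 / 0.60 = 0.5 in this toy data

print(rates)
if impact_ratio < 0.8:  # illustrative "four-fifths rule" threshold
    print(f"Potential adverse impact (ratio {impact_ratio:.2f}): review before deployment.")
```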
So does the use of AI in hiring inevitably lead to discrimination? In my recent article, I showed that if AI is used in a naive way, without implementing safeguards to avoid algorithmic bias, then the technology will repeat and amplify the biases that exist in society and may also create new biases that did not exist before.
However, if it is implemented with inclusion in mind, in the underlying data, in the processes adopted and in how decisions are taken, AI-supported hiring can be a tool to create more inclusion.
AI-supported hiring does not mean that final hiring decisions should, or will, be left to algorithms. Such technologies can be used to filter candidates, but the final hiring decision rests with people. Hiring can therefore be improved if AI is implemented with a focus on diversity and inclusion, but when the final decision is made by a hiring manager who does not know how to create an inclusive environment, bias can still creep in.
This article is reprinted from The Conversation under a Creative Commons license. Read the original article.