The Potential Biases Caused by Machine Learning and AI Algorithms

Advancements in machine learning and AI algorithms have made life easier for people working across many sectors of the economy.

Car manufacturers have been able to use machines to build cars much faster and more efficiently than their human counterparts, while software companies have used chatbots to converse with customers when human agents are unavailable.

Another field that has been busy adopting machine learning and AI is recruitment. Many platforms offer software tools with algorithms that scan resumes, rank applicants based on assessment scores, search LinkedIn data for strong candidates, and more.
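
As a rough illustration, here is a minimal sketch of the kind of keyword-based scanning and ranking such tools perform (the keywords, resumes, and scoring below are hypothetical simplifications, not any vendor's actual method):

```python
# Hypothetical keyword-based resume screen: score each resume by how
# many required terms it contains, then rank applicants by that score.

REQUIRED_TERMS = ["python", "sql", "leadership"]  # assumed job keywords

def score_resume(text: str) -> int:
    text = text.lower()
    return sum(term in text for term in REQUIRED_TERMS)

resumes = {
    # Note the brittleness: "Led a team" does not match "leadership".
    "applicant_1": "Led a team of analysts; expert in Python and SQL.",
    "applicant_2": "Built data pipelines in Python for five years.",
}

ranked = sorted(resumes, key=lambda name: score_resume(resumes[name]),
                reverse=True)
print(ranked)  # ['applicant_1', 'applicant_2']
```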

Although these tools promise to strengthen your recruiting practices, they unfortunately tend to show bias in their inner workings.

How are Machine Learning and AI Algorithms biased?

According to this piece detailing the bias found in many algorithms, these tools fall short in six key areas.

These shortcomings show why brands should be hesitant to rely on machine learning and other AI algorithms as the basis for their recruiting, and how doing so can lead to errors and headaches down the line.

How is it possible for AI to be biased towards certain candidates?

Although the idea of an algorithm being racist or sexist may seem confusing, a good example of how this happens can be seen in this experiment, where recruiters reviewed resumes with both “white-sounding” and “black-sounding” names.

Even though the resumes all had similar qualifications, the ones with “white-sounding” names received callbacks far more often than the ones with “black-sounding” names.

Keep in mind that the people making these biased decisions are the same people encoding recruiting policy into decision-making algorithms.

Competence and Qualifications

Another area where algorithms can set candidates up for failure is in judging whether they meet the qualifications and have the competence to do a job successfully.

This article details how Enterprise, the car rental company, uses tools provided by a software firm called iCIMS to check if candidates meet minimum requirements like a bachelor’s degree and a past leadership position.

Although this software reliably enforces the rules it is given, it can seriously limit opportunities for candidates who lack the formal requirements but would do the job well regardless.

So Enterprise is actually severely limiting its talent pool with the use of these recruitment algorithms.
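
To make the failure mode concrete, here is a minimal sketch of a hard-requirement filter like the one described above (the field names, rules, and candidates are hypothetical, not iCIMS’s actual implementation):

```python
# Hypothetical hard-requirement screen, illustrating how rule-based
# filters silently drop candidates who could do the job well.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    has_bachelors: bool
    held_leadership_role: bool
    years_relevant_experience: int

def meets_minimum_requirements(c: Candidate) -> bool:
    # A strict AND of checkbox criteria: strong experience cannot
    # compensate for a missing credential.
    return c.has_bachelors and c.held_leadership_role

applicants = [
    Candidate("A", has_bachelors=True, held_leadership_role=True,
              years_relevant_experience=1),
    Candidate("B", has_bachelors=False, held_leadership_role=True,
              years_relevant_experience=8),
]

# Candidate B, with eight years of relevant experience, never reaches
# a human reviewer because one checkbox is false.
shortlist = [c.name for c in applicants if meets_minimum_requirements(c)]
print(shortlist)  # ['A']
```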

Biases in Facial Analysis

Another example can be seen in the video interviewing platform HireVue, which applies a combination of facial analysis and AI to measure things like word choice, gestures, and voice inflections.

If the engineers behind these algorithms make errors or oversights in gathering the data used to build the AI, the resulting unintentional bias flows directly from those mistakes and could cost countless candidates job opportunities.

A related issue specific to facial analysis software is that it can perpetuate the historical bias against overweight candidates while giving an advantage to more physically attractive ones.

The worst part is that as long as these systems continue to turn a profit, the companies selling them will have little incentive to acknowledge the bias and change their systems of recruitment.

Are AI algorithms ignoring entire pools of candidates?

Let’s say your company sets out to hire more veterans, so you configure your software to look specifically for candidates with veteran status. The problem is that you have to reach out to the vendor’s developers directly and have them run tests to ensure the system isn’t skipping over valuable applicants.

This is because algorithms that rely on resume content and interview responses can overlook veterans entirely, simply because some resumes never state the term “veteran” outright.
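
A minimal sketch of why this happens, assuming a naive exact-keyword filter (the matching logic and resumes below are invented for illustration):

```python
# Hypothetical exact-keyword filter for veteran status: a substring
# match misses resumes that convey the same fact in different words.

def flags_as_veteran(resume_text: str) -> bool:
    return "veteran" in resume_text.lower()

resumes = [
    "U.S. Army veteran with logistics experience.",
    "Sergeant, U.S. Marine Corps, 2014-2020; led a 12-person unit.",
]

for text in resumes:
    print(flags_as_veteran(text), "-", text)
# True  - says "veteran" explicitly, so the filter finds it
# False - describes military service without the word "veteran",
#         so this candidate is silently skipped
```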

The tech industry isn’t perfect

To get a better understanding of how biased algorithms can arise, let’s take a deeper look at the industry behind these algorithms.

Despite the tech industry’s history of promoting immigrant entrepreneurs and bringing in foreign workers, it has faced criticism and backlash for its slow progress in hiring female, Black, and Hispanic workers, particularly in its mathematics, computing, and engineering occupations.

Keep in mind that the people in this industry, with its demonstrated biases toward certain groups, are the same people building the algorithms for recruitment software sold all over the world.

To make matters worse, these tech companies train their algorithms on the same historical data that produced the biased environments in the first place, setting their clients up to repeat the cycle.
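
As a toy illustration of that cycle, consider a trivially simple “model” trained on past hiring decisions (the data and feature below are invented for illustration):

```python
# Toy illustration: a "model" trained on historically biased hiring
# labels simply re-encodes the bias, even through a seemingly neutral
# feature that merely correlates with a protected group.
from collections import Counter

# Historical records as (attended_school_A, was_hired). Suppose school A
# historically enrolled mostly one demographic group, and past hiring
# favored its graduates for reasons unrelated to job performance.
history = [(1, 1)] * 80 + [(1, 0)] * 20 + [(0, 1)] * 20 + [(0, 0)] * 80

# "Training" a one-rule classifier: predict the majority historical
# outcome for each feature value.
rule = {}
for feature_value in (0, 1):
    labels = [hired for school_a, hired in history if school_a == feature_value]
    rule[feature_value] = Counter(labels).most_common(1)[0][0]

print(rule)  # {0: 0, 1: 1} -- the learned rule reproduces the old bias
```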

Closing thoughts

Software firms in Silicon Valley working to reinvent the recruiting process for their clients hope to build systems that combat prejudice and find the best person for the position.

The problem is that their flawed practices in building these algorithms not only come up short in combating biased hiring; by relying on historical data that puts female workers and people of color at a disadvantage, they actually help normalize and spread it.