Person shaking hands with a digital arm coming out of a laptop, symbolising unconscious bias in technology

Recognising unconscious biases in L&D technology

Unconscious bias in technology has caused many problems, leaving L&D at a crossroads. We’ll look at the growth of AI before taking you step-by-step through unconscious biases in technology and finishing with lessons for L&D.
Dom Murray, Content Writer
2021-03-10

In popular culture, robots have often been portrayed as emotionless models of objectivity. You can probably picture it now. From their monotonous, metallic voices to their calculating, unfeeling outlook, everything about these on-screen robots marks them as objective decision-makers, free from human error and bias. Unfortunately, the closer we get to actual, real-life robots, the more we realise that this isn’t accurate.

Far from being perfect models of objectivity, we’re now discovering that algorithms and artificial intelligence (AI) are full of the same unconscious biases as humans. While a biased algorithm may sound counterintuitive at first, it makes sense. After all, we’re the ones who created these technologies, so it should be no surprise that they inherited human biases. 

Recently, we looked at Unlearning unconscious biases in L&D. If you haven’t read it already, that article serves as an excellent primer for what we’re about to discuss. 

While that article focused on unconscious bias in humans, today, we’ll take the next step and examine unconscious biases in that one thing we’ve all come to rely on: technology. We’ll start by looking at the growth of AI, before taking you step-by-step through unconscious biases in technology and finishing with lessons for L&D. 

The growth of AI

For a long time, AI was thought of as part of an exciting, faraway future. Well, that future has arrived, as many industries now rely on AI and time-saving algorithms in some capacity. 

According to PwC, automation and AI could replace 38% of jobs by the 2030s. What’s more, the global market for AI is expected to reach $267 billion by 2027, while the number of businesses that use AI has grown by 270% over the last four years.

Additionally, 90% of leading businesses have ongoing investments in AI, with a recent survey finding that 76% of respondents cite AI as fundamental to the success of their organisation's strategy.  

Nevertheless, according to Adobe, AI still has plenty of room to grow. Only 15% of businesses currently use AI, with a further 31% saying AI is on their agenda for the next 12 months. 

Perhaps most alarmingly, Adobe’s findings indicate that you might already be using AI a lot more than you realise. While only 33% of people think they use AI-powered technology, in reality, 77% regularly use an AI-powered service or device. 

The bottom line is that AI and machine learning (ML) are big business. No longer part of a speculative future, these technologies are here to stay, leaving L&D with many important questions to grapple with. 

As these statistics show, AI will be a critical strategic enabler in the future of many businesses. As always, L&D will have a role to play. Whether through the use of hiring algorithms, eLearning technologies, or day-to-day communication tools, technology will shape L&D’s future. Given this, we must be vigilant about unconscious biases in learning technologies and know how to respond when these biases arise. 

Understanding unconscious bias in technology

When we think of unconscious biases, we tend to focus on humans. If you need a quick refresher, our recent blog on unlearning unconscious biases in L&D defined unconscious bias as “attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious way, making them difficult to control”.

In humans, unconscious biases commonly manifest in areas such as race, gender identity, sexual orientation, age, and many more. However, as mentioned, technology is not exempt from unconscious biases. Given our growing reliance on technology — especially in L&D — it is important to know how technological biases operate and the appropriate responses when they arise. 

Examples of unconscious bias in technology

Data scientist Daphne Koller provides an example of unconscious technological bias in an interview with the New York Times. She explains that an algorithm designed to recognise fractures from X-rays instead learned to recognise which hospital had generated the image. This mishap occurred because the algorithm latched onto incidental features that identified the hospital rather than the fracture, exposing a blindspot its creators had not foreseen.
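To make this failure mode concrete, here is a minimal sketch in Python using the scikit-learn library. The data, feature names, and numbers are invented purely for illustration; this is not a reconstruction of the system Koller describes. A weak stand-in for the clinical signal competes with a crisp marker identifying the hospital, and the model learns the marker.

```python
# A toy illustration of shortcut learning. All data below is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Hospital A is a trauma centre that sees mostly fractures; hospital B is not.
from_hospital_a = rng.integers(0, 2, n)
fracture = np.where(from_hospital_a == 1,
                    rng.uniform(0, 1, n) < 0.9,   # 90% of A's scans show fractures
                    rng.uniform(0, 1, n) < 0.1)   # 10% of B's scans do

# Feature 0: a noisy stand-in for the genuine radiological signal.
# Feature 1: a crisp artefact identifying the hospital (e.g. a watermark).
clinical_signal = fracture + rng.normal(0, 2.0, n)
hospital_marker = from_hospital_a.astype(float)

X = np.column_stack([clinical_signal, hospital_marker])
model = LogisticRegression().fit(X, fracture.astype(int))

# The hospital marker dominates the learned weights: the model "diagnoses"
# the hospital, not the fracture, and fails wherever that shortcut breaks.
print("weights [clinical_signal, hospital_marker]:", model.coef_[0])
```

The model scores well on its training data for entirely the wrong reason, which is exactly why shortcuts like this can go unnoticed at first.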

While this turned out to be a relatively harmless example, there are more serious instances of unconscious technological bias. For example, according to Ideal, resume-screening algorithms select resumes with English-sounding names 40% more often than identical resumes with Chinese, Indian, or Pakistani names. Studies dating as far back as 1988 have found similar results, showing that this is not a recent phenomenon.

Dr Michael Yurushkin, founder of the AI consultancy BroutonLab, explains that these results occur because an algorithm is “as racially biased as the data sets and proxies that humans expose it to.” He elaborates that if 80% of the people a company previously hired were white males, “the algorithm will automatically filter out females or non-white-sounding names... any bias contained in the data used to train AI will affect how the system performs.”
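The filtering effect Yurushkin describes can be reduced to a few lines of Python. The sketch below is deliberately crude, with invented records and a made-up scoring rule rather than any vendor’s actual screener: it simply rewards resemblance to past hires, which is enough to reproduce the bias.

```python
# A deliberately crude screener: score candidates by how common their
# profile was among past hires. All records here are invented.
from collections import Counter

# Mirroring Yurushkin's example: 80% of past hires share one profile.
past_hires = (["english-sounding name"] * 80) + (["other name"] * 20)
profile_counts = Counter(past_hires)

def screening_score(candidate_profile: str) -> float:
    # "Fit" is just frequency among past hires -- bias in, bias out.
    return profile_counts[candidate_profile] / len(past_hires)

# Two otherwise identical candidates receive very different scores.
print(screening_score("english-sounding name"))  # 0.8 -> sails through
print(screening_score("other name"))             # 0.2 -> filtered out
```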

Similar issues have arisen in image search and facial recognition technologies. For instance, McKinsey found that only 11% of the images returned by a search for “CEO” were of women, despite women making up 27% of CEOs at the time. Further, in 2018, a facial recognition tool used by police misidentified 35% of dark-skinned women as men. In contrast, the misidentification rate for light-skinned men was only 0.8%, as most of the people who created the tool came from that demographic.

Even when programmers remove overt indicators of gender identity, race, or sexual orientation, algorithms are still not immune to unconscious biases, showing that positive intentions do not always yield positive results. In 2018, Amazon stopped using a hiring algorithm because it favoured resumes that used terms like “executed” or “captured”; the company later discovered that these terms appear on men’s resumes more frequently than on women’s.

Worse, the algorithm downgraded resumes that mentioned the word “women”, as in “women’s chess club champion”. As Logically explains, “the algorithm had been programmed to replicate existing hiring practices, meaning it also replicated their biases.” As such, while algorithms may appear objective and unbiased, they are only as reliable as the data and existing practices they are trained to follow, and those will almost inevitably contain some degree of unconscious human bias.
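The proxy problem is easy to demonstrate. Below is a minimal sketch using scikit-learn and four invented resume snippets; it illustrates the mechanism Logically describes rather than Amazon’s actual system. A text model trained to copy past screening decisions attaches weight to gendered terms even though gender itself is never an input.

```python
# Invented toy data: the labels simply copy past (biased) decisions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed product launch and captured new market",     # advanced
    "captured key accounts and executed sales strategy",   # advanced
    "led product launch and grew new market",              # rejected
    "women's chess club champion, led sales strategy",     # rejected
]
past_decision = [1, 1, 0, 0]  # 1 = advanced to interview

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(resumes)
model = LogisticRegression().fit(X, past_decision)

# Gendered proxy terms carry the signal even though "gender" was never
# an input column: "executed"/"captured" score up, "women" scores down.
weights = zip(vectoriser.get_feature_names_out(), model.coef_[0])
for word, weight in sorted(weights, key=lambda pair: pair[1]):
    print(f"{word:>10}: {weight:+.3f}")
```

Running this prints positive weights for “executed” and “captured” and a negative weight for “women”, purely because of correlations in the training labels.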

Inc summarises this issue while also offering a solution. They say, “we need to remember that human beings are developing technology like AI, each with unconscious biases that impact the solutions they design. Not only is it essential that diverse teams (of humans) work well together to develop those algorithms — it is imperative that we continue discussing how to manage the potential for problems caused by stereotypes and unconscious biases.”

Lessons for L&D

As these examples show, unconscious biases in technology are far-reaching, spanning factors such as gender identity, race, and sexual orientation across vital fields like hiring, healthcare, and policing. It would be short-sighted to think, ‘this is happening to other industries; what does it have to do with L&D?’

While the above examples come primarily from hiring and policing, there is no reason to think that eLearning technologies and even daily communication tools are not susceptible to the same unconscious biases. No technology is inherently immune to this problem. If humans created it, it is likely to have inherited some unconscious human biases, meaning we all have to be vigilant. The stakes are high, and L&D has a crucial role to play.

Not only must L&D be at the forefront of training and awareness around unconscious technological biases, but we must also respond when these biases arise in eLearning technologies to foster a more inclusive and sustainable environment.

Spreading awareness of unconscious biases

There are several lessons that L&D can take from this emerging issue, and awareness is an excellent first step.

Many people take for granted that science and technology are inherently objective. As Timnit Gebru, former co-lead of Google’s Ethical AI team, explains, “we need to change the way we educate people about science and technology. Science currently is taught as some objective view from nowhere, from no one’s point of view.” Better Programming elaborates: “the notion that mathematics and science are purely objective is false. Algorithms are our opinions written in code.”

As such, L&D teams must spread awareness so that unconscious technological biases are not merely taken for granted, but rather actively challenged whenever they arise. 

Awareness of unconscious technological biases should encompass the tools that L&D teams use daily, the communication tools and social media platforms everyone relies on, and the tools regularly used by the teams we train. Most importantly, we must remember that no technology is inherently immune to unconscious bias.

Spreading this simple message will make people more likely to recognise unconscious technological biases when they arise and equip them with the tools to respond appropriately. 

Stronger together

Another important lesson for L&D concerns the benefits of diverse teams. Most businesses should already be aware of the power of diversity; however, unconscious technological biases show the dangers of homogeneous teams.

As Olga Russakovsky, co-founder of the AI4ALL foundation, explains in an interview with the New York Times, “AI researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities. We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues. There are a lot of opportunities to diversify this pool, and as diversity grows, the AI systems themselves will become less biased.” 

In other words, unconscious technological biases are more likely to arise when a team comes from similar backgrounds and perspectives. We can reduce unconscious technological biases by promoting diverse teams. 

Improving unconscious bias training

Finally, L&D must take care when developing unconscious bias training. As Forbes notes, not all unconscious bias training is equal. In fact, research suggests that only 25% of unconscious bias training truly moves the needle. 

Women’s leadership scholar Susan Madsen distinguishes between unconscious bias training that merely ticks a box and unconscious bias training that truly changes behaviours. She explains that “training can be effective if carefully and strategically designed using research-based teaching and training pedagogies.” While this is easier said than done, she provides a comprehensive outline for planning and implementing unconscious bias training in an interview with Forbes.

Dr Pragya Agarwal makes a similar point, explaining that rather than being a day-long course or hour-long seminar, unconscious bias training must be “an ongoing process of educating ourselves and watching ourselves... We have to try to neutralise our stereotypes by making sure that we don’t fall back on them.” 

In other words, unconscious bias training must be a continuous process wherein participants actively challenge both their unconscious biases and unconscious technological biases. 

Unconscious bias in technology and AI is an issue that L&D (and society-at-large) will continue to grapple with for years to come. Awareness is an excellent first step, as is recognising these biases when they manifest. Now, it’s L&D’s responsibility to move the needle forward and create a more inclusive, sustainable future.

For more insights, be sure to subscribe to the Go1 newsletter to stay on top of all the latest L&D trends. Or, you can book a demo today to find out how Go1 can help with your team’s learning needs.
