AI has diversity and inclusion problems. Here's how we can address it

As the world continues to integrate machine learning technology rapidly, AI is becoming more common and familiar. It has already become part of our daily lives, from social media algorithms to voice assistants such as Siri and Alexa.

A 2019 study suggests that this trend isn’t slowing down any time soon, with the number of businesses adopting AI technology growing by 270 per cent within a four-year period.

As AI technology becomes ubiquitous, its impact on individuals’ well-being becomes all the more critical.

In February 2019, for instance, an African-American man was falsely arrested in New Jersey on charges of shoplifting and assault due to a mistake made by a facial recognition system.

Upon further investigation, and according to a study by the National Institute of Standards and Technology published in December 2019, the technology's ability to accurately recognise and identify African-American people was found to be fairly low.

And this was not due to a simple bug in one system. The incident suggests that similar systems may have made the same incorrect assumptions and judgments in the past, wrongfully implicating people in crimes they did not commit. Because of this, there may well be more people in jail today as a result of bias in AI.

Other AI systems have exhibited biases against the poor, ethnic minorities, and people outside the developed nations where most AI companies are based.

For example, object recognition algorithms sold by tech giants have been shown to make significantly more errors when identifying items from households with a monthly income of around US$50 in countries such as Somalia and Burkina Faso.


Meanwhile, these same algorithms made far fewer errors when asked to identify items from an American household with a monthly income of more than US$3,500.

Bias in AI can also be seen in video-conferencing tools’ transcription features, which automatically transcribe virtual meetings.

The problem is that the feature usually only works well for English—more specifically, American English. Hence, the quality of the experience is significantly higher for those who come from English-speaking countries. 

To see why this happens, we must first understand that AI is not yet completely intelligent and is still prone to many inaccuracies. It does not understand everything; it is simply good at finding and recognising learned patterns in the training data it is given.

Therein lies the root of the problem: AI doesn't think the way humans do. As it currently stands, most machine learning applications produce results based on the data they are fed.
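
To make this concrete, here is a minimal, hypothetical sketch in Python (synthetic data and scikit-learn, neither of which comes from the examples above): a toy classifier trained on data dominated by one group ends up noticeably less accurate on the under-represented group, simply because it reproduces the patterns it was fed.

```python
# Illustrative only: synthetic data, toy model. The point is that the model
# mirrors whatever the dominant group in its training set looks like.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic "groups" whose feature/label relationship differs slightly.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.5 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluating per group makes the gap visible instead of averaging it away.
for name, (Xt, yt) in {"A (well represented)": make_group(2000, 0.0),
                       "B (under-represented)": make_group(2000, 2.0)}.items():
    print(f"Group {name}: accuracy = {model.score(Xt, yt):.2f}")
```

In this toy setup the overall accuracy would look respectable even though one group is served far worse, which is exactly the pattern described above.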

This boils down to the individuals and teams training AI. The more diverse machine learning teams and data labellers are, the more inclusive the technology will be. 

An NYU study showed that 80 per cent of AI professors are men, that only 18 per cent of peer-reviewed AI publications are authored by women, and that Black people represent only 13 per cent of the workforce in most US technology companies.

Moreover, many of the facial recognition models built by the tech companies mentioned in the study were trained primarily on data collected from white men. As a result, their technologies may not be as good at identifying populations that don't fit that profile.

This overall lack of diversity in the industry results in a similar lack of awareness and knowledge about minorities’ issues. Having a diverse team introduces a wider range of perspectives and ideas, reducing the risk of bias and diversity issues from the get-go.

Solutions to these diversity biases

By promoting diversity within the teams developing AI applications, and by curating more representative training sets, enterprises can fight bias in their models.
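
On the data-curation side, a useful first step can be as simple as auditing who is actually represented in a training set before any model is trained. The sketch below is hypothetical (the metadata fields and records are invented for illustration), but it shows the idea:

```python
# Hypothetical audit of a labelled training set's demographic coverage.
# The fields ("accent", "gender", "age_band") are assumptions for
# illustration, not a real schema from any particular data provider.
from collections import Counter

training_samples = [
    {"accent": "US English", "gender": "male", "age_band": "20-40"},
    {"accent": "US English", "gender": "male", "age_band": "20-40"},
    {"accent": "Indian English", "gender": "female", "age_band": "20-40"},
    {"accent": "Nigerian English", "gender": "female", "age_band": "40-60"},
    # ...a real corpus would hold thousands of records
]

total = len(training_samples)
for field in ("accent", "gender", "age_band"):
    counts = Counter(sample[field] for sample in training_samples)
    print(f"--- {field} coverage ---")
    for value, n in counts.most_common():
        print(f"{value}: {n}/{total} ({n / total:.0%})")
```

Any group that barely appears in such a report is a group the finished model will probably serve poorly.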

In developing speech-based AI such as text-to-speech or automatic speech recognition (ASR) systems, developers are tasked with collecting real speech samples from a diverse crowd.

When you train models using only the voices of native English-speaking men aged 20 to 40 years old, for example, your model will be very good at recognising the speech of 20- to 40-year-old men. More than likely, however, it will struggle to recognise women, young children, or non-native English speakers.
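
One practical countermeasure is to report accuracy separately for each speaker group rather than as a single headline number. The following sketch computes a per-group word error rate (WER) over hypothetical evaluation records; the group labels and transcripts are invented for illustration.

```python
# Hypothetical per-group evaluation of an ASR model. Group labels, reference
# transcripts, and model outputs below are invented for illustration.
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / max(len(ref), 1)

# (speaker_group, reference transcript, model transcript)
results = [
    ("native_english_male_20_40", "turn on the lights", "turn on the lights"),
    ("non_native_english", "turn on the lights", "turn on the flights"),
    ("child", "play my favourite song", "play my favour it song"),
]

per_group = defaultdict(list)
for group, ref, hyp in results:
    per_group[group].append(word_error_rate(ref, hyp))

# A large gap between groups is a signal to collect more diverse speech data.
for group, wers in sorted(per_group.items()):
    print(f"{group}: mean WER = {sum(wers) / len(wers):.2f}")
```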


The same rings true for facial recognition systems. If they are trained with facial image data from white males, it is likely that the accuracy of the AI will be lower for persons of colour. 

Human and tech partnership

Clearly, AI developers need to actively address this problem of bias by promoting diversity within their respective teams as well as in the training data they collect. Otherwise, their clients will surely take notice, negatively impacting operations and relationships. 

Tech giants, specifically AI organisations, should build, launch, and operate unbiased AI applications. End-to-end data partners like TaskUs should be able to engage diverse teams to collect, annotate, and validate data to ensure that all end-users receive the same experience, regardless of their language, race, or gender. 

Diverse representation is needed both in machine learning data itself and among the people leading the development of AI technology. The more we can incorporate diversity and inclusion into the process, the better our chances are of building intelligent systems that are unbiased and just.

For AI to be truly successful, organisations need to ensure that everybody feels like they are involved in the world we're creating.
