
# How we taught Google Translate to stop being sexist

Online translation tools have helped us learn new languages, communicate across linguistic borders, and view foreign websites in our native tongue. But the artificial intelligence (AI) behind them is far from perfect, often replicating rather than rejecting the biases that exist within a language or a society.

Such tools are especially vulnerable to gender stereotyping because some languages (such as English) don’t tend to gender nouns, while others (such as German) do. When translating from English to German, translation tools have to decide which gender to assign English words like “cleaner.” Overwhelmingly, the tools conform to the stereotype, opting for the feminine word in German.
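To see why this decision point invites stereotyping, here is a minimal sketch (not Google Translate's actual method) of a frequency-based picker: when an ungendered English job noun maps to a gendered German pair, a naive system simply chooses whichever form it saw most often in its training text. The German word pairs and the corpus counts below are illustrative assumptions.

```python
# Toy English -> German lookup where each job noun has a gendered pair.
# Word choices here are assumptions for illustration, not Google's data.
GENDERED_FORMS = {
    "cleaner": {"masculine": "Putzmann", "feminine": "Putzfrau"},
    "doctor": {"masculine": "Arzt", "feminine": "Ärztin"},
    "nurse": {"masculine": "Krankenpfleger", "feminine": "Krankenschwester"},
}

# Hypothetical counts of how often each form appeared in training text.
CORPUS_COUNTS = {
    "Putzmann": 1_200, "Putzfrau": 9_800,
    "Arzt": 14_000, "Ärztin": 3_500,
    "Krankenpfleger": 900, "Krankenschwester": 11_000,
}

def naive_translate(english_noun: str) -> str:
    """Pick whichever gendered form the system saw most often."""
    forms = GENDERED_FORMS[english_noun]
    return max(forms.values(), key=lambda form: CORPUS_COUNTS[form])

print(naive_translate("cleaner"))  # feminine form wins on raw frequency
print(naive_translate("doctor"))   # masculine form wins on raw frequency
```

Because the skewed counts come straight from biased text, the "most likely" translation and the stereotyped translation are the same thing.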

Biases are human: they’re part of who we are. But when left unchallenged, biases can emerge in the form of concrete negative attitudes towards others. Now, our team has found a way to retrain the AI behind translation tools, using targeted training to help it to avoid gender stereotyping. Our method could be used in other fields of AI to help the technology reject, rather than replicate, biases within society.

## Biased algorithms

To the dismay of their creators, AI algorithms often develop racist or sexist traits. Google Translate has been accused of stereotyping based on gender, such as its translations presupposing that all doctors are male and all nurses are female. Meanwhile, the AI language generator GPT-3 – which wrote an entire article for the Guardian in 2020 – recently showed that it was also shockingly good at producing harmful content and misinformation.

These AI failures aren’t necessarily the fault of their creators. Academics and activists recently drew attention to gender bias in the Oxford English Dictionary, where sexist synonyms of “woman” – such as “bitch” or “maid” – show how even a constantly revised, academically edited catalog of words can contain biases that reinforce stereotypes and perpetuate everyday sexism.

AI learns bias because it isn’t built in a vacuum: it learns how to think and act by reading, analyzing, and categorizing existing data – like that contained in the Oxford English Dictionary. In the case of translation AI, we expose its algorithm to billions of words of textual data and ask it to recognize and learn from the patterns it detects. We call this process machine learning, and along the way patterns of bias are learned as well as those of grammar and syntax.
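One way to see how patterns of bias get learned alongside grammar is through word vectors: systems represent words as points in space, and words that co-occur in the training text end up close together. The hand-made three-number vectors below are an assumption for illustration only; real systems learn vectors like these from billions of words.

```python
import math

# Toy word vectors (illustrative assumptions, not learned values).
# If biased text pairs "nurse" with female pronouns more often, learned
# geometry like this encodes that association.
VECTORS = {
    "he":     [0.9, 0.1, 0.3],
    "she":    [0.1, 0.9, 0.3],
    "doctor": [0.8, 0.2, 0.6],
    "nurse":  [0.2, 0.8, 0.6],
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In this toy geometry, "nurse" sits closer to "she" than to "he",
# and "doctor" closer to "he" than to "she".
print(cosine(VECTORS["nurse"], VECTORS["she"]) > cosine(VECTORS["nurse"], VECTORS["he"]))    # True
print(cosine(VECTORS["doctor"], VECTORS["he"]) > cosine(VECTORS["doctor"], VECTORS["she"]))  # True
```

Nothing in the learning process distinguishes a grammatical regularity from a social stereotype: both are just statistical patterns in the data.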

Ideally, the textual data we show AI won’t contain bias. But there’s an ongoing trend in the field towards building bigger systems trained on ever-growing data sets. We’re talking hundreds of billions of words. These are obtained from the internet by using undiscriminating text-scraping tools like Common Crawl and WebText2, which maraud across the web, gobbling up every word they come across.

The sheer size of the resultant data makes it impossible for any human to actually know what’s in it. But we do know that some of it comes from platforms like Reddit, which has made headlines for featuring offensive, false or conspiratorial information in users’ posts.
