Can robots be racist?
How can we make sure AI does not reproduce human biases? AI is advancing at an accelerating pace, but researchers and tech workers have raised concerns that a lack of diversity in the AI community is producing technology that discriminates against minorities, the New York Times reports. Researchers who have tried to highlight the discriminatory nature of AI systems have faced pushback from their employers: Google dismissed two of its leading AI researchers last year after they spoke up about the issue, and, in response to the ensuing public criticism, promised to change its research process.
How exactly can AI be biased? In 2015, Google was widely criticized for its online photo service after it auto-tagged and organized pictures of a Black man into a folder labeled “gorillas.” Google soon apologized and confirmed that the terms had been removed from searches and image tags. Even more shocking, an investigation revealed that a computer program used by the judiciary and penal system in the US to calculate a criminal’s likelihood of reoffending is biased against Black people, rating them as twice as likely as white people to commit another crime. Predictive policing software, which uses AI to predict where crimes may occur and allocates police staff accordingly, has also been accused of over-policing predominantly Black communities. Even GPT-3, a computer program designed to generate text, was found to have a low opinion of Black people and to display sexism and other biases.
Why is this happening? AI systems rely on “neural networks” that learn their skills by analyzing large amounts of digital data. If the engineers choosing the data used to train these systems are biased (or narrow-minded, or lazy), they can unknowingly feed the system material that is skewed in one direction, reproducing their own biases in the machines meant to liberate us from bias. The model has no notion of fairness; it simply mirrors the statistical patterns of whatever data it is given.
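To make that mechanism concrete, here is a minimal, hypothetical sketch (synthetic data, with scikit-learn's LogisticRegression standing in for a full neural network): when the collected labels under-record positive outcomes for one group, the trained model reproduces the gap, even though the two groups are otherwise identical.

```python
# Hypothetical sketch: a model trained on skewed labels reproduces the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # sensitive attribute: group 0 or 1
income = rng.normal(50, 10, n)      # identical income distribution for both

# Ground truth depends only on income...
truth = (income > 50).astype(int)

# ...but the *collected* labels under-record approvals for group 1,
# mimicking a biased historical dataset.
label = truth.copy()
label[(group == 1) & (rng.random(n) < 0.5)] = 0

X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, label)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted approval rate = {rate:.2f}")
# The model approves group 1 far less often, despite identical incomes:
# it has faithfully learned the bias baked into its training data.
```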
AI image recognition comes with ethical challenges: A content moderation system built by US firm Clarifai was meant to automatically remove explicit content from images posted to social networks, and engineers trained the system to distinguish such images from G-rated images (those suitable for a general audience). Because the G-rated training images were dominated by white people, the system erroneously learned to flag images of Black people as explicit. Although an intern at the company reported the problem, the company continued using the model. Similarly, a Black computer scientist reported being unable to get a facial detection system to identify her face until she put on a white plastic Halloween mask.
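Part of why such failures slip through is that a single aggregate accuracy number can look fine while one group absorbs most of the errors. A minimal audit sketch (toy labels, hypothetical groups “A” and “B”) that breaks the false positive rate out per group:

```python
# Hypothetical audit: compare false positive rates across groups instead
# of reporting one overall accuracy, which can hide this failure mode.
import numpy as np

y_true = np.array([0, 0, 1, 0, 1, 0, 1, 1])   # 1 = actually explicit
y_pred = np.array([0, 0, 1, 1, 1, 1, 1, 1])   # the model's decisions
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in ("A", "B"):
    in_group = group == g
    negatives = (y_true == 0) & in_group       # images that are not explicit
    false_pos = (y_pred == 1) & negatives      # ...but were flagged anyway
    fpr = false_pos.sum() / negatives.sum()
    print(f"group {g}: false positive rate = {fpr:.2f}")
# Overall accuracy is 75%, yet every clean image from group B is flagged.
```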
Some companies are wising up to the ethical issues: Last year, IBM, Microsoft, and Amazon stopped letting police use their facial recognition technology, and IBM announced its intention to exit the facial recognition business altogether amid concerns over bias, at least until ethical safeguards for the technology are put in place.
What’s to be done? One camp of researchers says AI should be left to learn on its own until it eventually catches up to society, while another thinks human intervention should take place at the code level to filter out these biases. But shielding AI from the worst of human nature could require censoring historical texts, songs, and other material, which would be a costly and controversial effort.
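For the intervention camp, the crudest version of that code-level filtering is a blocklist applied to training text before the model ever sees it. The sketch below (hypothetical terms and documents) shows both the idea and the cost described above: documents that merely quote an offensive term get discarded along with it.

```python
# Hypothetical sketch of blocklist filtering on a training corpus.
BLOCKLIST = {"slur1", "slur2"}  # placeholders for real offensive terms

def keep(document: str) -> bool:
    """Keep a document only if it contains no blocklisted word."""
    return not (set(document.lower().split()) & BLOCKLIST)

corpus = [
    "a neutral sentence about the weather",
    "a historical text that quotes slur1 in its original context",
]

filtered = [doc for doc in corpus if keep(doc)]
print(filtered)
# Only the first document survives: the historical quotation is censored
# along with the slur, illustrating the trade-off described above.
```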
Our take: Racist robots are not what we bargained for when we imagined our ideal techno-utopia, and if greater diversity in the AI field can make headway toward rectifying that, it is something well worth investing time and money in.