Sunday, 25 July 2021

Can robots be racist?

How can we make sure AI does not reproduce human biases? AI is advancing at an accelerating pace, but researchers and tech workers have raised concerns that a lack of diversity in the AI community is creating technology that discriminates against minorities, the New York Times reports. Researchers who have attempted to highlight the discriminatory nature of AI systems have faced backlash from companies, with Google dismissing two of its key AI researchers last year after they spoke up about the issue. In response to the ensuing criticism, Google promised to change its research process.

How exactly can AI be biased? In 2015, Google was widely criticized for its online photo service after it auto-tagged and organized pictures of a black man into a folder labelled “gorillas.” Google soon apologized and confirmed that the terms had been removed from searches and image tags. More troubling still, an investigation revealed that a computer program used by the US judiciary and penal system to calculate a criminal’s likelihood of reoffending is biased against black people, rating them twice as likely as white people to commit another crime. Predictive policing software, which uses AI to predict where crimes may occur and allocates police staff accordingly, has also been accused of over-policing predominantly black communities. Even GPT-3, a computer program designed to generate text, was found to disparage black people and to display sexism and other biases.

Why is this happening? AI systems rely on “neural networks,” which learn their skills by analyzing large amounts of digital data. If the engineers choosing the data used to train these systems are biased (or narrow-visioned, or lazy), they can unknowingly feed the system material that is skewed in one direction, reproducing their own biases in the machines meant to liberate us from bias.
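The mechanism is easy to see in miniature. Below is a toy sketch (with entirely invented numbers and group labels) of how a model trained on skewed historical decisions simply learns to repeat them:

```python
from collections import Counter, defaultdict

# Hypothetical historical decisions: past human reviewers approved
# "group_a" applicants far more often than "group_b" applicants.
history = (
    [("group_a", "approve")] * 90 + [("group_a", "reject")] * 10
    + [("group_b", "approve")] * 20 + [("group_b", "reject")] * 80
)

# "Training": count how each group was treated in the past.
counts = defaultdict(Counter)
for group, outcome in history:
    counts[group][outcome] += 1

def predict(group):
    # Predict the most common historical outcome for this group --
    # the model has no notion of fairness, only of past frequency.
    return counts[group].most_common(1)[0][0]

print(predict("group_a"))  # approve
print(predict("group_b"))  # reject: the historical skew is now a "rule"
```

Real systems are vastly more complex, but the principle is the same: a model optimized to match past decisions will faithfully reproduce whatever bias those decisions contain.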

AI facial recognition comes with ethical challenges: A content moderation system built by US firm Clarifai was meant to automatically remove [redacted] explicit content from images posted to social networks, and engineers had been training the system to distinguish such content from G‑rated images (those suitable for a general audience). Because the G‑rated images were dominated by white people, the system erroneously learned to flag images of black people as [redacted] explicit. Although an intern at the company reported the problem, the company continued using the model. Similarly, a black computer scientist reported being unable to get a facial detection system to identify her face until she put on a white plastic Halloween mask.

Some companies are wising up to the ethical issues: Last year, IBM, Microsoft and Amazon stopped allowing police to use their facial recognition technology, and IBM announced its intention to get out of the facial recognition business altogether, citing concerns over bias and the absence of ethical safeguards for the technology.

What’s to be done? Some researchers say AI should be left to learn on its own until it eventually catches up to society, while others argue humans should intervene at the code level to filter out these biases. But shielding AI from the worst of human nature could require censoring historical texts, songs and other material, which would be a costly and controversial effort.

Our take: Racist robots are not what we bargained for when we imagined our ideal techno-utopia, and if diversity in the AI field can make some headway rectifying that, then that is definitely something worth investing time and money into.

Enterprise is a daily publication of Enterprise Ventures LLC, an Egyptian limited liability company (commercial register 83594), and a subsidiary of Inktank Communications. Summaries are intended for guidance only and are provided on an as-is basis; kindly refer to the source article in its original language prior to undertaking any action. Neither Enterprise Ventures nor its staff assume any responsibility or liability for the accuracy of the information contained in this publication, whether in the form of summaries or analysis. © 2022 Enterprise Ventures LLC.

Enterprise is available without charge thanks to the generous support of HSBC Egypt (tax ID: 204-901-715), the leading corporate and retail lender in Egypt; EFG Hermes (tax ID: 200-178-385), the leading financial services corporation in frontier emerging markets; SODIC (tax ID: 212-168-002), a leading Egyptian real estate developer; SomaBay (tax ID: 204-903-300), our Red Sea holiday partner; Infinity (tax ID: 474-939-359), the ultimate way to power cities, industries, and homes directly from nature right here in Egypt; CIRA (tax ID: 200-069-608), the leading providers of K-12 and higher level education in Egypt; Orascom Construction (tax ID: 229-988-806), the leading construction and engineering company building infrastructure in Egypt and abroad; Moharram & Partners (tax ID: 616-112-459), the leading public policy and government affairs partner; Palm Hills Developments (tax ID: 432-737-014), a leading developer of commercial and residential properties; Mashreq (tax ID: 204-898-862), the MENA region’s leading homegrown personal and digital bank; Industrial Development Group (IDG) (tax ID: 266-965-253), the leading builder of industrial parks in Egypt; Hassan Allam Properties (tax ID: 553-096-567), one of Egypt’s most prominent and leading builders; and Saleh, Barsoum & Abdel Aziz (tax ID: 220-002-827), the leading audit, tax and accounting firm in Egypt.