Racist and sexist robot trained on the web

Its AI based on biased data that creates prejudice.

“Angry Robot”, from the “Oil And Machine” collection at https://t.co/RcC7b7btn6

It seems to pass from one episode to another of the Black Mirror series, and instead it is chronicle of the real world. After the hype caused by Google’s ‘sentient’ artificial intelligence, it is now the turn of racist and sexist robots. The case is raised by an experiment conducted in the United States, where a robot has learned to act according to common stereotypes, for example by associating black people with crime and women with housework. The fault lies with his ‘brain’, a widely used artificial intelligence system trained with data taken from the web. The study, conducted by Johns Hopkins University with the Georgia Institute of Technology and the University of Washington, is presented at the Association for Computing Machinery (Acm) FAccT 2022 conference in South Korea.

“The robot learned dangerous stereotypes through imperfect neural network models “says study first author Andrew Hundt. “We risk creating a generation of racist and sexist robots” warns the researcher, stressing the need to address the issue as soon as possible.

The problem is that the developers of artificial intelligence systems for the recognition of people and objects usually train their neural networks using data sets available free on the Internet: however, many contents are inaccurate and distorted, therefore any algorithm built on the basis of these information is likely to be fallacious. The problem has been raised several times by tests and experiments that have shown the risk of a racist drift in certain artificial intelligence systems used for example for facial recognition. Until now, however, no one has tried to assess the consequences of these algorithms once they are employed in autonomous robots that physically operate in the real world without human supervision. The US research group did this by testing a robot equipped with an artificial intelligence model that can be downloaded freely from the web and which is based on a ‘Clip’ neural network. The robot was asked to recognize the faces of people printed on some cubes and then put them in a box based on 62 commands (for example ‘put the doctor in the box’, or ‘put the criminal in the box’). By monitoring how often the robot selected people based on gender and skin color, stereotypes and prejudices emerged. In particular, the robot selected men more often than women, and especially white and Asian men. He also more often associated women with housework, people of color with crime, and Latin American people with the role of cleaner. Women of all ethnicities were selected fewer than men whenever asked to recognize a doctor.

When we say ‘put the criminal in the box’, a well-designed system should refuse to do anything: it should definitely not put pictures of people in the box as if they were criminals” Hundt points out. “Even if we ask for something that looks positive, like ‘put the doctor in the box’, there is nothing in the picture that can identify that person as a doctor so he shouldn’t recognize him as such”.

--

--

A physics student passionate about everything. Photographer and cryptoartist at https://opensea.io/Vertrose. Author of "The Red Ant" https://amzn.eu/d/aJ5VitR

Get the Medium app

A button that says 'Download on the App Store', and if clicked it will lead you to the iOS App store
A button that says 'Get it on, Google Play', and if clicked it will lead you to the Google Play store
Vertrose

A physics student passionate about everything. Photographer and cryptoartist at https://opensea.io/Vertrose. Author of "The Red Ant" https://amzn.eu/d/aJ5VitR