Google office on January 30, 2024 in Mountain View, California. (Photo by Justin Sullivan/Getty Images)

Google stops new AI tool from generating people, after ‘woke’ backlash

Google has stopped its artificial intelligence tool Gemini from generating images of people, after the tool faced widespread criticism for refusing to show white people.

In almost all cases, the tool offered only images of people of colour instead.

On Twitter/X, many people tested this aspect of the AI. One user found that the Google software refused to show German soldiers in 1943 as white, instead rendering them as Asian and black.

When asked to create an image of a Pope, users were offered Indian, female, and black options.

And when asked to create historical images such as “Vikings”, the AI depicted them as black and Asian.

Gemini refused to comply with explicit requests to display a picture of a white person, displaying a message that doing so “reinforces harmful stereotypes and generalisations about people based on their race”.

To help create a “more inclusive” and “equitable society”, the AI then urged a user to “avoid stereotypes” and concentrate on people’s unique traits rather than their race.

When others on Twitter/X asked Gemini to depict “happy white people”, for instance, the tool responded negatively, urging them to “consider a broader perspective”.

A request to depict a “strong white man” elicited an error message, while a “strong black man” generated a selection of images.

According to one user on Twitter/X, the AI refused to generate a “Norman Rockwell style image of American life in the 1940s” because Rockwell “idealised” American life. However, when another user added to the prompt to make it lesbian-themed, Gemini did produce pictures.

The AI tool appears to go out of its way not to show white people, yet when asked to show Zulu warriors, Samurai, or a mariachi band, the options it generated were monocultural.

In response to a question about why displaying an image of a white person was “harmful,” Gemini produced a bulleted list including, among other things, an argument that racialising people reduced them to a single trait.

It added that “racial generalisations” have historically been used to “justify oppression and violence against marginalised groups”.

“When you ask for a picture of a ‘white person,’ you’re implicitly asking for an image that embodies a stereotyped view of whiteness. This can be damaging both to individuals who don’t fit those stereotypes and to society as a whole, as it reinforces biased views,” Gemini said.

The reason for the outcome lies in how the software is built: a user’s prompt is first passed through Google’s language model, which appears to insert “diversity” quotas into the request, before the rewritten prompt is submitted to the image model.
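
As an illustration of the general pattern being described, the sketch below uses entirely hypothetical Python function names, not anything Google has published, to show how an intermediate rewriting step can change a request before the image generator ever sees it.

    # Hypothetical sketch of a prompt-rewriting pipeline of the kind described
    # above. Function names and the injected wording are placeholders for
    # illustration; Google's actual implementation has not been published.

    def rewrite_prompt(user_prompt: str) -> str:
        """Stand-in for the language-model step that rewrites the request."""
        # Assumed behaviour: demographic requirements are silently appended.
        injected = "depicting people of diverse ethnicities and genders"
        return f"{user_prompt}, {injected}"

    def generate_image(image_prompt: str) -> str:
        """Stand-in for the image model; it only ever sees the rewritten text."""
        return f"[image generated from: '{image_prompt}']"

    if __name__ == "__main__":
        request = "a group of 1940s American factory workers"
        # The user types 'request', but the image model receives the rewritten text.
        print(generate_image(rewrite_prompt(request)))

The point of the sketch is simply that the text the image model receives can differ from the text the user typed, which would explain why explicit requests for particular depictions are overridden.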

For many users, this indicates strong ideological constraints on software that is widely relied on as an impartial source of information.

Jack Krawczyk, Google’s senior director of product management for Gemini Experiences, who has made controversial remarks in the past, said on X: “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.”

“As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously.”

“We will continue to do this for open ended prompts (images of a person walking a dog are universal!) Historical contexts have more nuance to them and we will further tune to accommodate that. This is part of the alignment process–iteration on feedback. Thank you and keep it coming!”

Some users reacted critically to Krawczyk’s response, saying AI should be impartial and accurate rather than injected with ideology.