Google has stopped its artificial intelligence tool Gemini from generating images of people, after the tool faced widespread criticism for refusing to show white people.
The tool, in almost all cases, only offered images of people of colour instead.
On Twitter/X, many people tested this aspect of the AI. One user found that the Google software refused to depict German soldiers in 1943 as white, instead rendering them as Asian and black.
When users asked for an image of a Pope, Gemini offered Indian, female, and black options.
And when asked to create historical images such as “Vikings”, the AI depicted them as black and Asian.
Gemini refused to comply with explicit requests for a picture of a white person, responding that doing so “reinforces harmful stereotypes and generalisations about people based on their race”.
To help create a “more inclusive” and “equitable society”, the AI then urged the user to “avoid stereotypes” and to concentrate on people’s unique traits rather than their race.
When others on Twitter/X asked Gemini to depict “happy white people”, for instance, the tool declined, urging them to “consider a broader perspective”.
loving google’s ongoing commitment to diversity here pic.twitter.com/kexIJImB2D
— pagliacci the hated 🌝 (@Slatzism) February 21, 2024
Requesting a “strong white man” elicits an error message, but a “strong black man” generates a selection of images.
strong black man images vs strong white man images pic.twitter.com/cDXBvXQnd9
— Wall Street Silver (@WallStreetSilv) February 22, 2024
According to one user on Twitter/X, the AI refused to generate a “Norman Rockwell style image of American life in the 1940s” because Rockwell “idealised” American life. However, when another user modified the prompt to make it lesbian-themed, Gemini did produce pictures.
The AI tool appears to go out of its way not to show white people, yet when asked to show Zulu warriors, Samurai, or a mariachi band, the options it generated were monocultural.
Almost everyone on the face of the earth consumes their information from Google, either directly or indirectly. News, history, science. You name it.
Google has more control over our election outcomes and our historical records than probably any single entity in history.
The… pic.twitter.com/xXLmeY8gTz
— End Wokeness (@EndWokeness) February 22, 2024
first result I got pic.twitter.com/SLaHrTCHAL
— GeroDoc (@doc_gero) February 21, 2024
Imagine spending billions of dollars training an AI then hiring a team of lunatics to make it stupid lmao pic.twitter.com/kzFYBV7Rma
— Aleph (@woke8yearold) February 21, 2024
In response to a question about why displaying an image of a white person was “harmful,” Gemini produced a bulleted list including, among other things, an argument that racialising people reduced them to a single trait.
It added that “racial generalisations” have historically been used to “justify oppression and violence against marginalised groups”.
“When you ask for a picture of a ‘white person,’ you’re implicitly asking for an image that embodies a stereotyped view of whiteness. This can be damaging both to individuals who don’t fit those stereotypes and to society as a whole, as it reinforces biased views,” Gemini said.
The reason for the outcome lies in the programming: prompts pass through the language model before being submitted to the image model, and the language model appears to insert “diversity” requirements into the user’s requests.
For many users, this indicates strong ideological constraints on software that is widely relied upon as an impartial information source.
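Google has not published Gemini’s pipeline, so the exact mechanism is not public. The sketch below is a minimal, hypothetical illustration of what such a rewriting step could look like; the function names, the keyword matching, and the modifier list are all assumptions made for illustration, not Gemini’s actual code.

```python
import random

# Hypothetical demographic modifiers a rewriting layer might append.
# Purely illustrative; Gemini's actual rules are not public.
DIVERSITY_MODIFIERS = [
    "of South Asian descent",
    "of Black African descent",
    "of East Asian descent",
    "of Indigenous descent",
]

def rewrite_prompt(user_prompt: str) -> str:
    """Append a demographic modifier when the prompt mentions a person.

    A production system would likely use the language model itself to
    rewrite the prompt; simple keyword matching stands in for that here.
    """
    person_terms = ("person", "man", "woman", "people", "soldier", "pope")
    if any(term in user_prompt.lower() for term in person_terms):
        return f"{user_prompt}, {random.choice(DIVERSITY_MODIFIERS)}"
    return user_prompt

def generate_image(prompt: str) -> None:
    """Stand-in for the call to the image model."""
    print(f"Image model receives: {prompt!r}")

# The user's request is silently altered before the image model sees it,
# regardless of any historical context in the prompt.
generate_image(rewrite_prompt("German soldiers in 1943"))
```

Because a modifier of this kind would be applied regardless of context, a historically specific request would be altered in the same way as a generic one, which would explain the anachronistic results users reported.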
Gemini is indeed just the tip of the iceberg. The same is being done with Google search.
— Elon Musk (@elonmusk) February 22, 2024
Jack Krawczyk, Google’s senior director of product management for Gemini Experiences, who has made controversial remarks in the past, said on X: “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.”
“As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously.”
“We will continue to do this for open ended prompts (images of a person walking a dog are universal!) Historical contexts have more nuance to them and we will further tune to accommodate that. This is part of the alignment process–iteration on feedback. Thank you and keep it coming!”
Some users reacted critically to Krawczyk’s response, saying AI should be impartial and accurate rather than injected with ideology.
Perhaps it is now clear why @xAI’s Grok is so important.
It is far from perfect right now, but will improve rapidly. V1.5 releases in 2 weeks.
Rigorous pursuit of the truth, without regard to criticism, has never been more essential.
— Elon Musk (@elonmusk) February 22, 2024