Google put its new artificial intelligence tool under review after a barrage of accusations of discrimination. Until further notice, users will not be able to ask Gemini to create images of human beings.
"Some of the images generated are inaccurate or even offensive," acknowledged the technology company's Senior Vice President Prabhakar Raghavan, who also thanked users for their feedback and apologized for the flaws in the system.
We're already working to address recent issues with Gemini's image generation feature. While we do this, we're going to pause the image generation of people and will re-release an improved version soon. https://t.co/SLxYPGoqOZ
— Google Communications (@Google_Comms) February 22, 2024
Raghavan on Friday identified two causes behind the problem. "First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly 'not' show a range," he explained. For example, when asking for images of heterosexual couples, the A.I. also responded with illustrations of homosexual couples. If white couples were requested, black couples appeared.
Second, "the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive." In this case, Gemini refused to create images of white people.
Another frequent criticism was that in seeking diversity, the A.I. generated historically inaccurate results, such as black founding fathers or diverse Nazi soldiers:
This is not good. #googlegemini pic.twitter.com/LFjKbSSaG2
— LINK IN BIO (@__Link_In_Bio__) February 20, 2024
While acknowledging that the errors led to the creation of "embarrassing and wrong" images, Raghavan did not address criticism that the bias appeared to target certain specific groups, such as those mentioned above: white people and heterosexual couples. Nor did he refer to the accusations that Gemini did nothing more than replicate the prejudices of its creators, both in its design and in the information it was fed.
"I can’t promise that Gemini won’t occasionally generate embarrassing, inaccurate or offensive results — but I can promise that we will continue to take action whenever we identify an issue," the senior executive said.
Antisemitism?
Accusations of antisemitism have now been added to the accusations of discrimination against the company's brand-new A.I. product. Although they did not specify which results they were referring to, company spokespersons acknowledged the problem and promised that Gemini will refer anyone who asks about the war in Israel to the Google search engine.
Users criticized the company for Gemini responses such as saying that the events of Oct. 7 are "disputed," questioning whether there were murders during those attacks, suggesting the word "casualties" instead, and calling into question the death toll.
According to Google's woke AI Gemini:
1) October 7 is "disputed"
2) The 1200 number of murdered Israelis includes murdered Palestinians (completely false)
3) we should consider both sides because Hamas wants us to
4) October 7 had no independent investigation conducted, so… pic.twitter.com/7QFy9mF2t7
— Marina Medvin 🇺🇸 (@MarinaMedvin) February 22, 2024
We were told to believe all females! That there id a rape culture! But not if you’re a Jewish woman apparently! @Google is a disgusting woke company. #StandWithIsrael #Israel #Antisemitism #Google pic.twitter.com/0CuFjvoJu5
— jonboy (@jonboy79788314) February 22, 2024
The accusations of antisemitism do not end there. The Daily Wire reported this Friday on a series of internal emails describing anti-Jewish incidents, such as the words "Free Palestine Kill All Jews" handwritten on a flyer in the technology company's New York offices, and an attack on an employee who tried to photograph a person distributing pro-Palestinian leaflets inside the company's London offices.
A company spokesperson consulted by The Daily Wire acknowledged the incidents and assured that the company had "taken action over the last few months against people who’ve violated our workplace policies." However, an employee quoted in the same report stated that he did not feel safe and criticized the fact that the results of the investigation were not made public.