Google CEO Sundar Pichai calls Gemini AI photo diversity scandal ‘unacceptable’; full statement here
After Google's Gemini AI engine sparked outrage with historically false images of racially diverse Nazis, CEO Sundar Pichai called the mistake “unacceptable”.
After Google's Gemini AI engine sparked outrage with historically false images of racially “diverse” Nazis, including black and Asian soldiers in Wehrmacht uniforms, CEO Sundar Pichai called the mistake “unacceptable”, admitting that it has “offended our users and shown bias”.
"I know that some of its responses have offended our users and shown bias — to be clear, that's completely unacceptable, and we got it wrong," Pichai stated in a mail sent to his staff, directing them to work “around the clock” to fix the issues.
In the memo, first accessed by Semafor, Pichai said: “No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes.”
Last week, users criticized Gemini, accusing it of anti-white bias after it produced racially diverse images in historically inaccurate contexts.
Take a look at Pichai's full statement
"Hi everyone
I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong.
Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.
Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.
We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.
Even as we learn from what went wrong here, we should also build on the product and technical announcements we’ve made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.
We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let’s focus on what matters most: building helpful products that are deserving of our users’ trust."
Google parent Alphabet's shares slump 4.5 percent
Alphabet, Google's parent company, reportedly lost over $90 billion in market value on Monday, February 26, amid the controversy surrounding its generative artificial intelligence product. Alphabet shares slumped 4.5% to $138.75, according to Forbes, the stock's lowest price since January 5 and its second-largest daily drop in the past year.
Earlier, Tesla CEO Elon Musk lambasted the AI chatbot by calling it "insane" and “racist”. “I'm glad that Google overplayed their hand with their AI image generation, as it made their insane racist, anti-civilizational programming clear to all,” he wrote in a post on X.
Jack Krawczyk, senior director of Gemini Experiences, acknowledged the problem, adding that while the tool generates a diverse range of people worldwide, it was "missing the mark" in historical circumstances. "We're working to improve these kinds of depictions immediately," he said.
Meanwhile, Google has paused the tool's ability to generate images of people while it works to fix the errors.
This is not the first time AI has faltered when dealing with real-world diversity issues. Google faced backlash nearly a decade ago when its Photos app incorrectly labeled a photo of a black couple as "gorillas."