Gemini faces backlash for generating misleading historical images

Google's Gemini chatbot, formerly known as Bard, is under scrutiny for generating illustrations that misrepresent the race of historical figures and groups. The controversy emerged when a former Google employee tweeted images of women of colour generated by Gemini.

The tweet triggered a wave of similar complaints, primarily from right-wing figures, who shared images depicting America's founding fathers and Catholic popes as people of colour.

In its own experiment, The Verge found that the chatbot portrayed Nazi-era soldiers in a similarly inaccurate way.

In response to the growing criticism, Google issued a statement on X acknowledging the inaccuracies in the historical images generated by Gemini.

“We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here,” said Google.

Google launched Bard in March 2023 to avoid losing the AI race to its rival OpenAI, then renamed it Gemini in February this year.

Gemini product lead Jack Krawczyk tweeted on the issue, saying, “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.”

Krawczyk assured users that the company takes representation and bias seriously. He said the problem arises when a user writes a more specific prompt, such as one with historical context; for open-ended prompts, Gemini continues to generate racially diverse pictures.

“This is part of the alignment process – iteration on feedback. Thank you and keep it coming!” noted Krawczyk.

Image generation models like Gemini are trained on vast assortments of pictures, so it is unsurprising that these platforms sometimes produce racially inaccurate or biased images. For example, an investigation by The Washington Post revealed that prompts for ‘a productive person’ generated images of overwhelmingly white, male figures.

The incident highlights the ongoing challenges of training AI models and the complexities involved in the process.

In the News: Indian government directs X to block accounts critical of its policies

Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations.
