Credit: Jonathan Raa/NurPhoto via Getty Images

Google has temporarily paused its Gemini AI tool’s ability to generate images of people after reports that it produced historically inaccurate illustrations. The company issued an apology after the tool depicted Nazi-era soldiers as people of colour.


Multiple prompts produced historically inaccurate visuals, including depictions of people of colour as German soldiers in 1943 and as U.S. senators in the 1800s.

“We’re actively addressing the recent issues with Gemini’s image generation feature,” stated Google. “During this time, we will pause the generation of images of people and plan to release an enhanced version soon.”

Google had previously acknowledged that Gemini’s image generation was designed to depict a diverse range of individuals, reflecting its global user base.

“While this is generally positive as it serves users worldwide, in this instance, it missed the mark,” Google explained.

AI image generators have been criticized for perpetuating racial stereotypes, and one of Google’s AI principles is to “prevent creating or reinforcing unfair bias.” Jack Krawczyk, from the Gemini team, highlighted that “historical contexts contain more complexity.”

“Adhering to our AI principles, we design our image generation capabilities to represent our diverse user base globally, taking representation and bias seriously,” Krawczyk emphasized. “We are committed to maintaining this approach for open-ended prompts.”



Sam Haysom

Sam Haysom is the Deputy UK Editor for Mashable. He covers entertainment and online culture, and writes horror fiction in his spare time.

