![Illustration featuring Google's Bard logo](https://www.trendfeedworld.com/wp-content/uploads/2024/05/Google-still-hasn39t-fixed-Gemini39s-biased-image-generator.jpg)
In February, Google paused its AI-powered chatbot Gemini's ability to generate images of people after users complained about historical inaccuracies. For example, prompted to depict "a Roman legion," Gemini would show an anachronistically diverse group of soldiers, while "Zulu warriors" would be rendered uniformly as Black.
Google CEO Sundar Pichai apologized, and Demis Hassabis, the co-founder of Google's AI research division DeepMind, said a fix should come "very soon" – but we're now well into May, and the promised fix has yet to appear.
Google touted plenty of other Gemini features at its annual I/O developer conference this week, from custom chatbots to a vacation planner and integrations with Google Calendar, Keep, and YouTube Music. But generating images of people remains disabled in Gemini apps on the web and mobile, a Google spokesperson confirmed.
So what's the delay? The problem is likely more complex than Hassabis let on.
The datasets used to train image generators like Gemini's generally contain more images of white people than people of other races and ethnicities, and the images of non-white people in those datasets reinforce negative stereotypes. In an apparent attempt to correct these biases, Google implemented clumsy hardcoding under the hood to add diversity to prompts that didn't specify a person's appearance. And now it's struggling to find a reasonable middle ground that avoids repeating history.
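To make the reported mechanism concrete, here is a minimal sketch in Python of what naive prompt-side diversity injection could look like. This is not Google's actual code; the function names, keyword lists, and injected attributes are all hypothetical, invented for illustration.

```python
import random

# Hypothetical illustration of naive prompt-side "diversity injection."
# All names and lists below are invented, not Google's implementation.

# Attributes appended when a prompt mentions people but no appearance.
DIVERSITY_ATTRIBUTES = ["South Asian", "Black", "East Asian", "Hispanic", "white"]

# Crude trigger words signaling that the prompt depicts people.
PERSON_TERMS = {"person", "people", "man", "woman",
                "soldier", "soldiers", "warrior", "warriors"}

def mentions_person(prompt: str) -> bool:
    return any(term in prompt.lower().split() for term in PERSON_TERMS)

def specifies_appearance(prompt: str) -> bool:
    # Extremely naive check; a real system would need far more nuance.
    return any(attr.lower() in prompt.lower() for attr in DIVERSITY_ATTRIBUTES)

def augment_prompt(prompt: str) -> str:
    """Append a random demographic attribute to under-specified people prompts."""
    if mentions_person(prompt) and not specifies_appearance(prompt):
        return f"{prompt}, {random.choice(DIVERSITY_ATTRIBUTES)}"
    return prompt

# The failure mode the article describes: the rule has no notion of
# historical context, so a prompt about a Roman legion is rewritten
# the same way as any generic prompt, producing anachronistic results.
print(augment_prompt("soldiers of a Roman legion marching"))
```

The sketch shows why a blanket rule like this is hard to patch: it cannot distinguish a historically specific prompt from a generic one, so any fix has to reason about context rather than simply toggling the injection on or off.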
Will Google get there? Maybe. Maybe not. Whatever the case, this drawn-out affair is a reminder that no solution to misbehaving AI is easy – especially when bias is at the root of the misbehavior.