Fake AI images are showing up in Google search — and it’s a problem

Right now, if you type “Israel Kamakawiwoʻole” into Google search, you don’t see one of the singer’s famous album covers, or an image of him performing one of his songs on his iconic ukulele. What you see first is an image of a smiling man sitting on a beach, but it’s not a photo of the singer taken with a camera. It’s a fake photo generated by AI. In fact, clicking on the image takes you to the Midjourney subreddit, where the series of images was originally posted.

I first saw this posted on X (formerly known as Twitter) by Ethan Mollick, a Wharton professor who studies AI.

Looking at the photo up close, it’s not hard to spot the traces AI left behind. The fake depth-of-field effect is applied unevenly, the texture of his shirt is garbled, and, of course, he’s missing a finger on his left hand. None of that is surprising. As good as AI-generated images have become over the past year, they’re still fairly easy to spot when you look closely.

The real problem, though, is that these images are showing up as the first result for a famous, well-known figure without any watermarks or indication that they are AI-generated. Google has never guaranteed the authenticity of its image search results, but there’s something that feels deeply troubling about this.

Now, there are some possible explanations for why this happened in this particular case. The Hawaiian singer, commonly known as Iz, passed away in 1997, and Google always wants to surface the latest information for users. Given how little new coverage or discussion of Iz has appeared since then, it’s not hard to see why the algorithm picked up these images. And while it doesn’t feel particularly consequential in Iz’s case, it’s easy to imagine examples that would be far more problematic.

Even if we don’t continue to see this happen at scale in search results, it’s a prime example of why Google needs rules around this. At the very least, AI-generated images should be clearly labeled in some way before things get out of hand. If nothing else, give us a way to automatically filter out AI images. Given Google’s own interest in AI-generated content, however, there’s reason to think it may prefer to slip AI-created content into its results without clearly marking it.

Luke Larsen
Senior Editor, Computing
Luke Larsen is the Senior Editor of Computing, managing all content covering laptops, monitors, PC hardware, Macs, and more.