It's harder than ever to tell AI-generated images from real photographs and illustrations produced by flesh-and-blood human beings. In recent years, the fakery produced by AI models has become a lot more realistic and a lot more convincing. We're now firmly past the uncanny valley.
However, that doesn't mean it's impossible to spot AI pictures: There are still signs to watch out for, checks you can make, and tools you can use to distinguish the genuine from the synthetic. As is the case with AI-generated video, you don't have to give up just yet.
You may not be able to settle it definitively every single time, but in a lot of cases you can make a pretty educated guess. And in an age of disinformation and AI slop, being able to make the distinction is a skill that's worth honing.
Some chatbots are now putting hidden watermarks into their image outputs, identifying them as AI-generated. While these watermarks aren't difficult to remove (a simple screenshot of the image will do it), they're a good place to start when it comes to trying to tell if an image has been made by AI.
Anything produced by Google Gemini, for example, will have what's called a SynthID watermark embedded somewhere in it. To test the authenticity of an image, you can upload a picture to Gemini on the web and simply ask, "Was this image made by AI?" Gemini will be able to find the SynthID watermark, if it's there.
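If you'd rather run that check from a script, the same question can be put to Gemini programmatically. Here's a minimal sketch using Google's google-genai Python SDK; the model name is a placeholder, and whether the API consults the SynthID watermark the way the web app does is an assumption, so treat the answer with care.

```python
# A minimal sketch: asking Gemini about an image via the google-genai SDK
# (pip install google-genai). The model name is a placeholder, and it isn't
# guaranteed that the API checks the SynthID watermark rather than guessing.
from google import genai

client = genai.Client()  # reads the GEMINI_API_KEY environment variable

suspect = client.files.upload(file="suspect-image.png")  # hypothetical filename
response = client.models.generate_content(
    model="gemini-2.5-flash",  # placeholder model name
    contents=[suspect, "Was this image made by AI?"],
)
print(response.text)
```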
There's another standard way of labeling AI images, developed by the Coalition for Content Provenance and Authenticity (C2PA): the labeling itself is also called C2PA, and it's supported by companies including OpenAI, Adobe, and Google. If you head to a C2PA checking website such as Content Credentials, you can upload an image and get it analyzed for evidence of AI creation.
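You can run the same check locally, too. Below is a minimal sketch using the open-source c2pa-python package; the Reader call follows the package's documented usage, but the manifest fields inspected are assumptions based on the C2PA spec, so check the current docs before relying on them.

```python
# A minimal sketch: inspecting an image's C2PA Content Credentials locally
# with the open-source c2pa-python package (pip install c2pa-python). The
# manifest keys below are assumptions based on the C2PA spec, and the API
# has changed between package versions, so verify against the current docs.
import json
from c2pa import Reader

try:
    reader = Reader.from_file("suspect-image.jpg")  # hypothetical filename
    store = json.loads(reader.json())
except Exception as err:
    print(f"No Content Credentials found: {err}")
else:
    active = store["manifests"][store["active_manifest"]]
    print("Claim generator:", active.get("claim_generator"))
    # The IPTC digital source type "trainedAlgorithmicMedia" is how a
    # manifest typically declares that an image was AI-generated.
    if "trainedAlgorithmicMedia" in json.dumps(active.get("assertions", [])):
        print("This image declares itself AI-generated.")
```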
If an image passes these checks, that's not a guarantee it's genuine, but they're worth running through anyway: They will catch some AI generations, and in many cases even tell you which model was used to make the picture. If you're still not sure, you can move on to looking at the context around an image.
Check the Context
No image is an island: It will have come from somewhere, and been shared by someone. You can rely on respected publications (such as the one you're reading) to honestly label images that have been generated by AI, and properly attribute other images that haven't. You'll know exactly what you're looking at.
In the wilds of social media, of course, the lines are much more blurred. Here, content is posted and reposted without context or attribution, and it's much more likely that something on Facebook or X has been faked. That's especially true if the picture is designed to attract engagement, through controversy or cuteness or any of the other emotional levers that get pulled.

Another trick you can try, especially when it comes to images associated with news stories, is to look for complementary pictures taken from different angles. Are the pictures consistent? Do the details match up from different viewpoints and across different time periods? For illustrations and graphic art, you can again check to see if any credits have been applied: See if what you're looking at has a link back to the artist and their portfolio.
A reverse image search can sometimes reveal where an image has come from, and help you find other copies on the web: TinEye is perhaps the best resource for this. If there are no other matches, that points towards AI, especially if it's been posted without context on social media, and especially via an account trying to monetize or sell something.
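If the image you're checking is already hosted somewhere, you can jump straight to a TinEye search from a script. Here's a small sketch; it just builds a link to TinEye's public search page using its url query parameter, so no API key is involved.

```python
# A small helper that opens a TinEye reverse image search in the browser.
# TinEye's public search page accepts an image URL via its "url" query
# parameter; this builds and opens that link, so no API key is needed.
import webbrowser
from urllib.parse import quote

def reverse_search(image_url: str) -> None:
    """Open TinEye's reverse image search for a publicly hosted image."""
    webbrowser.open("https://tineye.com/search?url=" + quote(image_url, safe=""))

# Example with a hypothetical image URL:
reverse_search("https://example.com/suspect-image.jpg")
```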
Look for the Signs
We know AI bots aren't actually taking any photographs or sketching any pictures: They're producing approximations of images based on prompts and their training data (which is vast amounts of creative work done by people). That approach can lead to a certain generic sheen that gives away a lot of AI-generated content.
Anime characters look like generic anime characters, trees look like generic trees, and city streets look like generic city streets. There's even a recognizable ChatGPT font that the AI bot reverts to whenever you ask for some text without any specific style (like an average of all the fonts ever created), and you'll recognize it if you try generating a few pictures with text in ChatGPT.

Physics is still a problem, though the errors aren't as egregious as they used to be. Try rendering a view of a castle or a vast office block interior in an AI bot and you'll notice turrets appear in pointless places, staircases lead to nowhere, and elevator doors don't actually lead to elevators. There are often logical inconsistencies, because AI doesn't really understand buildings or interior space, just how to create a decent simulation of them in visual form.
We may be past the point of six fingers on hands, but faces and limbs regularly look squished and unnatural, and details are often fuzzy and blurred. Some of these problems will be easier to spot than others, but with a little practice and a few test renders of your own, you should get better at identifying them.
