Google says it plans to roll out changes to Google Search to make clearer which images in results were AI-generated or edited with AI tools.
In the next few months, Google will begin to flag AI-generated and -edited images in the “About this image” window on Search, Google Lens, and the Circle to Search feature on Android. Similar disclosures may make their way to other Google properties, like YouTube, in the future; Google says it’ll have more to share on that later this year.
Crucially, only images containing “C2PA metadata” will be flagged as AI-manipulated in Search. C2PA, short for Coalition for Content Provenance and Authenticity, is a group developing technical standards to trace an image’s history, including the equipment and software used to capture and/or create it.
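In practice, that provenance data is embedded in the image file itself. As a rough illustration of what "containing C2PA metadata" means, the sketch below scans a JPEG's application segments for the "c2pa" manifest-store label defined by the spec. This is only a byte-level heuristic for this article's purposes, not a spec-compliant check; actual verification of signatures and edit history requires the coalition's tooling, such as the open-source c2patool.

```python
"""
Crude sketch: does a JPEG appear to carry a C2PA manifest?

C2PA manifests are embedded in JPEG APP11 (0xFFEB) segments as JUMBF
boxes labeled "c2pa". This is a heuristic byte-level check only --
real verification needs a C2PA SDK or c2patool.
"""
import struct
import sys


def has_c2pa_marker(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()

    # Must be a JPEG (starts with the SOI marker).
    if not data.startswith(b"\xff\xd8"):
        return False

    offset = 2
    while offset + 4 <= len(data):
        # Each segment starts with 0xFF followed by a marker byte.
        if data[offset] != 0xFF:
            break
        marker = data[offset + 1]
        # Stop at start-of-scan; compressed image data follows.
        if marker == 0xDA:
            break
        # Segment length includes the two length bytes themselves.
        (length,) = struct.unpack(">H", data[offset + 2:offset + 4])
        segment = data[offset + 4:offset + 2 + length]
        # APP11 (0xEB) segments hold JUMBF payloads; look for the
        # "c2pa" label used by the C2PA manifest store.
        if marker == 0xEB and b"c2pa" in segment:
            return True
        offset += 2 + length
    return False


if __name__ == "__main__":
    print(has_c2pa_marker(sys.argv[1]))
```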
Companies including Google, Amazon, Microsoft, OpenAI, and Adobe back C2PA, but the coalition's standards haven't seen widespread adoption. As The Verge noted in a recent piece, C2PA faces plenty of adoption and interoperability challenges; only a handful of generative AI tools, along with cameras from Leica and Sony, support the group's specs.
Moreover, C2PA metadata, like any metadata, can be stripped out or corrupted to the point where it's unreadable. And images from some of the more popular generative AI tools, such as Flux (which xAI's Grok chatbot uses for image generation), don't carry C2PA metadata at all, in part because their creators haven't agreed to back the standard.
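How easily that metadata disappears is worth spelling out: most image pipelines simply don't copy application segments they don't recognize. As a minimal sketch (filenames are placeholders), re-saving a labeled JPEG with Pillow, a common Python imaging library, writes a new file without the APP11 blocks that hold C2PA data.

```python
"""
Sketch: provenance metadata is lost on a plain re-encode.
Pillow does not copy unrecognized application segments (such as the
APP11 blocks carrying C2PA data) when saving, so the output file
contains only the pixels and default JPEG segments.
"""
from PIL import Image

with Image.open("labeled.jpg") as im:
    # No metadata is explicitly carried over, so the C2PA manifest
    # embedded in the source file is dropped in the copy.
    im.save("scrubbed.jpg", quality=90)
```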
Granted, some measures are better than none as deepfakes continue to spread rapidly. According to one estimate, scams involving AI-generated content rose 245% from 2023 to 2024, and Deloitte projects that deepfake-related losses will soar from $12.3 billion in 2023 to $40 billion by 2027.
Surveys show that the majority of people are concerned about being fooled by a deepfake and about AI’s potential to promote the spread of propaganda.