Google to join industry mechanism to discern AI-generated content
Google is in the process of joining a cross-industry collaboration that can identify and watermark online content generated using AI or created synthetically
Google is in the process of joining a cross-industry collaboration that can identify and watermark online content generated using artificial intelligence (AI) or created synthetically, a top Google executive said on Wednesday.

The efforts come amid growing unease over synthetic content such as deepfakes, which can cause a wide variety of harms. Markham Erickson, vice president, government affairs and public policy at Google’s Centers for Excellence, disclosed the efforts in an interview with HT on the sidelines of the Global Technology Summit.
“We are working in cross industry conversations to have interoperability so that other companies that are also providing [AI] tools and content that is created synthetically and [when such content] is coming across our systems, that our users will be able to have the same kind of transparency, there be same awareness [that it is synthetic media]. And so, there’ll be some interoperability,” Erickson said.
In November, YouTube announced that all content created using YouTube’s generative AI products would be labelled as altered or synthetic. Joining a cross-industry consortium would let YouTube identify and label synthetic content created with non-YouTube tools as well.
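For illustration, interoperable labelling schemes of this kind generally work by binding an assertion that content is synthetic to a fingerprint of the file itself, so that any participating platform can check whether a label actually belongs to the file it accompanies. The Python sketch below is a hypothetical simplification: the manifest fields, the `build_provenance_manifest` and `verify_manifest` helpers, and the omission of cryptographic signing are all assumptions made for brevity, not any consortium’s actual format.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(media_bytes: bytes, generator: str) -> dict:
    """Build a minimal, hypothetical provenance manifest for a media file.

    Real interoperable schemes are far richer (signed claims, certificate
    chains, edit histories); this only shows the core idea: bind a
    'this is AI-generated' assertion to a hash of the exact content.
    """
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "assertion": "synthetic-media",   # content was generated or altered by AI
        "generator": generator,           # the tool that produced it
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that a manifest refers to this exact file (hash match)."""
    return manifest["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()

if __name__ == "__main__":
    fake_image = b"...image bytes..."
    manifest = build_provenance_manifest(fake_image, generator="example-ai-tool")
    print(json.dumps(manifest, indent=2))
    print("label matches file:", verify_manifest(fake_image, manifest))
```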
What about content from entities that are not a part of such a consortium? “There is going to have to be efforts for those entities that don’t want to participate in that kind of sharing to identify synthetically created content. And that technology is developing and is going to have to develop further,” he said.
Earlier in 2023, the company announced it was working on classifiers to identify AI-generated synthetic media, which would also be part of its effort to combat political misinformation. One such effort uses AI itself to identify AI-generated audio: in 2022, Google’s researchers reported 99% accuracy in detecting AI-generated audio with a classifier model they had created.
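At its core, such a detector is a binary classifier trained on labelled examples of genuine and AI-generated audio. The Python sketch below is illustrative only, assuming scikit-learn, crude spectral features and random stand-in data; Google has not published the details of the model its researchers used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def spectral_features(waveform: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Reduce a raw waveform to averaged FFT magnitudes. Real detectors use
    far richer features (spectrograms, learned embeddings)."""
    spectrum = np.abs(np.fft.rfft(waveform))
    # Average the spectrum into fixed-size bins so every clip yields
    # a feature vector of the same length.
    bins = np.array_split(spectrum, n_bins)
    return np.array([b.mean() for b in bins])

# Stand-in data: random waveforms in place of real and synthetic audio clips.
rng = np.random.default_rng(0)
real = [rng.normal(size=16000) for _ in range(200)]             # label 0: genuine
synthetic = [rng.normal(size=16000) * 0.8 for _ in range(200)]  # label 1: AI-generated

X = np.stack([spectral_features(w) for w in real + synthetic])
y = np.array([0] * len(real) + [1] * len(synthetic))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2%}")
```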
Detecting synthetic media and deepfakes has become an increasingly difficult challenge. Recent versions of AI art programmes such as Midjourney, DALL-E and Stable Diffusion have yielded viral, realistic but fake images, such as those of the Pope disc-jockeying, while new photo editing features from Adobe and Google make once-challenging manipulations, such as removing backgrounds or objects, easy.
The company’s policies require advertisers who post election ads on Google and YouTube to also disclose if their ads include material that has been digitally altered or generated.
The video sharing platform also made it obligatory for all creators to disclose when they have produced realistic altered or synthetic content, including with AI tools, particularly on sensitive topics such as elections, ongoing conflicts and public health crises, or on public officials.
It is not clear how effective such disclosure-based labelling is. Under YouTube’s policy, creators who repeatedly fail to make these disclosures risk having their content removed, being suspended from the YouTube Partner Programme (and thereby losing ad revenue), and facing other penalties.
Misinformation has emerged as an increasingly endemic problem across most social media. On YouTube in particular, channels that systematically spread misinformation often change their names and URLs after being publicly fact-checked, while retaining their subscribers and views. As a result, they evade further scrutiny and continue to earn from YouTube ads.
“YouTube has policies [for] content creators that constantly violate their terms of service, the three-strike policy through which action is taken against such content creators. You rightly point out that it has to be appropriately executed. And hopefully, we’re staying one step ahead of the people that are trying to game the system,” Erickson said.
When people try to game YouTube’s systems in such ways and are eventually identified, it strengthens the detection systems, he added.
“AI is a great way for us to better train a lot of our models, so that we can actually do better content moderation. The more content that we see and when we see folks trying to manipulate different types of media, etc., it makes our systems better and stronger,” YouTube’s director and global head of responsibility Timothy Katz said on November 30.