Google tests invisible watermark to detect AI-generated images

Google is testing a digital watermarking system called SynthID, developed by its DeepMind unit, to identify AI-generated images and combat disinformation.

The system subtly alters pixels in images, making watermarks invisible to humans but detectable by computers. 
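
Google has not published SynthID's embedding scheme. As a loose illustration of the general idea of pixel-level watermarking, the sketch below uses a naive least-significant-bit (LSB) scheme, a textbook technique that is not SynthID's method and, unlike SynthID, does not survive resizing or re-encoding:

```python
# Hypothetical illustration only: SynthID's actual scheme is not public.
# A naive LSB watermark hides one bit in the lowest bit of each pixel
# value: invisible to the eye, readable by software.

def embed(pixels, bits):
    """Overwrite the least significant bit of the first len(bits) pixels."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b
    return out

def extract(pixels, n_bits):
    """Read the hidden bits back out of the least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [200, 13, 255, 0, 128, 77, 64, 90]   # toy grayscale pixel values
mark = [1, 0, 1, 1, 0, 0, 1, 0]               # toy watermark payload

marked = embed(pixels, mark)
# Each pixel value changes by at most 1 out of 255 -- imperceptible.
assert all(abs(a - b) <= 1 for a, b in zip(marked, pixels))
assert extract(marked, len(mark)) == mark
```

The fragility of this toy scheme is exactly the problem SynthID is meant to solve: its watermark is designed to survive common edits rather than live in individual bit values.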

The system seeks to address the challenge of distinguishing real images from AI-generated ones. Unlike visible watermarks, DeepMind's watermark is designed to remain detectable even after the image is edited.

Pushmeet Kohli, head of research at DeepMind, stated, “You can change the colour, you can change the contrast, you can even resize it… [and DeepMind] will still be able to see that it is AI-generated.” He emphasized that this is a preliminary launch, acknowledging the need for user feedback and real-world testing to assess its robustness.

The technology is designed to identify images created with Google's own image generator, Imagen. Other tech companies, including Microsoft and Amazon, have also committed to watermarking AI-generated content.

This move toward watermarking reflects the ongoing efforts to create a more accountable and trustworthy landscape for AI-generated content. 

China has already taken steps in this direction by mandating watermarks for AI-generated images, with companies like Alibaba adopting the practice to enhance transparency within their AI-powered creative tools.
