Google tests invisible watermark to detect AI-generated images
Google is testing a digital watermarking system called SynthID, developed by DeepMind, to identify AI-generated images and combat disinformation.
The system subtly alters pixels in images, making watermarks invisible to humans but detectable by computers.
The approach addresses the challenge of distinguishing real images from AI-generated ones. Unlike visible watermarks, DeepMind's solution remains effective even after the image is edited.
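SynthID's actual algorithm is not public. As a rough illustration of the general idea of an invisible, pixel-level watermark, the sketch below uses classic least-significant-bit (LSB) embedding: it hides a bit string in the lowest bit of each pixel value, which is imperceptible to humans but trivially readable by a program. (Unlike SynthID, plain LSB marks are easily destroyed by resizing or recompression, which is exactly the weakness DeepMind's technique is designed to overcome.)

```python
# Illustrative only: this is NOT SynthID's method, just the simplest
# form of an invisible pixel-level watermark (LSB embedding).

def embed_watermark(pixels, bits):
    """Hide a bit string in the least-significant bits of pixel values (0-255)."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Read the hidden bits back out of the first `length` pixels."""
    return [p & 1 for p in pixels[:length]]

# Usage: embed the mark 1,0,1,1 in a tiny "image", then recover it.
image = [200, 201, 13, 54, 90]
marked = embed_watermark(image, [1, 0, 1, 1])
assert extract_watermark(marked, 4) == [1, 0, 1, 1]
# Each pixel changes by at most 1 intensity level, invisible to the eye.
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
```

A robust watermark like SynthID must instead spread its signal across the image in a way that survives colour shifts, contrast changes, and resizing, which is why it relies on learned models rather than fixed bit positions.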
Pushmeet Kohli, head of research at DeepMind, stated, “You can change the colour, you can change the contrast, you can even resize it… [and DeepMind] will still be able to see that it is AI-generated.” He emphasized that this is a preliminary launch, acknowledging the need for user feedback and real-world testing to assess its robustness.
The technology is designed to identify images created with Google's own image generator, Imagen. Other tech companies, including Microsoft and Amazon, have also committed to watermarking AI-generated content.
This move toward watermarking reflects the ongoing efforts to create a more accountable and trustworthy landscape for AI-generated content.
China has already taken steps in this direction by mandating watermarks for AI-generated images, with companies like Alibaba adopting the practice to enhance transparency within their AI-powered creative tools.