Thursday, November 21, 2024

Google DeepMind is making its AI text watermark open source

SynthID introduces additional information at the point of generation by changing the probability that tokens will be generated, explains Kohli. 
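To make that idea concrete, here is a minimal, generic sketch of probability-biasing watermarking. All names here are illustrative, and the simple reweighting below is not DeepMind's actual algorithm (SynthID's published scheme, tournament sampling, is more involved); the sketch only shows the core move of nudging token probabilities with a secret key at generation time.

```python
import hashlib
import random

def keyed_score(key: str, context: tuple, token: str) -> float:
    """Pseudorandom score in [0, 1) derived from a secret key, the
    recent context, and a candidate token. Deterministic, so the
    detector can recompute it later."""
    digest = hashlib.sha256(f"{key}|{context}|{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def watermarked_sample(probs: dict, context: tuple, key: str,
                       bias: float = 2.0) -> str:
    """Reweight the model's token probabilities using the keyed score,
    then sample. Tokens the key 'favors' become slightly more likely."""
    weights = {t: p * (1.0 + bias * keyed_score(key, context, t))
               for t, p in probs.items()}
    total = sum(weights.values())
    tokens = list(weights)
    return random.choices(tokens, [weights[t] / total for t in tokens])[0]
```

Because the bias is small and spread across many tokens, the text stays fluent while the key's statistical fingerprint accumulates.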

To detect the watermark and determine whether text has been generated by an AI tool, SynthID compares the expected probability scores for words in watermarked and unwatermarked text. 
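Detection then runs the same keyed scoring in reverse. A hypothetical detector for the sketch above (again illustrative, not SynthID's actual detector) averages the keyed scores over the text: unwatermarked text should average near 0.5, while text sampled with the same key skews measurably higher.

```python
import hashlib

def keyed_score(key: str, context: tuple, token: str) -> float:
    """Same keyed pseudorandom score used at generation time."""
    digest = hashlib.sha256(f"{key}|{context}|{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def detection_score(tokens: list, key: str, window: int = 4) -> float:
    """Average keyed score over a token sequence. Roughly 0.5 for
    text unrelated to the key; higher when sampling was biased
    toward high-scoring tokens with this key."""
    scores = [
        keyed_score(key, tuple(tokens[max(0, i - window):i]), tokens[i])
        for i in range(1, len(tokens))
    ]
    return sum(scores) / len(scores)
```

In practice the decision would come from a statistical test on that average, with longer texts giving more confident verdicts.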

Google DeepMind found that using the SynthID watermark did not compromise the quality, accuracy, creativity, or speed of generated text. That conclusion was drawn from a massive live experiment of SynthID’s performance after the watermark was deployed in its Gemini products and used by millions of people. Gemini allows users to rank the quality of the AI model’s responses with a thumbs-up or a thumbs-down. 

Kohli and his team analyzed the scores for around 20 million watermarked and unwatermarked chatbot responses. They found that users did not notice a difference in quality and usefulness between the two. The results of this experiment are detailed in a paper published in Nature today. Currently SynthID for text only works on content generated by Google’s models, but the hope is that open-sourcing it will expand the range of tools it’s compatible with. 

SynthID does have other limitations. The watermark was resistant to some tampering, such as cropping text and light editing, but it was less reliable when AI-generated text had been thoroughly rewritten or translated from one language into another. It is also less reliable in responses to prompts asking for factual information, such as the capital city of France, because there are fewer opportunities to adjust the likelihood of the next possible word in a sentence without changing the facts.
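That factual-prompt limitation is really a statement about entropy: when the model is nearly certain of the next word, there is little probability mass left to nudge. A short illustration with Shannon entropy (the distributions below are invented for the example):

```python
import math

def entropy_bits(probs: list) -> float:
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-token distribution after "The capital of France is" --
# almost all mass on one token, so almost no room to embed a watermark.
factual = [0.98, 0.01, 0.005, 0.005]

# Hypothetical distribution mid-way through a creative sentence --
# many plausible continuations, plenty of room to bias the choice.
creative = [0.25, 0.20, 0.20, 0.15, 0.10, 0.10]
```

Here `entropy_bits(factual)` is a fraction of a bit while `entropy_bits(creative)` is over two bits, which is why open-ended generations carry the watermark more robustly than single-fact answers.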
