You are viewing a single comment's thread from:

RE: LeoThread 2024-10-24 11:17

Google Has a Tool for Tagging and Detecting AI-Generated Text. It’s a Nice Concept, but There’s Still a Problem With It

  • The company has recently made its SynthID watermarking technology available to all developers and platforms.

  • The tool is beneficial for identifying the surge of AI-generated content.

  • However, the main issue is that several competing watermarking options already exist. What we truly need is a universal standard.

#google #ai #images #deepfake


AI should label its creations. Just like authors sign their written works and painters sign their paintings, generative AI systems should mark the content they produce as AI-generated. Google, which has previously explored this concept, has recently made significant strides in this area. However, the problem still persists: We need a universal standard.

SynthID. Google DeepMind has been working to tackle this issue. It introduced SynthID over a year ago, and the text watermarking tool is now available for free to developers and businesses. The goal is to give generative AI platforms a way to sign the content they create, making it easier to identify AI-generated works.

How it works. According to DeepMind, SynthID can tag text, music, images, and videos generated by AI. For text, the key is that an AI model generates output token by token, where each token can be a single character, a word, or part of a phrase. The model predicts the next token from the preceding context by assigning a score to each candidate token. SynthID subtly adjusts these scores during generation, embedding a recognizable pattern of score choices that can later be compared against a piece of text to help determine whether it was generated by AI.
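
To make the idea concrete, here is a minimal toy sketch of score-based text watermarking and detection. It is not SynthID's actual algorithm; the tiny vocabulary, the `WATERMARK_KEY` secret, and the simple "favored set" scheme are all hypothetical, chosen only to illustrate how nudging token scores can leave a statistically detectable pattern.

```python
import hashlib
import random

# Hypothetical toy vocabulary and secret key (not part of SynthID).
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]
WATERMARK_KEY = "demo-key"


def favored_set(context, key=WATERMARK_KEY):
    """Deterministically derive a 'favored' half of the vocabulary
    from the secret key and the preceding tokens."""
    seed = hashlib.sha256((key + "|" + " ".join(context)).encode()).hexdigest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, k=len(VOCAB) // 2))


def pick_next_token(model_scores, context):
    """Nudge the model's scores toward the favored set, then pick the top token.
    model_scores maps each candidate token to the model's raw score."""
    favored = favored_set(context)
    adjusted = {tok: score + (1.0 if tok in favored else 0.0)
                for tok, score in model_scores.items()}
    return max(adjusted, key=adjusted.get)


def watermark_rate(tokens):
    """Detection side: measure how often each token falls in the favored set
    implied by its context. Unwatermarked text should hover near 0.5;
    a rate well above that suggests watermarked output."""
    hits = sum(1 for i, tok in enumerate(tokens) if tok in favored_set(tokens[:i]))
    return hits / max(len(tokens), 1)
```

The essential point the sketch captures is that the generator and the detector share a secret rule for scoring tokens, so the watermark survives as a statistical bias across the whole text rather than as any visible marker in it.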