The arms race continues between those attempting to detect GenAI-created content and those who want to keep its origins concealed. A typical example is detecting whether ChatGPT was used to write content such as an academic paper. According to reports, OpenAI has built a subtle watermarking system based on the word choices its own ChatGPT system makes, embedding an indicator of AI generation directly in the text. Although reportedly highly accurate, it works only on output from OpenAI's ChatGPT and not on AI-generated content from other systems. It can also be intentionally circumvented by running the content through other systems or paraphrasing filters.
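OpenAI has not published how its watermark works, but statistical text watermarks are commonly described along the lines of a "green list" scheme: at each step, the previous word seeds a pseudo-random split of the vocabulary, and the generator prefers words from the "green" half, leaving a statistical fingerprint a detector can count. The sketch below is a toy illustration of that general idea, not OpenAI's actual method; the tiny vocabulary, function names, and the uniform-choice "model" are all invented for demonstration.

```python
import hashlib
import random

# Toy vocabulary; a real system operates over a model's full token set.
VOCAB = ["quick", "fast", "rapid", "swift", "speedy", "brisk",
         "slow", "sluggish", "gradual", "leisurely", "steady", "calm"]

def green_list(prev_word, vocab, fraction=0.5):
    # Seed a PRNG from the previous word so the detector can
    # reproduce the exact same vocabulary split later.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(vocab) * fraction)])

def generate(n_words, vocab, seed=0):
    # Stand-in for a language model: picks uniformly, but only
    # from the green list induced by the previous word.
    rng = random.Random(seed)
    words = ["start"]
    for _ in range(n_words):
        greens = green_list(words[-1], vocab)
        words.append(rng.choice(sorted(greens)))
    return words[1:]

def green_fraction(words, vocab):
    # Detector: recompute each green list and count how often the
    # text landed in it. Watermarked text scores near 1.0;
    # unwatermarked text hovers around the list fraction (0.5 here).
    prev = "start"
    hits = 0
    for w in words:
        if w in green_list(prev, vocab):
            hits += 1
        prev = w
    return hits / len(words)
```

This also makes the article's two caveats concrete: detection requires knowing the seeding scheme (so it only works for the vendor's own system), and rewording the text through another model breaks the word sequence the fingerprint depends on.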
We have seen many GenAI detection systems come and go: they emerge with promise, only to be undermined quickly. This is not the first AI text detector OpenAI has created; the previous one was withdrawn after a rapid decline in accuracy.
With the rise of deepfakes, there has been more focus on consistently detecting fabricated content, but nothing long-lasting has emerged.
AI is a great tool for creating content, but knowing what is authentic is getting very difficult indeed!