[Image: a watermark similar to one on a bank note.]


Watermarking AI content is now a widespread regulatory requirement around the world. The idea is to confirm, through imperceptible marks, whether content was AI-generated, and thereby help reduce misinformation.

For example, watermarks on physical cash help people check that a bank note is genuine rather than fraudulent.

IF thinks this pattern has limited usefulness, mainly because watermarks are so easy to circumvent: they can be removed, and they can produce false positives. It also raises questions about where society should draw boundaries around synthetic content, because much content today is made by humans and AI together. For example, most smartphone photographs are created by a combination of the human eye and AI-driven processing.
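To illustrate why an imperceptible mark can be both machine readable and fragile, here is a toy least-significant-bit watermark sketch. This is purely illustrative and not how any production watermarking scheme works; the image, mark, and function names are all hypothetical.

```python
# Toy sketch: hide a watermark in the least significant bit (LSB) of each
# pixel of a grayscale image (here, a flat list of 0-255 values). Changing
# the LSB alters brightness by at most 1, so the mark is imperceptible.

def embed(pixels, bits):
    """Write each watermark bit into one pixel's least significant bit."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def extract(pixels, n_bits):
    """Read the watermark back from the least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

image = [200, 201, 199, 198, 202, 200, 197, 203]  # hypothetical pixels
mark = [1, 0, 1, 1]                               # hypothetical watermark

marked = embed(image, mark)
assert extract(marked, 4) == mark  # the mark reads back intact

# But even mild editing destroys it: brightening every pixel by 1
# (far less disruptive than a crop, resize, or re-save) flips the bits.
edited = [p + 1 for p in marked]
assert extract(edited, 4) != mark
```

The fragility shown in the last two lines is the crux of the circumvention problem: any re-encoding step can erase a mark that was designed to be invisible.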


  • May provide protection from bad actors creating harmful or misleading content.
  • Helps establish authenticity.
  • Visually discreet so as not to interfere with the content itself.
  • Can be both human- and machine-readable.


  • The level of confidence in the accuracy of a watermark will vary by context, for example confidence will be higher on content generated with AI tools within a platform and lower for externally-generated content.
  • Watermarking may depend on creators voluntarily revealing the processes behind their work.
  • Watermarks can be visually removed in editing processes.
  • Other needs for information about content - for example to identify political advertising, or to explain to users why they are seeing a piece of content - create competing demands for limited screen space.
  • It does not allow users to contest the decision to use AI.
  • Over time, people may simply come to ignore watermarks.