A message saying "You're seeing this because you liked posts related to motocross in the past"; "motocross" is highlighted to offer more information, and there is an option to "forget".


Show, in the interface, what an AI system has learned and what a user has told it to forget. This gives people the ability to change what an AI system uses as training data, based on their changing needs or preferences, and increases confidence that their decisions have been respected.

For example, a service could highlight the information that can be forgotten or deleted at the point of use.
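As a rough illustration of how a service might support this, here is a minimal sketch of a profile store that records the provenance of an inferred interest, surfaces it at the point of use, and honours a "forget" request. All names here (`InferredInterest`, `ProfileStore`, their methods) are hypothetical, not any real service's API.

```python
from dataclasses import dataclass

# Hypothetical sketch: these names are illustrative only.

@dataclass
class InferredInterest:
    topic: str                # e.g. "motocross"
    reason: str               # provenance shown at the point of use
    forgotten: bool = False   # set when the user asks to forget it

class ProfileStore:
    def __init__(self):
        self.interests: list[InferredInterest] = []

    def explain(self, topic: str):
        """Return the text shown in the interface next to a recommendation,
        or None if the interest is unknown or has been forgotten."""
        for interest in self.interests:
            if interest.topic == topic and not interest.forgotten:
                return f"You're seeing this because {interest.reason}"
        return None

    def forget(self, topic: str):
        """Honour a 'forget' request; keeping the flagged record lets the
        interface show the user that their decision was respected."""
        for interest in self.interests:
            if interest.topic == topic:
                interest.forgotten = True

store = ProfileStore()
store.interests.append(
    InferredInterest("motocross", "you liked posts related to motocross in the past")
)
print(store.explain("motocross"))  # provenance shown at the point of use
store.forget("motocross")
print(store.explain("motocross"))  # None: no longer used for personalisation
```

Keeping the flagged record, rather than deleting it outright, is one way a service could later show the user a list of things it has been told to forget; a genuine right-to-erasure request would require actual deletion.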

IF thinks this pattern could help people navigate difficult moments in their lives, like divorce or bereavement. For example, people could ask an AI system to “forget” the name of a former partner in autocomplete. Forgetting is also a digital right granted under the GDPR as “the right to erasure”.

Showing that AI systems can forget things is important. It feels distinctly different from how computers typically work today, and it reinforces a sense of human agency over AI systems.


  • Organisations could use this technique to remove biased, proprietary or copyright-infringing training data.
  • Helps models stay up to date as legislation and data standards evolve.
  • Supports people as their lives inevitably change over time. Enables people to use their legal right to erasure.


  • Technical limitations may mean that some material is not truly forgotten but instead replaced or suppressed. Making this distinction understandable to users may be challenging.
  • People could use forgetting as a privacy feature. However, forgetting still leaves a trail, so it could create more risk for the user if used in this way.
  • Outputs of the AI system could still be influenced by what it learned from the “forgotten” data before it was forgotten.