
Fruits of Deception: Model Destruction as an Enforcement Tool

While the ethical development of AI systems poses various challenges around fairness, transparency, privacy, and security, regulating AI involves an array of further challenges, such as a perceived trade-off between innovation and enforcement, ill-fitting or non-existent remedies, and a lack of public resources. Despite these challenges, recent regulatory activity from the US Federal Trade Commission (FTC) shows that the US watchdog will scrutinize AI systems, even if it means ordering their outright destruction.

The FTC is an independent government agency overseeing public enforcement of several laws, notably those concerning antitrust and consumer protection, including the Children’s Online Privacy Protection Act (COPPA). In a recent settlement agreement with WW International and its subsidiary, Kurbo, the FTC ordered (i) deletion of the personal information that their weight loss application illegally collected from children under 13, (ii) payment of US$1.5 million in penalties, and (iii) destruction of any “affected work product,” meaning models and algorithms developed in whole or in part using the personal information collected from children.

While the first two are significant, it is the last tool in the FTC’s arsenal that deserves closer attention. Dubbed algorithmic disgorgement, this newsworthy enforcement mechanism is inspired by monetary disgorgement – a legal remedy that involves repayment of ill-gotten gains, in other words, relinquishing the “tainted” fruits of one’s wrongdoing. In their article, FTC Commissioner Rebecca Slaughter and her co-authors Janice Kopec and Mohamad Batal explain:

“One innovative remedy that the FTC has recently deployed is algorithmic disgorgement. The premise is simple: when companies collect data illegally, they should not be able to profit from either the data or any algorithm developed using it.”

This is a pioneering approach to regulating AI: the regulator orders not only deletion of the personal data but also disposal of the machine learning models trained on that data. For developers of AI systems, model destruction poses a significant regulatory risk and therefore serves as a potent deterrent against unfair and deceptive data practices.

In fact, this was not the first time the FTC used model destruction as a remedy. First, in 2019, the FTC ordered Cambridge Analytica to delete any algorithms or equations which originated from the data collected illegally from Facebook users. Again in 2021, Everalbum, Inc. was ordered to destroy the personal data it illegally obtained through its photo album application, along with the facial recognition models trained on that data. Considering the increase in the number of AI regulations, these updates from the FTC suggest that algorithmic disgorgement is here to stay. It remains a strong incentive to invest in good data practices early on, such as adopting appropriate policies for data lineage and data retention, and incorporating privacy-preserving technologies when building machine learning models.
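To make the data lineage point concrete, below is a minimal sketch of a lineage registry in Python. All names and identifiers here are hypothetical, and a production system would typically rely on dedicated metadata or experiment-tracking tooling, but the core idea is the same: record which models consumed which datasets, so that a deletion order can be traced to every affected artifact.

```python
# Minimal sketch of a data lineage registry (all names hypothetical).
# The idea: record which datasets each model artifact was trained or
# tested on, so that if a dataset is later deemed "tainted", every
# affected model can be identified for retraining or destruction.

from dataclasses import dataclass, field

@dataclass
class LineageRegistry:
    # dataset_id -> set of model_ids that consumed it
    _consumers: dict = field(default_factory=dict)

    def record_use(self, model_id: str, dataset_id: str) -> None:
        """Log that a model was trained or evaluated on a dataset."""
        self._consumers.setdefault(dataset_id, set()).add(model_id)

    def affected_models(self, dataset_id: str) -> set:
        """All models touched by a dataset, e.g. one ordered deleted."""
        return self._consumers.get(dataset_id, set())

registry = LineageRegistry()
registry.record_use("churn-model-v3", "user-signups-2021")
registry.record_use("recommender-v1", "user-signups-2021")

# If "user-signups-2021" turns out to have been collected unlawfully,
# the registry shows which models inherit the taint:
print(registry.affected_models("user-signups-2021"))
# e.g. {'churn-model-v3', 'recommender-v1'}
```

The point is not the particular data structure but the discipline: without some record of this kind kept from the start, complying with a disgorgement order becomes guesswork.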

That said, model destruction is not without its own challenges, nor is it a one-size-fits-all remedy for every type of unethical data practice.

  • First, decommissioning models or decoupling data is never as easy as it sounds. This is arguably an operational issue, but it is central to the effectiveness of algorithmic disgorgement as a remedy. Simply put, the same “tainted” dataset may be used in training and testing multiple models, which interact with each other in complex ways. It can be equally difficult to isolate this dataset from its other uses upstream or downstream, or to delete it in a meaningful way, especially if the organization does not have robust governance controls in place. A haphazard approach often means a significant setback for the organization, rendering model destruction a business continuity risk.
  • The second point concerns semantics. The FTC includes “algorithms” in its definition of “affected work products” subject to destruction. This is interesting because it may undermine the FTC’s intent, which is to disallow any gains derived from data collected and used in an unfair and deceptive manner. An algorithm is but a set of instructions that runs on the training data to achieve an objective, such as solving a problem. The outcome is the machine learning model used to achieve this objective. The imprint of the tainted data is therefore on the training (or testing) dataset and on the model. Algorithms are generally more disposable than these two and can be replaced relatively easily, as the sketch after this list illustrates.
  • Third, model destruction is simply not an effective remedy in certain respects. In a forthcoming paper, Tiffany C. Li argues that while more effective than data deletion, model destruction still does not redress the privacy harms suffered by the victims. Similarly, model destruction does not replace effective regulation of AI system development, deployment, or use. While the FTC has published a blog post with general guidance for organizations with AI capabilities, more work is needed on this front.
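To illustrate the semantic point in the second bullet: the algorithm is the generic training procedure, while the model is the fitted artifact whose parameters carry the data’s imprint. Here is a minimal sketch using scikit-learn purely for illustration; the datasets and variable names are hypothetical.

```python
# Sketch of the algorithm-vs-model distinction (illustrative only).
# The "algorithm" is the generic training procedure; the "model" is
# the fitted artifact whose learned parameters carry the imprint of
# whatever data it was trained on.

import numpy as np
from sklearn.linear_model import LogisticRegression  # the algorithm

rng = np.random.default_rng(0)

# Hypothetical tainted dataset (e.g., unlawfully collected features).
X_tainted = rng.normal(size=(100, 3))
y_tainted = (X_tainted[:, 0] > 0).astype(int)

# Fitting the algorithm to tainted data yields a tainted model:
tainted_model = LogisticRegression().fit(X_tainted, y_tainted)
print(tainted_model.coef_)  # parameters shaped by the tainted data

# The algorithm itself is untouched and reusable: fitting the very
# same procedure to lawfully collected data yields a clean model.
X_clean = rng.normal(size=(100, 3))
y_clean = (X_clean[:, 1] > 0).astype(int)
clean_model = LogisticRegression().fit(X_clean, y_clean)
```

Destroying the fitted model (and the underlying data) targets the ill-gotten gain; destroying the generic training procedure accomplishes little, since it can be rewritten or obtained off the shelf.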

All in all, algorithmic disgorgement emerges as a compelling enforcement mechanism, albeit one in need of clearer regulatory guidance on its practical application.

Photo: David Man & Tristan Ferne / Better Images of AI / Trees / CC-BY 4.0
