CCoM Seminar

The Measure of Deception: An Analysis of Data Forging in Machine Unlearning

Rishabh Dixit
UCSD

Abstract:

Motivated by privacy regulations and the need to mitigate the effects of harmful data, machine unlearning seeks to modify trained models so that they effectively ``forget'' designated data. A key challenge in verifying unlearning is forging: adversarially crafting data that mimics the gradient of a target point, thereby creating the appearance of unlearning without actually removing information. To capture this phenomenon, we consider the collection of data points whose gradients approximate a target gradient within tolerance $\epsilon$, which we call an $\epsilon$-forging set, and develop a framework for its analysis. For linear regression and one-layer neural networks, we show that the Lebesgue measure of this set is small: it scales on the order of $\epsilon$, and for sufficiently small $\epsilon$, on the order of $\epsilon^d$. More generally, under mild regularity assumptions, we prove that the measure of the forging set decays as $\epsilon^{(d-r)/2}$, where $d$ is the data dimension and $r < d$ is the dimension of the span of the right singular vectors corresponding to ``small'' singular values of a variation matrix defined by the model gradients. Extensions to batch SGD and almost-everywhere smooth loss functions yield the same asymptotic scaling. In addition, we establish probability bounds showing that, under non-degenerate data distributions, the likelihood of randomly sampling a forging point is vanishingly small. These results provide evidence that adversarial forging is fundamentally limited and that false unlearning claims can, in principle, be detected.
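The $\epsilon$-forging set for linear regression can be made concrete with a small Monte Carlo experiment (an illustrative sketch, not material from the talk): under squared loss, the per-example gradient at parameters $w$ for a point $(x, y)$ is $(w^\top x - y)\,x$, and a candidate point forges a target $(x_0, y_0)$ if its gradient lies within $\epsilon$ of the target's. All distributions and dimensions below are assumptions chosen for illustration.

```python
import numpy as np

# Monte Carlo estimate of the probability that a random data point
# epsilon-forges the gradient of a fixed target point, for linear
# regression with squared loss. All choices here are illustrative.
rng = np.random.default_rng(0)
d = 5
w = rng.normal(size=d)                    # fixed model parameters
x0, y0 = rng.normal(size=d), rng.normal() # target point to be "unlearned"
g_target = (w @ x0 - y0) * x0             # per-example gradient at the target

n = 200_000
X = rng.normal(size=(n, d))               # candidate forging points
Y = rng.normal(size=n)
G = (X @ w - Y)[:, None] * X              # per-example gradients, row-wise
dist = np.linalg.norm(G - g_target, axis=1)

# Fraction of candidates inside the epsilon-forging set, for several epsilon.
fracs = {eps: float(np.mean(dist <= eps)) for eps in (1.0, 0.1, 0.01)}
for eps, frac in fracs.items():
    print(f"eps={eps}: forging fraction ~ {frac:.5f}")
```

Since the forging sets are nested in $\epsilon$, the empirical fractions are monotone, and they shrink rapidly as $\epsilon$ decreases, consistent with the vanishing-probability bounds described in the abstract.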

Tuesday, May 12, 2026
11:00 AM, AP&M 2402 and Zoom (Meeting ID: 964 2834 3800)