Can overfitted deep neural networks in adversarial training generalize? – An approximation viewpoint

Analysis

In this talk, I will discuss whether overfitted DNNs in adversarial training can generalize, from an approximation viewpoint. We prove by construction that there exist infinitely many adversarial training classifiers on over-parameterized DNNs that attain arbitrarily small adversarial training error (overfitting) while achieving good robust generalization error, under certain conditions on the data quality, the separation between classes, and the perturbation level. The construction is optimal and thereby points out the fundamental limits of DNNs under adversarial training with statistical guarantees. Part of this talk is based on our recent work.
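To fix intuitions for the quantities in the abstract (adversarial training error, robust generalization error, perturbation level, and well-separated data), here is a toy sketch. The 1-D linear model and FGSM-style inner maximization are illustrative assumptions for exposition only, not the over-parameterized DNN construction from the talk.

```python
import math

def adv_train(data, eps, lr=0.5, steps=500):
    """Adversarial training of a 1-D logistic classifier sign(w*x + b).
    data: list of (x, y) with labels y in {-1, +1}; eps: perturbation level.
    Each step trains on the worst-case perturbation of size eps (FGSM-style)."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            # Worst-case perturbation within [-eps, eps] for a linear model
            x_adv = x - eps * y * (1.0 if w >= 0 else -1.0)
            g = -y / (1.0 + math.exp(y * (w * x_adv + b)))  # d(loss)/d(logit)
            gw += g * x_adv
            gb += g
        w -= lr * gw / len(data)
        b -= lr * gb / len(data)
    return w, b

def robust_error(data, w, b, eps):
    """Fraction of points misclassified under some perturbation of size eps."""
    wrong = sum(
        1 for x, y in data
        if y * (w * (x - eps * y * (1.0 if w >= 0 else -1.0)) + b) <= 0
    )
    return wrong / len(data)

# Well-separated classes (margin 2.4) and perturbation level below the margin
data = [(-1.5, -1), (-1.2, -1), (1.2, 1), (1.5, 1)]
w, b = adv_train(data, eps=0.5)
print(robust_error(data, w, b, eps=0.5))  # → 0.0
```

When the classes are well separated relative to the perturbation level, the trained classifier drives the adversarial training error to zero and its robust error is also zero; shrinking the separation or raising `eps` past the margin breaks this, mirroring the conditions in the abstract.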

Release Date

March 1, 2024

Runtime

52 min

Institution

University of Warwick