Approximate and partial symmetries in Deep Learning

Speaker: Mircea Petrache

Pontificia Universidad Católica de Chile

Date: Tuesday, November 21st, 2023 at 12:00 (Santiago time)

Abstract: The image of a rotated cat still represents a cat. While this simple rule seems obvious to a human, it is not obvious to neural networks, which separately “learn” each new rotation of the same image. The same applies to different groups of symmetries for images, graphs, texts, and other types of data. Implementing “equivariant” neural networks that respect these symmetries reduces the number of learned parameters and improves generalization outside the training set. On the other hand, in networks that “identify too much”, that is, where we impose more symmetries than the data actually possesses, the error begins to increase. In joint work with S. Trivedi (NeurIPS 2023), we quantify this tradeoff, which makes it possible to define the optimal amount of symmetry in learning models. I will give an introduction to classical learning-theory bounds and our extension of these ideas to the study of “partial/approximate equivariance”. In passing, I’ll describe some possible directions for working with partial symmetries in specific tasks.
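As a minimal illustration of the idea behind equivariant models (not the speaker's construction), the following sketch shows how averaging an arbitrary function over the group of 90-degree rotations produces a rotation-invariant function; the linear map `f` and all names here are hypothetical, chosen only for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic, non-invariant "feature extractor": a random linear functional.
W = rng.normal(size=(8, 8))

def f(x):
    return float(np.sum(W * x))

# Symmetrize f over the C4 rotation group (0, 90, 180, 270 degrees):
# averaging over the group orbit makes the output rotation-invariant.
def f_sym(x):
    return float(np.mean([f(np.rot90(x, k)) for k in range(4)]))

x = rng.normal(size=(8, 8))
print(np.isclose(f(np.rot90(x)), f(x)))          # plain f changes under rotation
print(np.isclose(f_sym(np.rot90(x)), f_sym(x)))  # symmetrized f does not
```

Imposing the full group in this way is what the abstract calls “identifying too much” when the data is only approximately symmetric; the talk concerns quantifying that tradeoff.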

Venue: Sala John Von Neumann, 7th floor, Beauchef 851 / Online via Zoom
Chair: Ricardo Freire