Operator Learning for PDEs: Function Space Theory Meets Machine Learning

Margaret Trautner, California Institute of Technology


Bringing the power of machine learning to bear on scientific problems requires a different perspective from that of the usual finite-dimensional feed-forward neural network, which maps an input vector to an output vector. In particular, equations describing physical phenomena are often partial differential equations (PDEs) whose inputs and solutions take the form of functions. A naive approach discretizes these functions and uses the resulting finite-dimensional vectors as input and output data for a neural network. However, it is desirable to have a model that is discretization-invariant: any choice of discretization yields a prediction with low error. From a mathematical perspective, functions live in function spaces, which are infinite-dimensional analogs of finite-dimensional vector spaces. In this talk, the fundamental principles of function spaces are presented, and an argument is made for why this mathematical perspective is natural for scientific machine learning. Some illuminating examples are discussed, including learning constitutive models in solid mechanics and testing the discretization invariance of models.
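
As a minimal sketch of the discretization-invariance idea (an illustration only, not the speaker's method), the snippet below applies a single Fourier-multiplier layer, the kind of building block used in Fourier-style neural operators, to samples of the same function on two different grids. Because the layer's parameters act on Fourier modes rather than grid points, the same weights can be applied at any resolution; every name and parameter value here is invented for the example.

```python
import numpy as np

def spectral_layer(u, weights):
    """Apply one Fourier-multiplier layer: transform samples of u to
    Fourier space, scale a fixed number of low-frequency modes by the
    learned weights, truncate the rest, and transform back. The
    parameters live on modes, not grid points, so the same `weights`
    work for samples of u on any grid."""
    n = len(u)
    u_hat = np.fft.rfft(u) / n          # normalized Fourier coefficients
    k = len(weights)
    u_hat[:k] *= weights                # act only on the k lowest modes
    u_hat[k:] = 0.0                     # truncate higher frequencies
    return np.fft.irfft(u_hat * n, n)

# Hypothetical invariance check: evaluate the same (randomly
# initialized) layer on two grid resolutions and compare outputs.
rng = np.random.default_rng(0)
weights = rng.normal(size=8) + 1j * rng.normal(size=8)

for n in (64, 256):
    x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    u = np.sin(x) + 0.5 * np.cos(3 * x)   # same underlying function
    v = spectral_layer(u, weights)
    # Report the output's low-order Fourier content; a discretization-
    # invariant model gives (approximately) the same answer for each n.
    print(n, np.round(np.fft.rfft(v)[:4] / n, 4))
```

Running the sketch prints essentially identical low-mode coefficients for n = 64 and n = 256: any sufficiently fine choice of discretization yields the same prediction, which is the invariance property described in the abstract.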