

Last update: 11.05.2026
Program
The lectures will be prepared with a broad multidisciplinary audience in mind,
and each school will cover a broad scope, ranging from modeling to scientific
computing. The four main speakers will each deliver a series of three
70-minute lectures. Ample time is allocated within the school to promote
informal scientific discussion among the participants.
Speakers of the 2027 school:
Plenary speakers
Matthew Colbrook
University of Cambridge
DAMTP, Centre for Mathematical Sciences
Wilberforce Road
Cambridge CB3 0WA, UK
Spectra Beyond Matrices: Reliable Computation in Infinite Dimensions
Spectral problems lie at the heart of partial differential equations, dynamical systems, quantum mechanics, control, and data-driven modelling. Yet many familiar intuitions from finite matrices break down for infinite-dimensional operators: standard discretisations may introduce spurious spectral points, miss parts of the spectrum, or obscure continuous spectral behaviour. These lectures will give an accessible introduction to the foundations and algorithms of reliable infinite-dimensional spectral computation. The first lecture will discuss how spectra of infinite-dimensional operators can be computed, and why naive finite-dimensional truncations may fail. Topics will include spectral pollution, pseudospectra, spectral invisibility, resolvent-based algorithms, finite sections, and verified enclosures. The second lecture will turn to spectral measures and functional calculus, explaining how they encode evolution, long-time behaviour, and computable approximations of functions of operators. The third lecture will focus on Koopman operators for nonlinear dynamical systems, including the computation of Koopman spectra and spectral measures, data-driven approximation, forecasting, and the limits of what can be inferred from trajectory data. The aim is to provide participants with a conceptual and practical toolkit for turning infinite-dimensional spectral questions into finite computations while retaining mathematical reliability. No specialised background beyond linear algebra, calculus, differential equations, and basic analysis will be assumed.
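As a concrete taste of how naive finite-dimensional truncation can fail, consider the bilateral shift operator on ℓ²(ℤ): it is unitary, so its spectrum is the whole unit circle, yet every finite section is nilpotent and therefore has spectrum {0}. The following minimal numpy sketch of this classic example is illustrative only (the operator choice and truncation size are ours, not taken from the lectures):

```python
import numpy as np

# Bilateral shift on l^2(Z): unitary, so its spectrum is the unit circle.
# Its N x N finite section has ones on the subdiagonal and is nilpotent.
N = 50
T = np.diag(np.ones(N - 1), -1)

# T^N = 0, so every eigenvalue of the finite section is 0 -- a point
# that is NOT in the operator's spectrum.  The truncation pollutes
# (spurious eigenvalue 0) and is spectrally blind to the entire
# unit circle at once.
assert not np.linalg.matrix_power(T, N).any()

# Any unit-circle point z lies in the true spectrum, yet sits at
# distance 1 from the finite-section spectrum {0}.
z = np.exp(1j * 0.3)
print("distance from z to truncated spectrum:", abs(z))
```

Resolvent-based algorithms and verified enclosures, as discussed in the lectures, are designed precisely to avoid this kind of mismatch between the operator and its truncations.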
Alex Townsend
Cornell University
Malott Hall 535
Ithaca, NY 14853, USA
A Mathematical Guide to Operator Learning
Operator learning is an emerging research topic in numerical analysis, scientific computing, and machine learning. Its goal is to learn maps between infinite-dimensional objects, such as the solution operator that takes a forcing term, coefficient field, initial condition, or geometry to the corresponding solution of a differential equation. These lectures will give a mathematical introduction to operator learning. The first lecture will introduce operators between function spaces, solution maps for differential equations, and the distinction between learning functions and learning operators. The second lecture will survey the main practical approaches, including DeepONets, Fourier neural operators, and graph-based neural operators, with an emphasis on the most important applications. The third lecture will discuss the theoretical foundations of operator learning, and why the entire framework is on shaky ground. We will discuss what operator learning is, how it connects to classical ideas in numerical analysis, where current methods succeed, and which theoretical questions remain open. No specialized knowledge of deep learning will be assumed beyond basic familiarity with linear algebra, calculus, and differential equations.
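To make the idea of "learning a map between infinite-dimensional objects" concrete, here is a minimal numpy sketch (our illustration, not one of the methods covered in the lectures): the solution operator of a discretised 1D Poisson problem is linear, so it can be recovered from forcing/solution training pairs by plain least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretise -u'' = f on (0,1) with u(0) = u(1) = 0, n interior points.
n = 32
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
G_true = np.linalg.inv(A)        # discrete solution operator f -> u

# Training data: random forcings f and their solutions u = G_true f.
m = 200
F = rng.standard_normal((m, n))
U = F @ G_true.T

# "Learn" the (linear) solution operator by least squares: solve F X = U.
G_learned, *_ = np.linalg.lstsq(F, U, rcond=None)
err = np.linalg.norm(G_learned.T - G_true) / np.linalg.norm(G_true)
print("relative error of learned operator:", err)   # small
```

Methods such as DeepONets and Fourier neural operators replace this least-squares fit with a neural parametrisation, which is what allows nonlinear solution maps to be learned as well.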