Theoretical Foundations of Deep Learning (SPP2298)
In parallel with the impressive success of deep learning in real-world applications ranging from autonomous driving to gaming intelligence and healthcare, deep learning-based methods are now also making a strong impact in the sciences, replacing or complementing state-of-the-art classical model-based methods for solving mathematical problems such as inverse problems or partial differential equations. Despite these outstanding successes, however, most research on deep neural networks is empirically driven, and their theoretical-mathematical foundations are largely lacking. The main goal of this priority program is to develop a comprehensive theoretical foundation of deep learning. Research within the program will be structured along three complementary viewpoints:
- the statistical perspective, which views neural network training as a statistical learning problem and investigates expressivity, learning, optimization, and generalization,
- the application perspective, which focuses on security, robustness, interpretability, and fairness, and
- the mathematical-methodological perspective, which develops and theoretically analyzes novel deep learning-based approaches to solving inverse problems and partial differential equations.
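To make the statistical perspective concrete, the following minimal sketch (not part of the program text; all names, sizes, and hyperparameters are illustrative) casts neural network training as empirical risk minimization: a small one-hidden-layer network is fitted to noisy regression data by gradient descent on the mean squared error, the setting in which questions of expressivity, optimization, and generalization are typically posed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = sin(x) + noise (illustrative choice)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

# One-hidden-layer network f(x) = tanh(x W1 + b1) W2 + b2
W1 = rng.normal(scale=0.5, size=(1, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)       # hidden activations
    return H, H @ W2 + b2          # prediction

def mse(pred, y):
    # Empirical risk: mean squared error over the sample
    return float(np.mean((pred - y) ** 2))

lr = 0.05
_, pred = forward(X)
initial_loss = mse(pred, y)

for _ in range(500):
    H, pred = forward(X)
    # Gradient of the empirical risk via backpropagation
    g = 2 * (pred - y) / len(X)    # dL/dpred
    gW2 = H.T @ g
    gb2 = g.sum(axis=0)
    gH = (g @ W2.T) * (1 - H ** 2) # through tanh'
    gW1 = X.T @ gH
    gb1 = gH.sum(axis=0)
    # Full-batch gradient descent step
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
final_loss = mse(pred, y)
```

The gap between the empirical risk minimized here and the risk on unseen data is precisely the generalization question the statistical perspective investigates.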
The research questions to be addressed in this priority program are largely interdisciplinary and can only be solved by a joint effort of mathematics and computer science. Mathematical methods and concepts from all areas of mathematics are required, including algebraic geometry, analysis, probability theory, approximation theory, differential geometry, discrete mathematics, functional analysis, optimal control, optimization, and topology. Statistics and theoretical computer science also play a fundamental role. In this sense, methods from mathematics, statistics, and computer science form the core of this priority program.