Theoretical Foundations of Deep Learning (SPP 2289)

Sitemap

People
  Steering Committee
  Principal Investigators
  PhD Students/PostDocs

Projects
  Adaptive Neural Tensor Networks for parametric PDEs
  Assessment of Deep Learning through Meanfield Theory
  Combinatorial and implicit approaches to deep learning
  Curse-of-dimensionality-free nonlinear optimal feedback control with deep neural networks
  Deep assignment flows for structured data labeling: design, learning and prediction performance
  Deep-Learning Based Regularization of Inverse Problems
  Deep learning for non-local partial differential equations
  Deep neural networks overcome the curse of dimensionality in the numerical approximation of stochastic control problems and of semilinear Poisson equations
  Foundations of Supervised Deep Learning for Inverse Problems
  Globally Optimal Neural Network Training
  Implicit Bias and Low Complexity Networks (iLOCO)
  Multilevel Architectures and Algorithms in Deep Learning
  Multi-Phase Probabilistic Optimizers for Deep Learning
  Multiscale Dynamics of Neural Nets via Stochastic Graphops
  On the Convergence of Variational Deep Learning to Sums of Entropies
  Provable Robustness Certification of Graph Neural Networks
  Solving linear inverse problems with end-to-end neural networks: expressivity, generalization, and robustness
  Statistical Foundations of Unsupervised and Semi-supervised Deep Learning
  Structure-preserving deep neural networks to accelerate the solution of the Boltzmann equation
  The Data-dependency Gap: A New Problem in the Learning Theory of Convolutional Neural Networks
  Towards a Statistical Analysis of DNN Training Trajectories
  Towards everywhere reliable classification - A joint framework for adversarial robustness and out-of-distribution detection
  Understanding Invertible Neural Networks for Solving Inverse Problems

Events
  Virtual Kick-off Meeting

Contact