When: 19/11 at 12:15
Who: Giovanni Giorgis
What: Optimal Function Approximation with Deep Neural Networks: A Mathematical Perspective
Where: ETH HG G5
Deep neural networks have become state-of-the-art tools across numerous fields, demonstrating exceptional performance in tasks ranging from image classification to signal processing. But what drives their success? This presentation explores the mathematical principles underpinning their effectiveness, focusing on Kolmogorov-Donoho rate-distortion theory. Through this framework, we'll see why neural networks achieve optimal approximation rates for a broad range of function classes, and we'll extend these insights to explain the success of other architectures, such as transformers and recurrent neural networks, in their respective domains.
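For readers unfamiliar with the framework, here is a minimal sketch of the quantity at its core, the optimal exponent of a function class. The notation (the minimax code length L(ε, C) and the exponent γ*) is assumed from the rate-distortion literature, not taken from the talk itself.

```latex
% Sketch of the Kolmogorov-Donoho optimal exponent (notation assumed).
% L(eps, C) denotes the minimal number of bits any encoder/decoder pair
% needs to represent every f in the function class C to within error eps.
\gamma^*(\mathcal{C})
  = \sup\left\{ \gamma > 0 \;:\;
      L(\varepsilon, \mathcal{C}) = O\!\left(\varepsilon^{-1/\gamma}\right)
      \ \text{as } \varepsilon \to 0 \right\}
```

Roughly speaking, a family of networks is then called optimal for a class C if it reaches approximation error ε with on the order of ε^(-1/γ*) nonzero weights, so that no other method of representation can do fundamentally better.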
Date | Speaker | Title |
---|---|---|
08/10 | Noel Friedrich | Oops, I accidentally made a Bitcoin Miner (How Bitcoin works) |
15/10 | Advait Dhingra | Studying the building blocks of nature in the Large Hadron Collider |
22/10 | Nils Assmus | Why your Rolex is crap - Frequency Metrology in the 21st Century |
29/10 | | |
05/11 | Luis Wirth | Lean4 and the Curry-Howard Isomorphism: The Deep Connection Between Logic and Programming Through Type Theory |
12/11 | Sergey Ermakov | Magnetic Reconnection - From basic plasma physics to novel computational methods |
19/11 | Giovanni Giorgis | Optimal Function Approximation with Deep Neural Networks: A Mathematical Perspective |
26/11 | Aparna Jeyakumar | A Leisurely Introduction to Simplicial Sets |
03/12 | Anna Bickel | What medical physics is, and what we as physicists can do in the field of medicine |
10/12 | Andrea Piccirilli | Deformation Quantization |
17/12 | Yannis Müller | TBC |