Date | Feb 03, 2023 |
---|---|
Speaker | 양홍석 (Hongseok Yang)
Dept. | KAIST |
Room | Zoom (online)
Time | 15:00-17:00 |
[Seoul National University, Department of Mathematical Sciences, 10-10 Intensive Lecture Series] Feb 3 (Fri), Feb 8 (Wed), Feb 10 (Fri), 15:00 - 17:00
Venue: Zoom lecture room
https://snu-ac-kr.zoom.us/j/99324881376?pwd=NXR2MGZPWGVwbEM4TXgzOGFUb1VRQT09
Meeting ID: 993 2488 1376
Password: 120745
Abstract: While deep learning has many remarkable success stories, finding a satisfactory mathematical explanation of why it is so effective is still considered an open challenge. One recent promising direction for tackling this challenge is to analyse the mathematical properties of neural networks in the limit where the widths of the networks' hidden layers go to infinity. Researchers have been able to prove highly non-trivial properties of such infinitely-wide neural networks: for example, gradient-based training achieves zero training error (so that it finds a global optimum), and under the typical random initialisation these infinitely-wide networks become so-called Gaussian processes, which are well-studied random objects in machine learning, statistics, and probability theory. These theoretical findings have also led to new algorithms based on so-called kernels, which sometimes outperform existing kernel-based algorithms.
The purpose of this lecture series is to go through these recent theoretical results on infinitely-wide neural networks. Our plan is to pick a few important results in this area and to study them in depth, so that participants can reuse the mathematical tools behind these results to analyse their own neural networks and training algorithms in the infinite-width limit.
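One of the results mentioned in the abstract, namely that a randomly initialised network becomes a Gaussian process in the infinite-width limit, can be illustrated numerically. The sketch below is not part of the lecture materials; the one-hidden-layer bias-free ReLU architecture, the weight variance, the sample sizes, and the test input are illustrative assumptions. It samples many independently initialised networks at a fixed input: the empirical output variance matches the closed-form infinite-width (NNGP) variance at every width, while the excess kurtosis drops towards zero as the width grows, showing the output distribution becoming Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed test input and variance of the i.i.d. Gaussian weights
# (illustrative choices; biases are omitted for simplicity).
x = np.array([1.0, -0.5, 2.0])
d = x.size
sigma_w2 = 1.0

def sample_outputs(width, n_samples=5000):
    """Sample n_samples independent networks f(x) = v . relu(W x / sqrt(d)) / sqrt(width),
    with W and v having i.i.d. N(0, sigma_w2) entries, and return their outputs at x."""
    W = rng.normal(0.0, np.sqrt(sigma_w2), size=(n_samples, width, d))
    v = rng.normal(0.0, np.sqrt(sigma_w2), size=(n_samples, width))
    pre = W @ x / np.sqrt(d)            # pre-activations, shape (n_samples, width)
    hidden = np.maximum(pre, 0.0)       # ReLU
    return (v * hidden).sum(axis=1) / np.sqrt(width)

# Closed-form infinite-width (NNGP) variance for a bias-free ReLU network:
#   K(x, x) = sigma_w2^2 * ||x||^2 / (2 d)
k_xx = sigma_w2**2 * (x @ x) / (2 * d)

for width in (10, 100, 1000):
    out = sample_outputs(width)
    excess_kurtosis = ((out - out.mean())**4).mean() / out.var()**2 - 3.0
    print(f"width={width:5d}  empirical var={out.var():.3f}  "
          f"NNGP var={k_xx:.3f}  excess kurtosis={excess_kurtosis:.3f}")
```

With these settings the variance stays near the NNGP value (about 0.875) at every width, because the variance of this architecture matches the kernel exactly; it is the higher moments, measured here by the excess kurtosis, that vanish as the width increases, which is the Gaussian-process behaviour discussed in the lectures. The full result, covered in the series, concerns the joint distribution over arbitrary finite sets of inputs and deep architectures, not just a single variance.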