Multiscale Training of Convolutional Neural Networks

Posted by 김수현
Type: Invited Lecture
Date: 2026-03-24, 16:00–17:00
Speaker: Eldad Haber (University of British Columbia)
Host: Prof. 홍영준

Convolutional Neural Networks (CNNs) are the backbone of many deep learning methods, but optimizing them remains computationally expensive. To address this, we explore multiscale training frameworks and mathematically identify key challenges, particularly when dealing with noisy inputs. Our analysis reveals that, in the presence of noise, the gradient of standard CNNs in multiscale training may fail to converge as the mesh size approaches 0, undermining the optimization process. This insight drives the development of Mesh-Free Convolutions (MFCs), which are independent of the input scale and avoid the pitfalls of traditional convolution kernels. We demonstrate that MFCs, with their robust gradient behavior, ensure convergence even with noisy inputs, enabling more efficient neural network optimization in multiscale settings. To validate the generality and effectiveness of our multiscale training approach, we show that (i) MFCs deliver substantial computational speedups without sacrificing performance in practice, and (ii) standard convolutions also benefit in practice from our multiscale training framework.
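The mesh-size issue described above can be illustrated with a small numerical experiment. The sketch below is not the talk's implementation; it only shows, under the standard interpretation of a convolution stencil as a discretized differential operator, why a mesh-scaled kernel (here a 1-D second-difference stencil carrying a 1/h² factor) blows up on noisy input as the grid is refined, while a fixed, mesh-independent kernel does not. The function names and the choice of stencil are illustrative assumptions.

```python
import numpy as np

# Second-difference stencil: as a discrete Laplacian it is scaled by
# 1/h^2, where h is the mesh size of the grid on [0, 1].
KERNEL = np.array([1.0, -2.0, 1.0])

def rms_response(n, mesh_scaled, seed=0):
    """RMS of the stencil applied to i.i.d. noise at resolution n.

    If mesh_scaled is True, the stencil carries the 1/h^2 factor of a
    discrete Laplacian; otherwise a fixed, mesh-independent kernel is
    used (a stand-in for the mesh-free idea, not the paper's MFC).
    """
    rng = np.random.default_rng(seed)   # fresh generator: deterministic per call
    h = 1.0 / n                         # mesh size
    noise = rng.standard_normal(n)      # noisy input at resolution n
    k = KERNEL / h**2 if mesh_scaled else KERNEL
    y = np.convolve(noise, k, mode="valid")
    return np.sqrt(np.mean(y**2))

if __name__ == "__main__":
    for n in (64, 256, 1024):
        print(f"n={n:5d}  scaled RMS={rms_response(n, True):12.1f}  "
              f"fixed RMS={rms_response(n, False):6.3f}")
```

On i.i.d. noise the fixed kernel's RMS stays near √6 at every resolution, while the 1/h²-scaled stencil's RMS grows like n², mirroring the non-convergent gradients the abstract attributes to standard convolutions under mesh refinement.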


Research Institute of Mathematics
Room 305, Building 129, College of Natural Sciences, Seoul National University, Daehak-dong, Gwanak-gu, Seoul
Tel. 02-880-6562 / Fax. 02-877-6541 / su305@snu.ac.kr

COPYRIGHT ⓒ Research Institute of Mathematics, College of Natural Sciences. ALL RIGHTS RESERVED.