An Interplay between Reinforcement Learning and the Theory of Hamilton--Jacobi Equation


Category: Financial Mathematics
Date: 2025-06-05, 14:00 ~ 15:00
Speaker: 김연응 (서울과학기술대학교)
Other: Financial Mathematics
Host professor: 박형빈



This talk presents three complementary perspectives on the interplay between the theory of Hamilton--Jacobi (HJ) equations and modern computational approaches to optimal control and reinforcement learning. First, we examine the stability and convergence properties of value functions arising from Lipschitz-constrained control problems, and interpret them through the lens of viscosity solutions to Hamilton--Jacobi--Bellman (HJB) equations, providing a theoretical foundation for continuous-time reinforcement learning. Second, we explore the eradication time problem in controlled epidemic models, where the minimum-time solution emerges as the viscosity solution to a static HJB equation. This structure naturally lends itself to a physics-informed neural network (PINN) framework, enabling mesh-free approximation of both the value function and optimal bang-bang control. Lastly, we introduce a DeepONet-based policy iteration method that integrates operator learning with the Hamilton--Jacobi formulation to solve high-dimensional control problems efficiently, even in the absence of discretization. Through these case studies, we illustrate how HJ theory serves as a unifying backbone that connects control-theoretic objectives with modern machine learning methodologies.
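As background for the second case study: in the standard theory of minimum-time problems (not specific to this talk), the eradication/arrival time $T(x)$ of a controlled system $\dot{x} = f(x,a)$, $a \in A$, is characterized as the viscosity solution of the static HJB equation
\[
\sup_{a \in A}\bigl\{\, -f(x,a)\cdot \nabla T(x) \,\bigr\} = 1 \quad \text{outside the target set}, \qquad T = 0 \ \text{on the target set}.
\]
A PINN treatment of such an equation typically minimizes the residual of the PDE at randomly sampled collocation points together with a boundary penalty, which is what makes the approximation mesh-free. The snippet below is a minimal, self-contained sketch of that idea for the simplest instance, the eikonal equation $|\nabla T| = 1$ obtained from $f(x,a) = a$, $|a| \le 1$; the network size, the domain $[-2,2]^2$, and the circular target set are illustrative assumptions, not details from the talk.

# Minimal PINN sketch for an eikonal-type static HJB: |grad T(x)| = 1
# outside a target disk, with T = 0 on the target boundary.
# Illustrative assumptions only; not the model or code from the talk.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small fully connected network approximating the minimum-time function T(x).
net = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def hjb_residual(x):
    # Residual of the static HJB  |grad T(x)| - 1 = 0  at collocation points x.
    x = x.requires_grad_(True)
    T = net(x)
    grad_T = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    return grad_T.norm(dim=1) - 1.0

for step in range(5000):
    # Interior collocation points in [-2, 2]^2, outside the target disk.
    x_in = 4.0 * torch.rand(256, 2) - 2.0
    x_in = x_in[x_in.norm(dim=1) > 0.5]

    # Boundary points on the target circle of radius 0.5, where T = 0.
    theta = 2.0 * torch.pi * torch.rand(128, 1)
    x_bd = 0.5 * torch.cat([torch.cos(theta), torch.sin(theta)], dim=1)

    # PDE residual loss plus boundary-condition penalty.
    loss = hjb_residual(x_in).pow(2).mean() + net(x_bd).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

In this toy setting the learned T(x) should approximate the distance to the target disk; for the controlled epidemic model discussed in the talk, the same residual-plus-boundary loss structure applies with the model-specific dynamics f(x,a) in place of the eikonal simplification.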



