Quantizing Convolutional Neural Networks for Low-Power High-Throughput Inference Engines


Posted by: 김동훈
Category: Numerical Analysis and Applied Mathematics Seminar
Date: 2018-05-30, 15:00–18:00
Speaker: Sean O. Settle (Xilinx, Inc.)
Host professor: 신동우
Lecture time: 4:00–5:00 PM

Abstract: Deep learning as a means of inference has proliferated thanks to its versatility and its ability to approach or exceed human-level accuracy. These computational models have seemingly insatiable appetites for computational resources, not only during training but also when deployed at scales ranging from data centers all the way down to embedded devices. As such, increasing consideration is being given to maximizing computational efficiency under limited hardware and energy budgets, and inference with reduced precision has emerged as a viable alternative to the IEEE 754 Standard for Floating-Point Arithmetic. We propose a quantization scheme that allows inference to be carried out using arithmetic that is fundamentally more efficient than even half-precision floating-point. Our quantization procedure is significant in that we determine the quantization scheme parameters by calibrating against the reference floating-point model using a single inference batch, rather than by (re)training, and we achieve end-to-end post-quantization accuracies comparable to the reference model.
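The abstract describes determining quantization parameters by calibrating against the reference floating-point model with a single inference batch, rather than retraining. The talk's exact scheme is not reproduced here; as a minimal sketch under assumed conventions (max-absolute-value calibration, symmetric per-tensor int8, all function names hypothetical), post-training calibration might look like:

```python
import numpy as np

def calibrate_scale(activations, num_bits=8):
    """Pick a per-tensor scale from a single calibration batch (max-abs heuristic)."""
    max_abs = float(np.max(np.abs(activations)))
    qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
    return max_abs / qmax if max_abs > 0 else 1.0

def quantize(x, scale, num_bits=8):
    """Round to the nearest integer grid point and clip to the int8 range."""
    qmax = 2 ** (num_bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q.astype(np.int8)

def dequantize(q, scale):
    """Map integer codes back to floats for accuracy comparison."""
    return q.astype(np.float32) * scale

# A random batch stands in for one inference batch through the reference model.
rng = np.random.default_rng(0)
batch = rng.normal(0.0, 1.0, size=(32, 64)).astype(np.float32)

scale = calibrate_scale(batch)          # calibrate once, no (re)training
q = quantize(batch, scale)              # int8 codes used at inference time
recon = dequantize(q, scale)
err = float(np.max(np.abs(batch - recon)))
```

Because the scale is chosen so the batch's extreme value lands exactly on the int8 grid, no value is clipped here and the worst-case reconstruction error is half a quantization step (`scale / 2`); real calibration schemes trade a little clipping for a finer grid.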


Research Institute of Mathematics
Room 305, Building 129, College of Natural Sciences, Seoul National University, Daehak-dong, Gwanak-gu, Seoul
Tel. 02-880-6562 / Fax. 02-877-6541 / su305@snu.ac.kr

COPYRIGHT © Research Institute of Mathematics, College of Natural Sciences. ALL RIGHTS RESERVED.