Conferences and Supporting Program
Accelerating Next Generation Deep Learning Algorithms – How to Choose FPGA or GPU?
Deep learning algorithms play a vital role in extracting meaningful information from the enormous volumes of data collected every day. In some applications, the objective is to analyze and understand the data to identify trends (e.g. surveillance, data analysis, AI applications); in others, the goal is to take swift action based on the data (e.g. self-driving cars, smart Internet of Things devices, robotics/drones). For many of these applications, local processing near the data source is preferred over the cloud because of privacy or latency concerns, or limited communication bandwidth. At the local processing end (inference engines), however, there are often stringent constraints on energy consumption and cost, in addition to throughput and accuracy requirements. Furthermore, flexibility is often required so that the processing can be adapted to different applications or environments (e.g., updating the weights and model in the classifier). Deep learning often involves transforming the input data into a higher-dimensional space, which, together with programmable weights, increases data movement and consequently energy consumption. In this talk, the speaker will discuss how these challenges can be addressed at various levels of hardware design, ranging from GPU and FPGA architectures to choosing the algorithms and the appropriate hardware (ICs), GPU or FPGA, for efficient training or efficient inference in deep learning.
--- Date: 26.02.2019 Time: 16:00 - 16:30 Location: Conference Counter NCC Ost
Speaker
Dr. Severine Habert
Intel Corporation