Conferences and supporting programme
Accelerating Neural Networks for Autonomous Systems via FPGAs
The performance and accuracy of Convolutional Neural Networks (CNNs) for visual recognition have reached the point where researchers generally consider them more accurate than traditional algorithmic approaches. Initial CNNs were implemented on general-purpose computers using floating-point operations. However, general-purpose processors, while they contain floating-point units, are not optimized for the massive number of floating-point operations CNNs require. As a result, accelerated implementations of neural networks have moved to integer-based implementations, typically on GPUs, ASICs, or FPGAs. FPGAs provide the ability to generate neural networks optimized for application needs without an ASIC spin. Compared to GPUs, FPGAs also have the unique ability to be configured for reduced integer precision, all the way down to a Binary Neural Network (BNN). Reduced precision enables lower logic utilization, less table memory, increased performance, and lower power consumption compared to traditional floating-point and integer implementations.

In this session we will examine implementations of a BNN on an FPGA demonstrating four orders of magnitude greater performance than a software implementation on an embedded processor. We will start with the basic concepts of Convolutional Neural Networks. Next, we will examine why FPGAs provide the necessary flexibility to accommodate network precision as well as varying numbers of neurons and layers. We will then examine multiple BNN implementations inside FPGAs, showing high accuracy and dramatic acceleration compared with general-purpose embedded processors. Finally, we will provide a detailed example of a traffic sign recognition system including real-time camera input, sign identification, and recognition.
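The efficiency claim behind BNNs can be made concrete: when weights and activations are constrained to ±1, a multiply-accumulate collapses into an XNOR followed by a population count, which maps naturally onto FPGA lookup tables. The sketch below (names and the sign-based binarization are illustrative assumptions, not taken from the talk) shows this equivalence for a single dot product:

```python
# Illustrative sketch: in a BNN, a ±1 dot product equals
# XNOR-popcount on packed bit vectors (this mapping is why
# binary precision is so cheap in FPGA logic).

def binarize(values):
    """Map real values to {+1, -1} by sign (assumed scheme)."""
    return [1 if v >= 0 else -1 for v in values]

def pack_bits(signs):
    """Encode +1 as a 1-bit and -1 as a 0-bit in an integer word."""
    word = 0
    for i, s in enumerate(signs):
        if s == 1:
            word |= 1 << i
    return word

def bnn_dot(a_word, w_word, n):
    """XNOR-popcount dot product of two n-bit sign vectors."""
    xnor = ~(a_word ^ w_word) & ((1 << n) - 1)  # 1 where signs agree
    matches = bin(xnor).count("1")
    return 2 * matches - n  # (#agreements) - (#disagreements)

acts = binarize([0.3, -1.2, 0.7, 0.1])   # -> [+1, -1, +1, +1]
wts = binarize([-0.5, -0.9, 0.4, 0.2])   # -> [-1, -1, +1, +1]
n = len(acts)
fast = bnn_dot(pack_bits(acts), pack_bits(wts), n)
ref = sum(a * w for a, w in zip(acts, wts))
# fast == ref == 2
```

In hardware, the XNOR and popcount cost a handful of LUTs per word instead of a full multiplier, which is the source of the logic, memory, and power savings the abstract describes.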
--- Date: 28.02.2018 Time: 4:00 PM - 4:30 PM Location: Conference Counter NCC Ost