Hardware Implementation of Deep Neural Networks – A Comparison Between FPGA and GPU
Research has shown that implementations of deep neural networks on FPGAs are promising in terms of accuracy and speed. This project investigated the application possibilities and limitations of Binary Neural Networks (BNNs). The results obtained are compared with state-of-the-art implementations of deep neural networks on a GPU. For this purpose, different networks were implemented and tested on different applications, including pretrained networks for digit recognition (MNIST) and object classification (CIFAR-10), as well as a separate implementation for autonomous driving. The FINN framework presented by Gambardella (2016) served as the basis for the hardware implementation. In parallel, the networks were also implemented in CUDA to compare runtime and accuracy. For example, the FPGA is more than twice as fast on the CIFAR-10 network at similar accuracy. However, when regression problems are converted into classification problems, the accuracy drops to 50% in the worst case. The platforms used were a ZYNQ-7000 FPGA and a computer with a quad-core i7-6700K running at 4.0 GHz, accelerated by a GTX 1080 GPU.
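The regression-to-classification conversion mentioned above is typically done by binning the continuous target (e.g. a steering angle for autonomous driving) into discrete classes that a classification-only network such as a BNN can predict. The sketch below illustrates the idea; the bin count, the normalised angle range, and the function names are illustrative assumptions, not details from the talk.

```python
# Illustrative sketch (not from the talk): quantising a continuous
# regression target into discrete classes for a classifier.
NUM_CLASSES = 16                    # assumed number of discrete bins
ANGLE_MIN, ANGLE_MAX = -1.0, 1.0    # assumed normalised steering range

def angle_to_class(angle):
    """Quantise a continuous angle into one of NUM_CLASSES bins."""
    a = min(max(angle, ANGLE_MIN), ANGLE_MAX)          # clamp to range
    frac = (a - ANGLE_MIN) / (ANGLE_MAX - ANGLE_MIN)   # map to [0, 1]
    return min(int(frac * NUM_CLASSES), NUM_CLASSES - 1)

def class_to_angle(cls):
    """Map a predicted class back to the centre of its bin."""
    width = (ANGLE_MAX - ANGLE_MIN) / NUM_CLASSES
    return ANGLE_MIN + (cls + 0.5) * width

angle = 0.3
cls = angle_to_class(angle)          # -> class 10
recovered = class_to_angle(cls)      # -> 0.3125
# The quantisation error is bounded by half the bin width,
# which is one source of the accuracy loss noted in the abstract.
assert abs(recovered - angle) <= (ANGLE_MAX - ANGLE_MIN) / (2 * NUM_CLASSES)
```

The coarser the binning, the cheaper the classifier head but the larger the quantisation error, which helps explain why accuracy can degrade substantially in the worst case.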
Date: 26.02.2019 | Time: 16:30 - 17:00 | Location: Conference Counter NCC Ost