Conferences and supporting programme
Accelerating Embedded Inferencing
Machine learning algorithms are highly compute-intensive. Training can be performed in data centers, where racks of server-class processing power can be employed. Inferencing, however, often must be performed on edge nodes: embedded processors that are, in some cases, powered by weight-constrained batteries and housed in difficult-to-cool enclosures. Certain applications, such as real-time video image processing, may stress the capabilities of even the fastest embedded processors. One way to address this problem is to move parts of the inferencing algorithm into hardware accelerators, implemented either in FPGA programmable fabric or as an ASIC.

This session will explore the use of high-level synthesis (HLS) to create machine learning accelerators tailored to a specific implementation. HLS enables greater trade-offs between power, area, latency, and throughput, which are needed to meet demanding power and performance goals. HLS also allows implementation to be completed much more quickly than with traditional hardware design methodologies, which is essential in a field like machine learning, where algorithms are continually evolving. The session will also cover integrating the accelerators into an embedded system’s hardware and software.
---
Date: 27.02.2019
Time: 3:00 PM - 3:30 PM
Location: Conference Counter NCC Ost