26 - 28 February 2019 // Nuremberg, Germany

Session 30 - Multicore Systems

Running Machine Learning Optimally on Heterogeneous, Low-Power Platforms

Language of presentation: English

Neural network frameworks such as Caffe and TensorFlow have revolutionized machine learning and computer vision on desktop PCs and on servers in the cloud, and are poised to do the same at the edge. But running these frameworks optimally on low-power devices presents one of the biggest challenges yet for developers. To help, modern system-on-chips offer a variety of processor core types – CPUs, GPUs, accelerators, DSPs, etc. – each suited to different parts of typical machine learning pipelines. But mapping these frameworks to run seamlessly across these cores, whilst minimizing power-sapping operations such as memory copies, can be complex and time-consuming to implement. Optimizing for one platform is often challenge enough, but with such a huge variety of potential target platforms, the prospect of optimizing for each one limits the feasibility of write-once applications that run optimally across multiple devices.

This talk will look at some of the tools and techniques available today that aim to take some of the pain away, including the recently announced Project Trilium: Arm's new suite of machine learning IP with the Arm NN software platform. It will examine how key middleware can act as a pinch-point, interfacing with multiple high-level machine learning frameworks and providing the developer with an optimal route to these multiple processor cores. The talk will also show how optimization of this middleware can reduce the developer's problem when distributing machine learning workloads. This will be illustrated with examples, highlighting some of the work Arm is doing to enable machine learning wherever compute happens.

Finally, the talk will look ahead to a time when these frameworks can be used to automate the choice between cores and, ultimately, do this dynamically, making decisions based on current load and power before scheduling nodes of the pipeline in one direction or the other.
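The middleware idea described above — one layer that accepts graphs from multiple frameworks and routes each pipeline node to a suitable core — can be sketched in miniature. This is an illustrative model only, assuming a hypothetical ordered backend-preference list with per-operator support tables; the names below are invented and are not the Arm NN API.

```python
# Illustrative sketch: map pipeline nodes onto heterogeneous cores using an
# ordered backend-preference list with fallback. All names are hypothetical.

# Hypothetical table of which backends can execute each operator type.
SUPPORTED = {
    "conv2d":  {"gpu", "npu", "cpu"},
    "softmax": {"cpu", "gpu"},
    "argmax":  {"cpu"},
}

def assign_backends(pipeline, preference):
    """For each node, pick the first backend in the preference order
    that supports it; fail loudly if no backend can run the node."""
    placement = {}
    for node in pipeline:
        for backend in preference:
            if backend in SUPPORTED[node]:
                placement[node] = backend
                break
        else:
            raise ValueError(f"no backend supports {node}")
    return placement

# The developer states a preference once; the middleware handles the mapping.
print(assign_backends(["conv2d", "softmax", "argmax"], ["npu", "gpu", "cpu"]))
# -> {'conv2d': 'npu', 'softmax': 'gpu', 'argmax': 'cpu'}
```

The point of the sketch is the pinch-point shape: frameworks above only need to reach this one mapping layer, and each new core type only needs an entry in the support table, rather than every framework targeting every core directly.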
What will the implication to the world of machine learning be once these tools and frameworks have evolved, enabling this exciting set of new use cases across embedded devices of all shapes and sizes?
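The dynamic scheduling the talk anticipates — choosing a core per node from current load and power — could look something like the toy cost model below. The latency and power figures and the scoring policy are invented for illustration; a real scheduler would measure these at runtime.

```python
# Illustrative sketch: per-node backend choice from a simple cost model of
# current load and a power budget. All numbers and names are hypothetical.

# Hypothetical per-backend cost estimates: (latency_ms, power_mw) per node type.
COSTS = {
    "conv2d":  {"cpu": (40, 300), "gpu": (8, 900)},
    "softmax": {"cpu": (2, 100),  "gpu": (3, 400)},
}

def schedule_node(node, current_load, power_budget_mw):
    """Among backends whose power estimate fits the remaining budget,
    choose the one minimising latency scaled by its current load."""
    best, best_score = None, float("inf")
    for backend, (latency_ms, power_mw) in COSTS[node].items():
        if power_mw > power_budget_mw:
            continue  # this backend would exceed the power budget
        score = latency_ms * (1.0 + current_load.get(backend, 0.0))
        if score < best_score:
            best, best_score = backend, score
    return best

# With headroom, a half-loaded GPU still wins; under a tight power budget,
# the same node falls back to the CPU.
print(schedule_node("conv2d", {"gpu": 0.5}, power_budget_mw=1000))  # -> gpu
print(schedule_node("conv2d", {"gpu": 0.5}, power_budget_mw=500))   # -> cpu
```

Making this decision per node, per invocation, is what distinguishes the dynamic scheduling envisaged here from the static, build-time placement most deployments use today.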

Date: 01.03.2018 | Time: 3:30 PM - 4:00 PM | Location: Conference Counter NCC Ost

Speakers

Robert Elliott

Arm Limited
