
26 - 28 February 2019 // Nuremberg, Germany

Conferences and supporting programme

Session 10 I - Autonomous Systems I / Architectures & Applications

Implementing Artificial Neural Networks, Unleashing New Possibilities of Edge Intelligence (Lecture language: English)

The rapidly growing area of artificial intelligence (AI), neural networks (NNs) and machine learning offers tremendous promise as developers attempt to bring higher levels of intelligence to their systems. NNs are a paradigm that engineers use to implement systems that can continuously learn and infer based on that learning. The computational requirements for such a system vary widely depending on the application. Traditionally, deep learning techniques are used in the data center and typically rely on large, high-performance GPUs to meet those demanding computational requirements.

Designers extending the advantages of AI to the network edge do not have the luxury of power-hungry GPUs. At the edge, the floating-point deep learning techniques used in the data center are impractical. Designers must develop computationally efficient solutions that not only meet accuracy targets but also comply with the power, size and cost constraints of the consumer market.

This session will highlight “on-device artificial intelligence (AI),” which uses NN models to evaluate new incoming data and infer intelligence from it. On-device AI improves user privacy, since large amounts of personal data are not sent to the cloud; processing occurs locally, on-premise. We will evaluate technologies such as FPGAs that make edge computing possible and are needed to maximize parallel computing, and explain the intelligence that these low-power solutions bring to battery-powered applications. We will also discuss how building AI into an FPGA running an open-source RISC-V processor with accelerators not only cuts power consumption but also accelerates response time, while keeping processing local and improving security and privacy.

By the end of this session, designers will learn how to bring the advantages of AI, neural networks and machine learning to resource-constrained, power-optimized network edge devices.
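The session names no specific code, but the core idea above — replacing data-center floating-point arithmetic with computationally efficient integer math at the edge — can be illustrated with a minimal, hypothetical sketch of int8 quantized inference. The scaling scheme and values here are illustrative assumptions, not the speaker's implementation:

```python
# Hypothetical sketch of fixed-point (int8) inference, the kind of
# computation an FPGA-based edge accelerator performs instead of
# floating-point math. All names and scale values are illustrative.

def quantize(values, scale):
    """Map float values to int8 with a per-tensor scale (assumed scheme)."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dot(xq, wq):
    """Integer-only multiply-accumulate, as a hardware MAC array would do."""
    return sum(x * w for x, w in zip(xq, wq))

# Floating-point reference result for one neuron
x = [0.5, -1.25, 2.0]   # input activations
w = [1.0, 0.75, -0.5]   # weights
ref = sum(a * b for a, b in zip(x, w))  # -1.4375

# Quantized path: all multiplies are int8, one float rescale at the end
sx, sw = 0.02, 0.01
acc = int8_dot(quantize(x, sx), quantize(w, sw))
approx = acc * sx * sw  # close to the float reference
```

The heavy inner loop (`int8_dot`) touches only integers, which is what lets small FPGAs and low-power accelerators run NN inference within edge power budgets; the accuracy cost is the small quantization error visible in `approx` versus `ref`.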

--- Date: 27.02.2019 Time: 9:30 AM - 10:00 AM Location: Conference Counter NCC Ost

Speakers

Hussein Osman

Lattice Semiconductor
