Overview of the presentation
The world is on the edge of something big. It’s artificial intelligence, and it’s transforming everyday life. In healthcare, AI is helping doctors make more accurate diagnoses. In automotive, AI is powering driver-assist features and enhancing safety in cars. In the industrial space, AI is used in robotics and vision systems. And for embedded systems, AI is changing how chips are designed, tested, and debugged.
As exciting as these innovations are, humans are only just beginning to explore what’s possible with AI. To unlock its full potential and make it pervasive, an adaptive and heterogeneous approach is a must: one that combines CPUs, GPUs, and programmable logic with open software, distributing workloads from cloud to edge and endpoints. These solutions also need to be adaptable, providing the ability to update or reconfigure AI hardware with the latest models – even years after deployment.
Most discrete devices today are not optimized for power efficiency. But since 2020, improvements in processor efficiency and the introduction of AI Engines on adaptive SoCs have been helping to deliver power savings for AI. During this keynote presentation, attendees will learn about:
- The growing compute and power requirements needed to drive AI
- Techniques to drive power-efficient AI with heterogeneous computing solutions
- How to achieve compute efficiency through workload optimization with adaptive computing