Conferences and supporting programme
Implementing Monocular Visual SLAM for Augmented Reality in Low-Power Embedded Vision Systems
Simultaneous localization and mapping (SLAM) is a computer vision technique that gathers visual data from the physical world to build 3D maps of the environment while tracking the sensor's position within them. Monocular visual SLAM relies on a single camera, like the one in a mobile phone. SLAM executes computationally intensive tasks, such as feature extraction to identify landmarks, feature matching to track those landmarks across frames and estimate camera motion, and loop detection and closure to correct accumulated drift. Implementing these tasks on low-power devices like mobile phones requires computationally efficient and memory-optimized solutions that reduce power consumption while keeping performance and latency at target levels. This presentation will explain the challenges AR designers face when implementing SLAM in AR applications, offer solutions to reduce system power consumption, and provide a case study describing how to combine deep learning with evolving SLAM techniques in low-power systems.
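To give a flavour of why feature matching dominates the compute budget, the following is a minimal, illustrative sketch of brute-force matching of binary descriptors (such as the ORB/BRIEF descriptors common in visual SLAM) using Hamming distance with a ratio test. The 16-bit toy descriptors and the 0.8 threshold are assumptions for illustration, not values from any real AR pipeline:

```python
# Illustrative sketch: brute-force matching of binary feature descriptors
# (e.g. ORB/BRIEF) between two frames using Hamming distance plus a
# ratio test to reject ambiguous matches. Toy values, not a real pipeline.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(query, train, ratio=0.8):
    """Return (query_idx, train_idx) pairs whose best match is clearly
    better than the second-best (Lowe-style ratio test)."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

# Toy 16-bit descriptors extracted from two consecutive frames.
frame1 = [0b1010101010101010, 0b1111000011110000, 0b0000111100001111]
frame2 = [0b1010101010101011,  # near-duplicate of frame1[0]
          0b1111000011110001,  # near-duplicate of frame1[1]
          0b0101010101010101]  # unrelated descriptor

print(match_descriptors(frame1, frame2))  # → [(0, 0), (1, 1)]
```

Even this toy version is O(n²) in the number of descriptors per frame pair, which is why low-power implementations rely on approximate search structures, reduced descriptor counts, or hardware acceleration.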
--- Date: 26.02.2019 Time: 4:30 PM - 5:00 PM Location: Conference Counter NCC Ost