High-Resolution Multi-Camera Tool to Develop an Autonomous Vision System Solution
Embedded vision has evolved from passive video-capture devices to fully autonomous vision systems. Self-driving cars, drones, and autonomous guided robots require real-time parallel processing, low latency, and in some cases low power consumption. Multiple camera modules provide a surround view and sensor fusion improves the overall vision system, while artificial intelligence and machine learning promise tremendous improvements in recognition and learning tasks. Building autonomous vision systems is therefore increasingly challenging, requiring multidisciplinary expertise in optics, image sensors, computer vision, and deep learning; selecting the right development platform and design methodology is crucial for a successful implementation. Avnet offers embedded vision solutions bundled with tools and reference designs to minimize development time in creating autonomous vision systems. This paper describes a multi-camera development platform for autonomous vision systems that supports six camera modules at up to 4K UHD resolution. The core of the solution is a Xilinx Zynq® UltraScale+™ MPSoC combining a 64-bit processing system with programmable logic. By running traditional software tasks on the processing system's quad-core ARM® Cortex-A53 and coupling them with hardware-accelerated functions executing in programmable logic, system designers can achieve performance gains orders of magnitude higher than traditional software-only computer vision systems. A design methodology based on the Xilinx reVISION stack is presented, with performance benchmarks for hardware-accelerated OpenCV algorithms commonly used in ADAS.
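To make the software/hardware partitioning concrete, the sketch below shows the kind of pixel-level OpenCV-style kernel (a 3x3 horizontal Sobel edge filter, a common ADAS building block) that would run as plain software on the Cortex-A53 and is a typical candidate for offloading to programmable logic via the reVISION stack. This is an illustrative pure-Python reference, not the reVISION or xfOpenCV API.

```python
# Hypothetical reference implementation of a 3x3 horizontal Sobel filter,
# the kind of per-pixel stencil operation that hardware acceleration in
# programmable logic can parallelize. Pure Python for clarity, not speed.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def sobel_x(image):
    """Apply a horizontal Sobel filter to a 2-D grayscale image
    (list of lists); border pixels are left at zero."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            # 3x3 stencil: each output pixel depends only on its
            # neighborhood, which is what makes the loop body easy
            # to pipeline and replicate in programmable logic.
            for ky in range(3):
                for kx in range(3):
                    acc += SOBEL_X[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = abs(acc)
    return out

# Synthetic test frame with a vertical edge: left half dark, right half bright.
img = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
edges = sobel_x(img)
print(edges[2])  # strong response at the dark-to-bright transition
```

In a reVISION flow, the same stencil would be expressed with the accelerated OpenCV function library and synthesized into the programmable logic, while the Cortex-A53 handles frame management and the rest of the application.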
---
Date: 28.02.2018 | Time: 2:00 PM - 2:30 PM | Location: Conference Counter NCC Ost