Conferences and supporting programme
Understanding the Safe Move to Intended Functionality in Autonomy
Autonomous driving and mobility services are massively disrupting the automotive market. These trends involve big numbers, from lines of software code to development costs to projected market size. Also looming is the question of liability, particularly in the hand-off from human to machine and AI control of vehicles. That hand-off hinges on advances in AI and neural networks, which have enabled powerful computer vision applications. AI and machine learning are fundamentally about the rise of non-determinism, where the same input (vehicle sensor data) can lead to different probable outputs (such as driving behavior). The new era of autonomous systems that can learn and improve across a range of metrics, including safety, will ultimately be a boon for society. In the near term, however, the rise of non-determinism is introducing new challenges to traditional notions of verification, validation, quality, performance, and the definition of 'safe enough', in the engineering community and beyond. So, increasingly, assuring the safety of the intended functionality means getting a handle on all the subsystems that together comprise a safe self-driving car. This paper will discuss the rise of the AI 'driver' as a key to the eventual broad deployment of Level 4/5 autonomous vehicles. Also briefly covered: a survey of the current regulatory and legal environment for autonomous vehicles, and the relationship between functional safety, cyber security, and artificial intelligence.
---
Date: 27.02.2019
Time: 12:00 PM - 12:30 PM
Location: Conference Counter NCC Ost