The robotics and autonomous systems communities have recently seen a rapid increase both in the development of robots for commercial use and in interest in applying robots to a wide range of novel applications. As these robotic systems, vehicles, and even embedded devices move towards much greater autonomy, verification techniques will be needed that provide far higher confidence than is usual today. Consequently, the analysis and test processes used for traditional systems must be significantly enhanced to provide increased confidence in this next wave of autonomous systems. The need for well-understood and effective verification techniques will become vital as we move to commercial applications such as “driverless cars”, incorporate complex AI technologies, and deploy these systems in safety-critical scenarios.
There is a growing body of research on the verification of complex systems that bears directly on this problem. This work is clearly relevant to designing, constructing, and deploying autonomous systems, but it also matters to Psychology (e.g. social robotics), Philosophy (e.g. machine ethics), and Law (e.g. certification). Furthermore, deploying autonomous systems without strong behavioral guarantees can lead to serious failures, and may consequently hold back the widespread adoption of these systems. As the research is fragmented and often not well publicized, this Technical Committee will coalesce this activity, drive the research agenda forward, and firmly instill the necessity of verification within industry, government, and the public.
This technical committee is concerned with the development of tools and techniques to verify autonomous systems.
Topics of Interest
- Tools and techniques for verification at design time
- Tools and techniques to support the specification of autonomous systems and of their tasks and behaviors, such as logics, languages, mathematical frameworks, and combinations of these
- Tools and techniques for verification during development
- Tools and techniques for testing, modeling, and simulating autonomous systems, both on their own and within their environment; for example, dedicated automated or interactive tools, mathematical and heuristic procedures, and best practices for modeling the behavior of autonomous systems and their environment for analysis
- Verification standards and certification processes for autonomous systems
- Tools and techniques for verification at run-time, such as sensing-and-reacting feedback loops spanning hardware and software, mathematical and heuristic procedures, qualitative and quantitative analysis frameworks, and best practices
- Tools and techniques for rigorous analysis of system properties such as safety, reliability, security, and ethical constraints; for example, software testing, system testing (hardware-software-environment), simulation, laboratory experiments, user evaluation studies, and combinations of these with real and simulated elements
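To give a concrete flavor of the run-time verification theme above, here is a minimal sketch of an online monitor that checks a simple safety invariant over a stream of observations. All names (`Observation`, `SpeedMonitor`) and the property itself ("measured speed never exceeds a limit") are purely illustrative assumptions, not taken from any particular verification framework.

```python
# Minimal runtime-verification sketch: an online monitor that checks the
# safety invariant "speed <= max_speed" over a stream of observations.
# The property and all class names here are illustrative only.

from dataclasses import dataclass


@dataclass
class Observation:
    timestamp: float  # seconds since start
    speed: float      # meters per second


class SpeedMonitor:
    """Online monitor for the invariant: speed <= max_speed."""

    def __init__(self, max_speed: float):
        self.max_speed = max_speed
        self.violations: list[Observation] = []

    def observe(self, obs: Observation) -> bool:
        """Feed one observation; return True if the invariant still holds."""
        if obs.speed > self.max_speed:
            self.violations.append(obs)  # record the violating observation
            return False
        return True


# Feed a short (fabricated) trace through the monitor.
monitor = SpeedMonitor(max_speed=10.0)
trace = [Observation(0.0, 4.2), Observation(0.1, 9.8), Observation(0.2, 11.5)]
results = [monitor.observe(o) for o in trace]
print(results)                  # [True, True, False]
print(len(monitor.violations))  # 1
```

In a real system the verdict returned by `observe` would feed back into the control loop (e.g. triggering a safe-stop), and the property would typically be stated in a temporal logic rather than hard-coded; this sketch only shows the monitoring pattern itself.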
Goals & Objectives
We aim to:
- Link researchers and practitioners in the field of Verification of Autonomous Systems
- Publicize events, initiatives, researchers, and resources worldwide that target the Verification of Autonomous Systems
- Provide a detailed roadmap of existing resources and research, as well as of future areas that need to be tackled, which can then inform funding organizations worldwide
- Develop and promote leading workshops and international conferences focused on this key topic
July 2019 - Our Technical Committee has been officially approved!
We will be organizing workshops and events for next year. Please contact us for more information or to register for our mailing list.