The FOCETA project focuses on the complex scenario of autonomous vehicles in a dense urban environment, where a series of critical technical objectives must be addressed.
Thus, FOCETA does not aim to simply introduce “yet-another” rigorous design method for these problems. Instead, FOCETA intends to establish the principle that future autonomous systems with learning-enabled components (LECs) should guarantee safety while maximizing performance.
The new techniques for ensuring this principle will be integrated into a prototype tool and their efficiency and practical impact will be validated through industrially relevant feasibility studies.
The breakthrough targeted by FOCETA is a practical and demonstrably effective methodological framework for the formal modelling and verification of dependable learning-enabled components, and for the rigorous design of learning-enabled autonomous systems. The new method will combine the advantages of data-based and model-based techniques towards ensuring safety and security and improving performance, at lower cost and in less time.
The FOCETA vision is realized through ten concrete technical objectives, as described in the following:
1. Introduce essential dependability and performance requirements for learning-enabled autonomous systems.
The high-priority dependability and performance requirements will lay the groundwork for their verification in a demanding use-case scenario: autonomous vehicles in a dense urban environment. It is therefore important that the requirements be verifiable and associated with metrics that allow comparison between alternative system designs.
2. Enable derivation of test cases and evaluation parameters from the requirements.
The capability to derive test cases from the requirements is essential to ensure that all requirements are traceable to test cases and that, in the end, exactly those requirements that were initially approved are tested. Test cases will be coupled with evaluation criteria for validating the end result in quantitative terms as well as with respect to important qualitative aspects (usability, user acceptance, etc.).
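As a minimal sketch of the traceability check described here (the function names and the test-case structure are illustrative assumptions, not FOCETA artefacts), a requirements-to-tests mapping can be audited in both directions:

```python
def untested_requirements(requirements, test_cases):
    """Approved requirements that no test case traces back to."""
    covered = {req for tc in test_cases for req in tc["covers"]}
    return [r for r in requirements if r not in covered]

def unapproved_tests(requirements, test_cases):
    """Test cases that reference requirements outside the approved set."""
    approved = set(requirements)
    return [tc["id"] for tc in test_cases
            if not set(tc["covers"]) <= approved]
```

Running both checks against the requirement set before a validation campaign enforces the "only approved requirements are tested" rule mechanically rather than by review.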
3. Introduce component-based modelling techniques for training and validation purposes.
Modelling will enable simulation for training and validation purposes. We advocate a component-based approach (method, language, tools) to open prospects for: (i) a generic decomposition of the autonomic function based on standard and open interfaces, (ii) mixing model-based and data-based components, (iii) integrating multiple modelling domains and (iv) enabling modelling at diverse system scales and boundaries.
4. Model and evaluate the fundamental uncertainty and noise in perception, understanding and predicting dynamic events.
In autonomous systems, sensors such as GPS and cameras exhibit uncertainty in their estimates. In autonomous driving, for example, it is wise to be wary of an aggressive driver in the next lane. Since decision-making depends on the perception and understanding of the environment to predict future events, we need techniques for efficiently characterizing and quantifying uncertainty and integrating it into our model-based design. To this end, we intend to introduce probabilistic modelling techniques for autonomous agents, as well as for systems of autonomous agents, thus addressing the challenge of uncertainty at multiple modelling scales.
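A minimal sketch of the kind of probabilistic sensor modelling meant here, assuming a single GPS-like sensor with additive Gaussian noise (the noise model, parameters, and function names are illustrative):

```python
import random
import statistics

def noisy_gps(true_pos, sigma, rng):
    """One simulated GPS fix: true 2-D position corrupted by Gaussian noise."""
    return tuple(p + rng.gauss(0.0, sigma) for p in true_pos)

def position_uncertainty(true_pos, sigma, n_samples=2000, seed=0):
    """Monte Carlo characterization of the position estimate: mean and
    per-axis standard deviation over repeated noisy readings."""
    rng = random.Random(seed)
    fixes = [noisy_gps(true_pos, sigma, rng) for _ in range(n_samples)]
    mean = tuple(statistics.fmean(f[i] for f in fixes) for i in range(2))
    spread = tuple(statistics.stdev(f[i] for f in fixes) for i in range(2))
    return mean, spread
```

The recovered spread can then feed a model-based design step, e.g. sizing a safety margin around the estimated position instead of trusting a point estimate.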
5. Develop compositional CPS simulators, and generate simulation scenarios and decision-making strategies.
A compositional approach to building simulators of autonomous CPS will allow combining heterogeneous models of the system’s environment, the perception modules, etc. with emulated systems, thus avoiding lock-in to existing monolithic approaches that fail to scale in realistic use cases. We need to enable simulation at multiple scales, as well as the automated generation of scenarios for simulation and mass virtual testing of subsystems.
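The compositional idea can be sketched as components sharing a uniform step interface and exchanging values through a common state; the component names and the trivial dynamics below are illustrative assumptions, not project interfaces:

```python
class Component:
    """Minimal interface every simulation component implements."""
    def step(self, inputs, dt):
        raise NotImplementedError

class Controller(Component):
    """Stand-in for a decision module: slow down near the goal at x = 100."""
    def step(self, inputs, dt):
        return {"speed": 10.0 if inputs["position"] < 90.0 else 1.0}

class Vehicle(Component):
    """Stand-in for a plant model: integrate the commanded speed."""
    def __init__(self):
        self.position = 0.0
    def step(self, inputs, dt):
        self.position += inputs["speed"] * dt
        return {"position": self.position}

def simulate(components, state, dt, steps):
    """Co-simulation loop: each component reads the shared state, then
    publishes its outputs back into it."""
    for _ in range(steps):
        for comp in components:
            state.update(comp.step(state, dt))
    return state
```

Because every component only depends on the shared interface, a data-based perception model or an emulated subsystem can replace any block without touching the loop.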
6. Develop techniques for multi-scale virtual testing, systematic coverage, and accelerated yet accurate testing of autonomous CPS.
We need techniques for exploring the operational space of the system, using criteria of simulation coverage and the synthesis of multi-scale virtual testing environments. Our aim is to enable the assessment of safety and performance aspects at reduced cost and time, while including corner cases that may exhibit safety issues.
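A toy sketch of coverage-driven exploration of a scenario space, assuming a two-parameter highway scenario (ego speed, gap to a lead vehicle) and an idealized braking model; all numbers are illustrative:

```python
import itertools

def scenario_grid(param_ranges, resolution):
    """Enumerate scenario parameters on a uniform grid (coverage criterion:
    every grid cell must be exercised at least once)."""
    axes = []
    for lo, hi in param_ranges:
        step = (hi - lo) / (resolution - 1)
        axes.append([lo + i * step for i in range(resolution)])
    return list(itertools.product(*axes))

def scenario_is_safe(ego_speed, lead_gap, max_decel=8.0):
    """Idealized check: the ego vehicle can stop within the gap at maximum
    deceleration (braking distance = v^2 / (2a))."""
    braking_distance = ego_speed ** 2 / (2 * max_decel)
    return braking_distance <= lead_gap

# Exhaustively cover the grid and collect the unsafe corner cases.
grid = scenario_grid([(5.0, 35.0), (5.0, 80.0)], resolution=7)
corner_cases = [s for s in grid if not scenario_is_safe(*s)]
```

In practice the grid would be replaced by adaptive sampling, but the coverage bookkeeping and the extraction of corner cases stay the same.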
7. Develop scalable verification techniques for LECs, in particular deep neural networks that are widely used in perception and decision-making.
New symbolic and statistical verification techniques will be developed for LECs, coupled with systematic testing methods that are essential to increase the confidence of users and developers. We are particularly interested in deep neural networks, which are commonly used for the perception and decision-making functions of autonomous systems. Important priorities for the new techniques are their practical applicability and their capacity to scale to real-world scenarios.
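One standard scalable technique in this space is interval bound propagation; a minimal sketch for a tiny fully-connected ReLU network, with weights chosen arbitrarily for illustration:

```python
def affine_bounds(l, u, W, b):
    """Sound output bounds for y = Wx + b when each x[i] lies in [l[i], u[i]]:
    pick the interval end that minimizes/maximizes each weighted term."""
    lo, hi = [], []
    for row, bias in zip(W, b):
        lo.append(bias + sum(w * (li if w >= 0 else ui)
                             for w, li, ui in zip(row, l, u)))
        hi.append(bias + sum(w * (ui if w >= 0 else li)
                             for w, li, ui in zip(row, l, u)))
    return lo, hi

def relu_bounds(l, u):
    """ReLU is monotone, so it maps interval ends to interval ends."""
    return [max(0.0, x) for x in l], [max(0.0, x) for x in u]

# Verify that for every input in [0,1]^2 the single output stays below 4.
l, u = affine_bounds([0.0, 0.0], [1.0, 1.0],
                     [[1.0, -1.0], [1.0, 1.0]], [0.0, 0.0])
l, u = relu_bounds(l, u)
l, u = affine_bounds(l, u, [[1.0, 1.0]], [0.0])
verified = u[0] < 4.0  # the computed upper bound proves the property
```

The bounds are sound but not tight; scaling such analyses to real perception networks while controlling this over-approximation is exactly the research challenge named above.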
8. Enhance system-level testing with machine-learning techniques, to discover more critical problems within a shorter time frame.
Comprehensive validation of autonomous systems may be overly time-consuming; we therefore seek to employ learning techniques to tune random testing methods towards finding the rare events that may cause errors. In the autonomous driving context, this corresponds to identifying tricky scenarios that can occur in the everyday life of human drivers.
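One way to realize this tuning, sketched under simplifying assumptions (a single scenario parameter and a robustness function that is negative exactly on failures; the cross-entropy update below is a generic instance, not the project's specific method):

```python
import random

def falsify(robustness, mean, std, iters=40, pop=50, elite=5, seed=1):
    """Cross-entropy search: iteratively bias the random test generator
    toward low-robustness (near-failure) scenarios instead of sampling
    the whole operational space uniformly."""
    rng = random.Random(seed)
    for _ in range(iters):
        samples = [[rng.gauss(m, s) for m, s in zip(mean, std)]
                   for _ in range(pop)]
        samples.sort(key=robustness)        # most dangerous scenarios first
        if robustness(samples[0]) < 0:      # a failing scenario was found
            return samples[0]
        best = samples[:elite]
        # Refit the generator to the elite (most dangerous) samples.
        mean = [sum(v[i] for v in best) / elite for i in range(len(mean))]
        std = [max(0.05, (sum((v[i] - mean[i]) ** 2 for v in best)
                          / elite) ** 0.5) for i in range(len(mean))]
    return None

# Toy robustness: failure iff the parameter lands within 0.5 of the value 7.
result = falsify(lambda x: abs(x[0] - 7.0) - 0.5, mean=[0.0], std=[3.0])
```

Uniform random testing would hit this narrow failure region rarely; the learned proposal distribution concentrates the testing budget on it within a few iterations.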
9. Propose a unified approach that combines learning from data and synthesis from specification.
For LECs, we look for techniques that can guarantee safety through the synthesis of runtime enforcement modules from formal specifications. These new techniques will be combined with novel approaches for optimizing performance and robustness of LECs, while retaining their safety guarantees (e.g. through refinement by deep learning).
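A runtime enforcement module (often called a safety shield) can be sketched as a wrapper that overrides the learned controller only when its proposed action would violate the specification; the speed-limit specification and all names below are illustrative assumptions:

```python
SPEED_LIMIT = 30.0  # illustrative safety specification: speed stays <= 30

def shielded(state, learned_policy, is_safe, fallback):
    """Runtime enforcement: use the LEC's action unless it violates the
    safety specification, otherwise substitute a provably safe fallback."""
    action = learned_policy(state)
    return action if is_safe(state, action) else fallback(state)

def is_safe(speed, accel):
    """Would applying this acceleration keep us within the limit?"""
    return speed + accel <= SPEED_LIMIT

def fallback(speed):
    """Never accelerate; brake just enough to respect the limit."""
    return min(0.0, SPEED_LIMIT - speed)

aggressive = lambda speed: 5.0  # stand-in for a learned controller
```

The shield preserves the LEC's performance wherever it already behaves safely, which is what allows the LEC to be retrained or refined without invalidating the safety argument.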
10. Monitor the faithfulness of decision-making at runtime.
Some LECs lack obvious safety specifications. For these components, it is important to monitor the faithfulness of their decision-making at runtime in order to appropriately adjust their impact on the system.
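For such components, a lightweight runtime monitor can be sketched as follows, assuming the LEC reports a label and a confidence score with each decision (the thresholds and window size are illustrative):

```python
from collections import deque

class DecisionMonitor:
    """Flags decisions that look unfaithful: low reported confidence, or
    flip-flopping labels within a short window. The surrounding system can
    then reduce the component's influence, e.g. by switching to a
    conservative driving mode."""

    def __init__(self, min_confidence=0.7, window=5):
        self.min_confidence = min_confidence
        self.recent = deque(maxlen=window)

    def trustworthy(self, label, confidence):
        self.recent.append(label)
        stable = len(set(self.recent)) == 1  # no label changes in the window
        return confidence >= self.min_confidence and stable
```

Unlike the verification techniques above, this requires no specification of *correct* outputs, only of symptoms of unfaithful behaviour, which is why it suits components without obvious safety specifications.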