RESIST II - Resilience Evaluation of Perception and Planning Approaches in Cooperatively Interacting Vehicles under Unexpected Disturbances
Functional safety of fully automated and autonomous vehicles is one of the main challenges of the coming years. A fully automated vehicle must remain in a safe driving state not only under ideal conditions but also in unforeseen situations. The use of cooperatively interacting strategies further complicates ensuring sufficient resilience against such unforeseeable situations and unexpected disturbances. To qualify a vehicle with fully automated driving functions in accordance with ISO 26262, it currently has to complete one billion test kilometers on the road.
The aim of this project proposal is to move a significant portion of the application from real test drives to a simulation-based verification process in order to enable early resilience evaluation. Besides considerable time and cost savings, the advantage over real driving is the possibility of exploring a wide variety of environmental conditions across their parameter ranges in order to uncover specific borderline situations. Cooperative perception methods and their resilience evaluation under varying environmental conditions and sensor influences are to be researched.
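As an illustration of how such a parameter exploration could look, the following minimal sketch sweeps an assumed scenario parameter grid and flags borderline combinations. The parameter names, ranges, and the stand-in simulation result are illustrative assumptions, not the project's actual simulation interface:

```python
import itertools
import numpy as np

# Illustrative scenario parameter space; names and ranges are assumptions.
fog_visibility_m = [50, 100, 200, 1000]
rain_rate_mm_h   = [0, 5, 20, 50]
ego_speed_kmh    = [30, 50, 80, 130]

def run_scenario(visibility, rain, speed):
    """Placeholder for one simulated drive; would return e.g. the minimal
    time-to-collision observed over the run (here: a random stand-in)."""
    rng = np.random.default_rng(hash((visibility, rain, speed)) & 0xFFFF)
    return rng.uniform(0.5, 6.0)

# Exhaustive sweep over the grid; borderline situations are the parameter
# combinations whose outcome falls below a safety threshold.
results = {
    params: run_scenario(*params)
    for params in itertools.product(fog_visibility_m, rain_rate_mm_h, ego_speed_kmh)
}
borderline = {params: ttc for params, ttc in results.items() if ttc < 1.0}
print(len(results), "scenarios simulated,", len(borderline), "borderline")
```

In practice the grid would be replaced by a more efficient sampling or search strategy, but the principle of systematically covering the parameter space to expose borderline situations remains the same.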
The cooperative perception methods are intended to compensate for the reduced recognition rates of different algorithms and sensor types under different environmental conditions. In addition to camera sensors, radar sensors are to be included in the investigations and extended by cross-vehicle object tracking.
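One established building block for such cross-vehicle object tracking is track-to-track fusion via covariance intersection, which remains consistent even when the correlation between the ego track and a track received from another vehicle is unknown. The following is a minimal sketch under the assumption that both vehicles share object position estimates with covariances in a common reference frame; it is not the project's specific method:

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, w=0.5):
    """Fuse two track estimates of the same object from different vehicles.

    Covariance intersection stays consistent even when the cross-correlation
    between the two estimates (e.g. an ego radar track and a camera track
    received from another vehicle via V2X) is unknown.
    """
    P_a_inv = np.linalg.inv(P_a)
    P_b_inv = np.linalg.inv(P_b)
    P_f = np.linalg.inv(w * P_a_inv + (1.0 - w) * P_b_inv)
    x_f = P_f @ (w * P_a_inv @ x_a + (1.0 - w) * P_b_inv @ x_b)
    return x_f, P_f

# Ego vehicle: radar track (good range accuracy, coarse lateral accuracy)
x_ego = np.array([42.0, 3.1])      # object position [m] in a shared frame
P_ego = np.diag([0.2, 1.5])
# Cooperative vehicle: camera track (coarse range, good lateral accuracy)
x_coop = np.array([41.2, 2.8])
P_coop = np.diag([2.0, 0.1])

x_fused, P_fused = covariance_intersection(x_ego, P_ego, x_coop, P_coop)
print(x_fused, np.diag(P_fused))
```

The fixed weight w=0.5 is a simplification; in practice it would be chosen to minimize, for example, the trace of the fused covariance.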
The resilience evaluation is supplemented by further environmental conditions that are difficult to model. New approaches are to be researched for modeling and simulating the parameter variety of spray, snowfall, and fog. In addition, the effects of different environmental conditions on perception, prediction, and planning algorithms are investigated across different driving scenarios.
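As one possible starting point for the effect of fog on perception, a simple degradation model can map meteorological visibility to a per-object detection probability via Beer-Lambert contrast attenuation. The sketch below is an illustrative assumption, not the modeling approach to be developed in the project:

```python
import numpy as np

def detection_probability(distance_m, visibility_m, p_max=0.95):
    """Toy degradation model: detection probability decays with optical
    attenuation in fog.

    visibility_m is the meteorological visibility (distance at which
    contrast drops to roughly 5 %), from which the extinction
    coefficient follows as beta = 3.0 / visibility.
    """
    beta = 3.0 / visibility_m               # extinction coefficient [1/m]
    contrast = np.exp(-beta * distance_m)   # Beer-Lambert contrast loss
    return p_max * contrast

for vis in (10_000, 500, 100, 50):          # clear air down to dense fog
    print(vis, detection_probability(distance_m=60.0, visibility_m=vis))
```

Spray and snowfall would require different, more complex models (e.g. clutter and partial occlusion rather than homogeneous attenuation), which is precisely why new simulation approaches are needed here.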
Subsequently, suitable metrics are examined to evaluate these procedures. On the one hand, the quality of the cross-vehicle fusion is evaluated; on the other hand, a safety metric is developed that addresses not only the precision of the detection but also how dangerous a detected object could become for the vehicle itself.
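To make the idea of such a safety metric concrete, the sketch below weights each missed detection by an approximate time-to-collision with the ego vehicle, so that missing a fast-approaching object counts far more than missing one moving away. The weighting function and object representation are illustrative assumptions, not the metric to be developed:

```python
import numpy as np

def criticality_weight(rel_pos, rel_vel, t_max=5.0):
    """Weight an object by how soon it could reach the ego vehicle:
    approximate time-to-collision mapped to [0, 1]."""
    dist = np.linalg.norm(rel_pos)
    closing_speed = max(-np.dot(rel_pos, rel_vel) / max(dist, 1e-6), 0.0)
    ttc = dist / closing_speed if closing_speed > 0 else np.inf
    return float(np.clip(1.0 - ttc / t_max, 0.0, 1.0))

def safety_weighted_miss_rate(ground_truth, detected_flags):
    """Classic miss rate counts every missed object equally; here each
    miss is scaled by its criticality for the ego vehicle."""
    weights = [criticality_weight(o["rel_pos"], o["rel_vel"]) for o in ground_truth]
    missed = [w for w, hit in zip(weights, detected_flags) if not hit]
    return sum(missed) / max(sum(weights), 1e-6)

gt = [
    {"rel_pos": np.array([15.0, 0.0]), "rel_vel": np.array([-8.0, 0.0])},  # closing fast
    {"rel_pos": np.array([80.0, 20.0]), "rel_vel": np.array([2.0, 0.0])},  # moving away
]
# Missing only the departing object contributes (almost) nothing to the score.
print(safety_weighted_miss_rate(gt, detected_flags=[True, False]))
```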
A further goal is the sustainable improvement of the training of learning-based perception methods under varying environmental conditions, since current training data sets usually consist of recordings made under ideal conditions, whereas sunshine cannot be assumed in real operation. Therefore, we want to systematically extend the data sets using the procedures mentioned above and train the corresponding neural networks on these larger data sets.
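A common way to extend such data sets synthetically is to blend weather effects into clear-weather recordings; the sketch below adds homogeneous fog to a camera image using the standard atmospheric scattering model and a per-pixel depth map. Function names and parameters are illustrative assumptions, not the augmentation pipeline of the project:

```python
import numpy as np

def add_synthetic_fog(image, depth, visibility_m=150.0, fog_color=0.8):
    """Blend a homogeneous fog layer into a camera image using per-pixel
    depth: I_fog = I * t + A * (1 - t), with transmission t = exp(-beta * d).

    image: float array in [0, 1], shape (H, W, 3); depth in metres, shape (H, W).
    """
    beta = 3.0 / visibility_m
    t = np.exp(-beta * depth)[..., None]      # per-pixel transmission
    return image * t + fog_color * (1.0 - t)

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))                   # stand-in for a camera frame
depth = np.full((4, 4), 60.0)                 # stand-in for a depth map [m]
foggy = add_synthetic_fog(img, depth, visibility_m=100.0)
print(foggy.shape, foggy.min(), foggy.max())
```

Augmented frames generated this way would complement, not replace, real recordings under adverse conditions when retraining the perception networks.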