Embedded Systems

Model-based Verification - Fundamentals and Industrial Practice

Lecturers: Thomas Kropf, Jürgen Ruf
Exercise supervision: Jürgen Ruf, Sebastian Burg (Alumni), Hanno Eichelberger, Stefan Huster, Jo Laufenberg (Researcher)

Exercise group: 16.04.2015, 17:00 c.t. (room: A104)
Scope: 2 SWS / 4 LP
Course catalog entry: Campus

Description

Reliability, security, accuracy, and robustness of software are becoming more and more important. Mistakes in system development, including critical ones, are made due to logical errors in the specification and/or in the implementation of the software itself. Compilers and the programming languages used play an important role here. To avoid errors, the programming languages used are often restricted to specific language constructs. This is done to prevent dynamic malfunctions (memory overflows, memory leaks, etc.) and also to simplify the analysis and verification of these software components. The verification techniques range from static code analysis of programs and their specifications to combinations of automated proof systems and model checkers. Besides error prevention and fault detection, fault tolerance (e.g. through redundancy) is an interesting approach to increase the reliability of safety-critical systems. To this end, techniques such as testing, runtime verification, program observation, monitoring, and consistency checking come into play. Another aspect is software quality assurance, for example the certification of safety-related systems, which relies on the compliance of the software and its specified properties with safety standards such as ISO 26262 and IEC 61508. In this context, libraries, tools, compilers, system components, and third-party software play an important role.

Topics

  • Fuzz Testing: Fuzz testing or fuzzing is a blackbox testing method in which the system under test (SUT) is executed with inputs that are not foreseen by the SUT’s specification. The SUT is then checked for generally undesired behavior, such as memory access violations. The inputs used for fuzzing must be semi-valid, i.e. inputs that are not valid according to the specification but not entirely invalid either, so that the software processes the incoming data instead of rejecting it as malformed. Starting from an approach that defines a coverage criterion for fuzz testing, the presentation should give an overview of fuzzing, its benefits, and its limitations. (A minimal fuzzing sketch follows after this list.)
  • Model-based testing: The idea of model-based testing is to compare the I/O behavior of an explicit behavior model with that of a system under test. This requires the model to be valid. If the model is a simplification of the SUT, it is easier to check the model and use it for subsequent test case generation than to check the SUT directly; in this case, the different levels of abstraction must be bridged. Not surprisingly, experience shows that choosing the right level of abstraction is crucial to the success of model-based testing. The presentation shall include an overview of the classification of the different abstraction levels and their benefits and disadvantages. (A sketch of the model-vs-SUT comparison follows after this list.)
  • Survey of test case generation methods from different sources: Generating test cases automatically is one of the most important challenges in testing. Depending on the sources available for test case generation, different requirements of the system under test (SUT) can be checked. This presentation should give an overview of the different sources that allow a (semi-)automatic extraction of test cases, and of the state-of-the-art methods used for each of them.
  • Test case selection/reduction: Automatically generated test suites usually contain redundant test cases and are often impractical due to the huge number of test cases that can be generated. One major challenge in this context is to reduce the test suite without reducing its effectiveness. One approach is to use a similarity function. The presentation should give an overview of this method and compare it to random and other selection strategies. (A greedy similarity-based selection sketch follows after this list.)
  • Fault Localization Metrics: Fault localization metrics are applied to localize bugs in software. They analyze and compare specific features of executed software: they compare test runs that pass with test runs that trigger a bug, for example by monitoring which source lines are covered in the different runs. Source lines that are executed in a failing run but not in any passing run may contain the root cause of the failure. The goal of this topic is to present and compare different fault localization metrics. (A sketch of one such metric follows after this list.)
  • Monitoring Frameworks: To detect the root causes of failures in embedded software, the software must be monitored during execution. Monitoring produces a trace file, which can be analyzed automatically (e.g. using fault localization metrics) to detect anomalies or root causes of failures. The state of the art offers various monitoring techniques and tools, which can be categorized as source code instrumentation, binary code instrumentation, or debugging tools. The goal of this topic is to present different monitoring tools and compare them with respect to their practical application to embedded software. (A minimal tracing sketch follows after this list.)
  • Multi-Threaded Replay Debugging for Embedded Software: Replay debugging traces software during operation in the real world. A traced execution that triggers a bug can then be replayed in the laboratory for debugging: the I/O inputs of the software are captured during operation and injected during replay. However, multi-threaded software may execute differently during replay because of mismatched thread schedules. This presentation should cover different approaches for replaying thread schedules that can be applied to embedded software. (A record/replay sketch follows after this list.)
  • SAT Topics: “A DPLL(T) Theory Solver for a Theory of Strings and Regular Expressions”; “Turbo-Charging Lemmas on Demand with Don’t Care Reasoning”
  • Additional Topics: “Improving Search-based Test Suite Generation with Dynamic Symbolic Execution”; “An Overview of Recent Trends in Software Testing”
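
The following is a minimal sketch of mutation-based fuzzing, illustrating the semi-valid-input idea from the fuzz testing topic. The parse_record function is a hypothetical SUT invented for the example:

    import random

    def parse_record(data: bytes) -> None:
        # Hypothetical SUT: parses a length-prefixed record.
        if len(data) == 0:
            raise ValueError("empty input")      # graceful rejection
        length = data[0]
        payload = data[1:1 + length]
        assert len(payload) == length            # may fail on semi-valid input

    def mutate(seed: bytes, n_flips: int = 2) -> bytes:
        # Semi-valid input: flip a few bytes of a valid seed, so the SUT
        # still processes the data instead of rejecting it outright.
        data = bytearray(seed)
        for _ in range(n_flips):
            data[random.randrange(len(data))] ^= random.randrange(1, 256)
        return bytes(data)

    seed = bytes([8]) + b"ABCDEFGH"              # valid record: length byte + payload
    for i in range(1000):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except ValueError:
            pass                                 # rejected as invalid: uninteresting
        except AssertionError:
            print(f"undesired behavior on input {candidate!r}")
            break

In a real fuzzer the assertion failure would be a crash, sanitizer report, or hang, and the mutations would be guided, e.g. by the coverage criterion mentioned above.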
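
As a sketch of the comparison at the heart of model-based testing, both the abstract model and the SUT are driven with the same inputs and their outputs are compared. The saturating counter is an invented example; real approaches derive the input sequences from the model rather than at random:

    import random

    class CounterSUT:
        # Hypothetical system under test: a saturating 8-bit counter.
        def __init__(self):
            self.value = 0
        def step(self, inc: int) -> int:
            self.value = min(255, self.value + inc)
            return self.value

    def model_step(state: int, inc: int) -> int:
        # Explicit behavior model: same I/O behavior, simpler representation.
        return min(255, state + inc)

    sut, state = CounterSUT(), 0
    for step in range(100):
        inc = random.randint(0, 50)        # in general, an adapter would map
        expected = model_step(state, inc)  # abstract model inputs to concrete
        actual = sut.step(inc)             # SUT inputs; here they coincide
        assert actual == expected, f"divergence at step {step}"
        state = expected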
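
A sketch of similarity-based test suite reduction, assuming each test case is characterized by the set of program elements it covers and using the Jaccard coefficient as the similarity function (one common choice, not the only one). The greedy loop repeatedly picks the test least similar to those already selected:

    def jaccard(a: set, b: set) -> float:
        # Similarity of two coverage sets: |intersection| / |union|.
        return len(a & b) / len(a | b) if (a | b) else 1.0

    def select_dissimilar(suite: dict, budget: int) -> list:
        # suite maps test name -> set of covered elements (e.g. line numbers).
        selected = [max(suite, key=lambda t: len(suite[t]))]
        while len(selected) < budget:
            remaining = [t for t in suite if t not in selected]
            if not remaining:
                break
            # Take the test whose closest selected neighbor is farthest away.
            pick = min(remaining, key=lambda t: max(jaccard(suite[t], suite[s])
                                                    for s in selected))
            selected.append(pick)
        return selected

    suite = {"t1": {1, 2, 3, 4}, "t2": {1, 2, 3}, "t3": {7, 8}, "t4": {1, 9}}
    print(select_dissimilar(suite, budget=2))    # ['t1', 't3']: t2 is redundant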
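
To make the fault localization topic concrete, here is a sketch of one classic metric, the Tarantula suspiciousness score, computed per source line from the coverage of passing and failing runs (the coverage data is invented for the example):

    def tarantula(line, passing_cov, failing_cov):
        # passing_cov / failing_cov: one set of covered lines per test run.
        passed = sum(line in cov for cov in passing_cov) / len(passing_cov)
        failed = sum(line in cov for cov in failing_cov) / len(failing_cov)
        if passed + failed == 0:
            return 0.0                           # never executed: no evidence
        return failed / (passed + failed)

    passing_cov = [{1, 2, 4}, {1, 2, 4, 5}]      # two passing runs
    failing_cov = [{1, 2, 3, 4}]                 # one failing run
    for line in sorted({1, 2, 3, 4, 5}):
        print(line, round(tarantula(line, passing_cov, failing_cov), 2))
    # Line 3, executed only in the failing run, gets the maximum score 1.0.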
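
As a minimal illustration of monitoring by instrumentation, a line tracer using Python's standard sys.settrace hook; tools for embedded C code follow the same idea but insert the logging into the source or binary at build time:

    import sys

    trace_log = []

    def tracer(frame, event, arg):
        # Invoked by the interpreter for every executed line.
        if event == "line":
            trace_log.append((frame.f_code.co_name, frame.f_lineno))
        return tracer

    def compute(x):
        y = x * 2
        if y > 10:
            y -= 1
        return y

    sys.settrace(tracer)
    compute(7)
    sys.settrace(None)
    print(trace_log)    # the trace can now be analyzed, e.g. with the metric above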
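
Finally, a sketch of the record/replay idea for a single nondeterministic input source; the read_sensor function and the log format are invented. Multi-threaded replay additionally has to record and re-enforce the thread schedule, which is exactly what the approaches in this topic address:

    import json, random

    def read_sensor():
        # Stand-in for a nondeterministic input source in the field.
        return random.randint(0, 100)

    def run(input_source, log=None):
        total = 0
        for _ in range(5):
            value = input_source()
            if log is not None:
                log.append(value)                # record mode: capture every input
            total += value
        return total

    log = []                                     # record in the field ...
    field_result = run(read_sensor, log)
    with open("trace.json", "w") as f:
        json.dump(log, f)

    with open("trace.json") as f:                # ... replay in the lab
        recorded = iter(json.load(f))
    replayed_result = run(lambda: next(recorded))
    assert replayed_result == field_result       # deterministic re-execution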

Remarks

Preliminary meeting

16.04.2015, 17:00 c.t. (room: A104)

Block seminar

The presentations will take place on 30 July 2015, 16:00-19:00 (room: A104).

Conditions for the seminar

Participants are expected to work through their topic largely on their own, including finding and researching the relevant literature. Pointers to the literature will of course be provided, and supervision on demand is ensured.

Presentation length and structure

The presentation should be 25 minutes long and have the following structure:

  • Introduction with motivation (approx. 5 minutes of the presentation)
  • Theory of the respective topic (approx. 10-15 minutes of the presentation)
  • Practical application / case studies (remaining time, at least 5 minutes)
  • Summary (2 minutes of the presentation)
  • The presentation must be given in English

Requirements for the seminar certificate

To obtain the certificate, the following is required:

  • an outline of the presentation, discussed with the supervisor 4 weeks before the presentation date
  • a first version of the presentation, discussed with the supervisor 2 weeks before the presentation date (preferably PowerPoint or PDF)
  • a presentation (25 minutes + discussion) given in English
  • a written report in English (6-8 pages, in PostScript or PDF format)
  • attendance at all seminar sessions

Grading

The seminar work is graded according to the following criteria:

  • Adherence to the deadlines
  • Quality of the literature research
  • Quality of the presentation's content
  • Quality of the delivery, including keeping to the time limit
  • Quality of the written report
  • Degree of independence (this of course does not rule out discussions with the supervisor about the selection of topics to present, their suitability for presentation, etc.; such discussions are in fact explicitly encouraged)