Embedded Systems

Efficient Machine Learning in Hardware

Lecturer Oliver Bringmann (Head)

Lecture Thursdays
Until further notice via video conference (Sand 6 / F122 (Lecture Hall 2))
Introductory session: see the grey box below
Exercise instructors Alexander Jung (Researcher)
Adrian Frischknecht (Alumni)
Christoph Gerum (Researcher)
Evgenia Rusak (Researcher)
Konstantin Lübeck (Researcher)
Paul Palomero Bernardo (Researcher)
Simon Garhofer (Researcher)

Scope 2 SWS / 3 LP
Course type Lecture (3 LP)
Module number ML4420
Course catalogue entry Alma
Learning platform Link

Information about access to the online lecture can be found in the corresponding ILIAS course and on this overview page in ILIAS.

Topic

The recent breakthroughs in applying deep neural networks to a wide variety of machine learning tasks have been strongly driven by the availability of high-performance computing platforms. In contrast to their biological counterparts, however, artificial neural networks achieve their high performance only at a much higher energy cost. While the average power consumption of the entire human brain is comparable to that of a laptop computer (about 20 W), artificial intelligence often resorts to large HPC systems whose energy demand is several orders of magnitude higher. This lecture discusses this problem and shows how to build energy- and resource-efficient architectures for machine learning in hardware. In this context, the following topics will be addressed:

  • Hardware architectures for machine learning: GPUs, FPGAs, overlay architectures, SIMD architectures, domain-specific architectures, custom accelerators, in/near memory computing, architectures for training vs. architectures for inference
  • Energy-efficient machine learning
  • Optimized mapping of deep neural networks to hardware and pipelining techniques
  • Word length optimization (binary, ternary, integer, floating point); see the quantization sketch after this list
  • Scalable application specific architectures
  • New switching devices to implement neural networks (Memristors, PCM)
  • Neuromorphic computing
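
The word-length optimizations above can be made concrete with a small, self-contained sketch. The following Python snippet applies symmetric post-training quantization to a random weight matrix and measures the reconstruction error at several bit widths; the function names and the per-tensor scaling scheme are illustrative assumptions, not the lecture's reference implementation.

```python
# Minimal sketch of symmetric post-training integer quantization,
# one of the word-length optimization techniques listed above.
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int = 8):
    """Map float weights to signed integers of the given word length."""
    qmax = 2 ** (bits - 1) - 1           # e.g. 127 for int8
    scale = np.abs(w).max() / qmax       # one scale per tensor (assumed scheme)
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)

for bits in (8, 4, 2):                   # integer, low-bit, near-ternary
    q, s = quantize_symmetric(w, bits)
    err = np.abs(w - dequantize(q, s)).mean()
    print(f"{bits}-bit: mean abs error {err:.5f}")
```

Even this toy experiment exposes the basic trade-off: shrinking the word length reduces storage and arithmetic cost but increases quantization error, which is one reason binary and ternary networks usually require retraining.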

Students gain in-depth knowledge of the challenges associated with energy-efficient machine learning hardware and of the corresponding state-of-the-art solutions. Different hardware architectures will be compared with regard to the trade-off between their energy consumption, complexity, computational speed, and the specificity of their applicability.
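
As a taste of such a comparison, the following back-of-the-envelope model estimates the energy of a single convolution layer under different word lengths. All per-operation energy figures are illustrative assumptions in the spirit of widely cited 45 nm estimates (Horowitz, ISSCC 2014); actual values depend on the technology node, memory hierarchy, and data reuse, which is exactly what the architectures above differ in.

```python
# Back-of-the-envelope energy model for one 3x3 convolution layer.
# All constants below are illustrative assumptions, not measured values.
E_MAC_INT8 = 0.25e-12       # J per 8-bit integer multiply-accumulate (assumed)
E_MAC_FP32 = 4.6e-12        # J per 32-bit float multiply-accumulate (assumed)
E_DRAM_PER_BYTE = 160e-12   # J per byte fetched from DRAM (assumed)

def conv_layer_energy(h, w, c_in, c_out, k, e_mac, bytes_per_weight):
    """Compute energy plus the cost of fetching the weights once from DRAM."""
    macs = h * w * c_in * c_out * k * k
    weight_bytes = c_out * c_in * k * k * bytes_per_weight
    return macs * e_mac + weight_bytes * E_DRAM_PER_BYTE

# 56x56 feature map, 64 input and 64 output channels, 3x3 kernel
for name, e_mac, nbytes in (("fp32", E_MAC_FP32, 4), ("int8", E_MAC_INT8, 1)):
    e = conv_layer_energy(56, 56, 64, 64, 3, e_mac, nbytes)
    print(f"{name}: {e * 1e3:.3f} mJ")
```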

The main goals of the course are to learn which kinds of hardware architectures are used for machine learning, to understand why a particular architecture is suitable for a particular application, and to learn how to implement machine learning algorithms efficiently in hardware.