Embedded Systems

Hardware Accelerator and Neural Network Co-Optimization for Ultra-Low-Power Audio Processing Devices

The paper “Hardware Accelerator and Neural Network Co-Optimization for Ultra-Low-Power Audio Processing Devices” has been accepted at the 25th Euromicro Conference on Digital System Design (DSD), pages 1-8, 2022. Keywords: Machine Learning, Neural Networks, AutoML, Neural Architecture Search

Abstract:

The increasing spread of artificial neural networks does not stop at ultra-low-power edge devices. However, neural networks often have high computational demands and require specialized hardware accelerators to ensure the design meets power and performance constraints. Manually optimizing neural networks together with their corresponding hardware accelerators can be very challenging. This paper presents HANNAH (Hardware Accelerator and Neural Network seArcH), a framework for automated and combined hardware/software co-design of deep neural networks and hardware accelerators for resource- and power-constrained edge devices. The optimization approach uses an evolution-based search algorithm, a neural network template technique, and analytical KPI models for the configurable UltraTrail hardware accelerator template in order to find an optimized neural network and accelerator configuration. We demonstrate that HANNAH can find suitable neural networks with minimized power consumption and high accuracy for different audio classification tasks such as single-class wake-word detection, multi-class keyword detection, and voice activity detection, outperforming the related work.
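To illustrate the kind of co-design loop the abstract describes, here is a minimal sketch of an evolution-based search over a joint neural-network/accelerator configuration space, using a toy analytical KPI model. All parameter names, KPI formulas, and the power budget below are illustrative assumptions, not HANNAH's actual search space or models.

```python
import random

# Hypothetical joint search space: network parameters (conv_channels,
# kernel_size) and accelerator parameters (mac_units, sram_kb).
SPACE = {
    "conv_channels": [8, 16, 32, 64],
    "kernel_size": [3, 5, 7],
    "mac_units": [4, 8, 16],   # accelerator parallelism (assumed)
    "sram_kb": [16, 32, 64],   # on-chip buffer size (assumed)
}

def sample():
    """Draw a random joint configuration."""
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(cfg):
    """Mutate one randomly chosen parameter of a configuration."""
    child = dict(cfg)
    key = random.choice(list(SPACE))
    child[key] = random.choice(SPACE[key])
    return child

def kpi(cfg):
    """Toy analytical KPI model (purely illustrative): larger networks
    raise an accuracy proxy; more MAC units and SRAM raise power;
    latency scales with work divided by parallelism."""
    work = cfg["conv_channels"] * cfg["kernel_size"]
    accuracy = 1.0 - 1.0 / (1.0 + work / 64.0)       # saturating proxy
    power_uw = 5.0 * cfg["mac_units"] + 0.5 * cfg["sram_kb"]
    latency = work / cfg["mac_units"]
    return accuracy, power_uw, latency

def fitness(cfg, power_budget_uw=100.0):
    """Reward accuracy, penalize latency, reject over-budget designs."""
    acc, power, lat = kpi(cfg)
    if power > power_budget_uw:                      # hard power constraint
        return -1.0
    return acc - 0.001 * lat

def evolve(generations=30, pop_size=16, seed=0):
    """Simple (mu + lambda)-style evolutionary search."""
    random.seed(seed)
    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # keep the better half
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
print("best configuration:", best)
```

The real framework replaces the toy `kpi` function with analytical models of the UltraTrail accelerator template and evaluates actual network accuracy via training, but the selection/mutation loop captures the general shape of an evolution-based co-search.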