Master's Thesis
Artificial Neural Networks are a key technology in today's field of machine learning. They influence many parts of our lives and are therefore important in many respects. Compressing Artificial Neural Networks, in particular their weight matrices, by slightly manipulating them while maintaining accuracy has been studied in recent research. One of the most promising methods, called Deep Compression, achieved remarkable compression rates. The reduction in size achieved by Deep Compression should also benefit execution time. In this work, that benefit was studied with the objective of reducing the execution time on resource-restricted hardware. Furthermore, the benefit of simpler fixed-point arithmetic was investigated in this context. Based on deeper insights and practical evaluations of Deep Compression, this work gives an assessment of and guidance towards a minimal execution time of Artificial Neural Networks.
End date | 18 December 2019 |
Supervisor | Dr.-Ing. Florian Meyer |