Abstract
In the emerging artificial intelligence (AI) era, efficient hardware accelerator design for deep neural networks (DNNs) is very important to enable real-time, energy-efficient DNN model deployment. To this end, various DNN model compression approaches and the corresponding hardware architectures have been intensively investigated. Recently, PermDNN, a permuted diagonal structure-imposing model compression approach, was proposed with promising classification performance and hardware performance. However, the existing PermDNN hardware architecture is specifically designed for fully-connected (FC) layer-contained DNN models; its support for convolutional (CONV) layers is missing. To fill this gap, this article proposes PermCNN, an energy-efficient hardware architecture for permuted diagonal structured convolutional neural networks (CNNs). By fully utilizing the strong structured sparsity in the trained models as well as dedicatedly leveraging the dynamic activation sparsity, PermCNN delivers very high hardware performance for inference tasks on CNN models. A design example with 28 nm CMOS technology shows that, compared to the state-of-the-art CNN accelerator, PermCNN achieves 3.74× and 3.11× improvement in area and energy efficiency, respectively, on the AlexNet workload, and 17.49× and 14.22× improvement in area and energy efficiency, respectively, on the VGG model. After including the energy consumption incurred by DRAM access, PermCNN achieves 2.60× and 9.62× overall energy consumption improvement on the AlexNet and VGG workloads, respectively.
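To illustrate the structured sparsity the abstract refers to, the sketch below builds a weight matrix in the permuted-diagonal style: the matrix is tiled into p×p blocks, and each block keeps nonzeros only on one (cyclically shifted) diagonal, so exactly 1/p of the entries survive. This is a minimal, hypothetical NumPy illustration of the sparsity pattern, not the authors' training procedure or hardware mapping; the function names and the per-block random offset are assumptions made here for clarity.

```python
import numpy as np

def permuted_diagonal_block(p, k, rng):
    """A p x p block whose nonzeros lie only on the diagonal shifted by k (mod p)."""
    block = np.zeros((p, p))
    for i in range(p):
        # Hypothetical nonzero weight value; only the position pattern matters here.
        block[i, (i + k) % p] = rng.uniform(0.1, 1.0)
    return block

def build_permdiag_weight(rows, cols, p, seed=0):
    """Tile a (rows x cols) weight matrix with permuted-diagonal p x p blocks.

    Each block stores only p values instead of p*p, so the whole matrix keeps
    exactly 1/p of its entries -- a compression ratio of p, with a pattern
    regular enough for hardware to exploit (no irregular index lookups).
    """
    assert rows % p == 0 and cols % p == 0
    rng = np.random.default_rng(seed)
    W = np.zeros((rows, cols))
    for bi in range(rows // p):
        for bj in range(cols // p):
            k = int(rng.integers(p))  # per-block diagonal offset (assumed random here)
            W[bi * p:(bi + 1) * p, bj * p:(bj + 1) * p] = permuted_diagonal_block(p, k, rng)
    return W

W = build_permdiag_weight(8, 8, p=4)
print(np.count_nonzero(W), W.size)  # 16 of 64 entries are nonzero: ratio 1/p = 1/4
```

Because every block has exactly one nonzero per row and per column at a position determined by a single offset k, a hardware accelerator can skip the zero multiplications by construction rather than by runtime detection, which is the static half of the sparsity PermCNN exploits (dynamic activation sparsity being the other half).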
| Original language | English (US) |
|---|---|
| Article number | 9040601 |
| Pages (from-to) | 163-173 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Computers |
| Volume | 70 |
| Issue number | 2 |
| DOIs | |
| State | Published - Feb 1 2021 |
All Science Journal Classification (ASJC) codes
- Software
- Theoretical Computer Science
- Hardware and Architecture
- Computational Theory and Mathematics
Keywords
- Deep learning
- convolutional neural network
- hardware accelerator
- model compression