Optimizing Deep Learning Inference via Global Analysis and Tensor Expression

Nov 22, 2021

Ning Lin, Xiaoming Chen, Chunwei Xia, Jing Ye, Xiaowei Li
Abstract
Although deep neural networks (DNNs) are widely deployed, DNN models running on ASIC- or FPGA-based accelerators still lack effective and efficient protection. Once a DNN model is stolen, the attacker not only infringes the intellectual property of the model provider but can also cause security issues. Existing parameter-encryption methods incur substantial power overhead, which makes them difficult to apply to resource-constrained edge devices. This paper proposes ChaoPIM, an effective and efficient framework that protects DNN models by combining chaotic encryption with Processing-In-Memory (PIM) technology. Detailed experimental results show that the framework effectively prevents attackers from using stolen models, whose accuracy drops to a very low level. Compared with the Cortex-A53, Kryo-280, and Intel i5-8265U CPUs and the TITAN V GPU, ChaoPIM achieves considerable performance improvements on various DNN models.
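The abstract does not spell out the chaotic encryption scheme, so the sketch below is only a rough illustration of the general idea: XOR-ing 8-bit quantized weights with a keystream produced by a chaotic map, so that a model copied without the key (here the map's initial state and parameter) is useless. The logistic map, the key values, and the quantization step are assumptions for demonstration, not ChaoPIM's actual design.

```python
# Illustrative sketch only: chaotic-keystream encryption of quantized DNN weights.
# The logistic map and the (x0, r) key are assumptions, not the paper's scheme.
import numpy as np

def logistic_keystream(x0: float, r: float, n: int) -> np.ndarray:
    """Generate n keystream bytes by iterating the logistic map x <- r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)            # chaotic iteration
        out[i] = int(x * 256) & 0xFF     # fold the map state into one byte
    return out

def xor_weights(q_weights: np.ndarray, x0: float, r: float = 3.99) -> np.ndarray:
    """XOR 8-bit quantized weights with the chaotic keystream.
    Applying it twice with the same (x0, r) key recovers the original weights."""
    ks = logistic_keystream(x0, r, q_weights.size).reshape(q_weights.shape)
    return (q_weights.view(np.uint8) ^ ks).view(np.int8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = rng.integers(-128, 128, size=(4, 4), dtype=np.int8)   # toy quantized layer
    enc = xor_weights(q, x0=0.3141)
    print(np.array_equal(xor_weights(enc, x0=0.3141), q))     # True: correct key decrypts
    print(np.array_equal(xor_weights(enc, x0=0.3142), q))     # False: tiny key change fails
```

The last line hints at why a chaotic map is attractive for this purpose: its sensitivity to the initial state means even a near-miss key produces a completely different keystream, so a stolen model cannot be recovered by guessing close-by keys.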
Type
Publication
In *2021 IEEE 30th Asian Test Symposium*