TY - GEN
T1 - Fake Gradient
T2 - 29th ACM International Conference on Multimedia, MM 2021
AU - Feng, Xianglong
AU - Xie, Yi
AU - Ye, Mengmei
AU - Tang, Zhongze
AU - Yuan, Bo
AU - Wei, Sheng
N1 - Funding Information:
We would like to thank the anonymous reviewers for their constructive feedback. This work was partially supported by the National Science Foundation under award 1912593 and the Air Force Research Lab (AFRL) under Grant No. FA87501820058.
Publisher Copyright:
© 2021 ACM.
PY - 2021/10/17
Y1 - 2021/10/17
AB - Deep neural networks (DNNs) have demonstrated phenomenal success in image classification applications and are widely adopted in multimedia Internet of Things (IoT) use cases, such as smart home systems. To compensate for the limited resources on IoT devices, the computation-intensive image classification tasks are often offloaded to remote cloud services. However, offloading-based image classification poses significant security and privacy risks to the user data and the DNN model, enabling effective adversarial attacks that compromise the classification accuracy. Existing defense methods either impact the original functionality or incur high computation or model re-training overhead. In this paper, we develop a novel defense approach, namely Fake Gradient, to protect the privacy of the data and defend against adversarial attacks based on encryption of the output. Fake Gradient hides the real output information by generating fake classes and further misleads the adversarial perturbation generation based on fake gradient knowledge, which helps maintain high classification accuracy on the perturbed data. Our evaluations using ImageNet and 7 popular DNN models indicate that Fake Gradient is effective in protecting privacy and defending against adversarial attacks targeting image classification applications.
KW - adversarial attack
KW - deep neural network
KW - image classification
UR - http://www.scopus.com/inward/record.url?scp=85119382153&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85119382153&partnerID=8YFLogxK
U2 - 10.1145/3474085.3475685
DO - 10.1145/3474085.3475685
M3 - Conference contribution
AN - SCOPUS:85119382153
T3 - MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia
SP - 5510
EP - 5518
BT - MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
Y2 - 20 October 2021 through 24 October 2021
ER -