Deep Learning Backdoors

Shaofeng Li, Shiqing Ma, Minhui Xue, Benjamin Zi Hao Zhao

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

In this chapter, we give a comprehensive survey of backdoor attacks, their mitigation, and the challenges they raise, and we propose several open problems. We first introduce an attack vector that derives from the Deep Neural Network (DNN) model itself. DNN models are trained on massive datasets that may be poisoned by attackers. Unlike traditional poisoning attacks, which distort the decision boundary, backdoor attacks create a “shortcut” in the model’s decision boundary. This shortcut can only be activated by a trigger known only to the attacker, while the model performs well on benign inputs that lack the trigger. We then present mitigation techniques spanning the machine learning pipeline, from the frontend to the backend. We conclude with avenues for future research. We hope to raise awareness of the severity of emerging backdoor attacks in DNNs and to provide timely solutions for fighting them.
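To make the poisoning mechanism in the abstract concrete, the following is a minimal Python sketch, not code from the chapter, of trigger-based data poisoning in the style of the well-known BadNets attack: a fraction of training images is stamped with a fixed patch and relabeled to the attacker's target class, so that a model trained on the data associates the patch with that class while behaving normally on clean inputs. The function names, the 5% poison rate, and the 3x3 white-patch trigger are illustrative assumptions.

    import numpy as np

    def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
        """Stamp a trigger patch on a random fraction of training images
        and relabel them to the attacker's target class.

        images: float array of shape (N, H, W), pixel values in [0, 1]
        labels: int array of shape (N,)
        Note: the patch trigger and 5% rate are illustrative assumptions.
        """
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        n_poison = int(len(images) * poison_rate)
        idx = rng.choice(len(images), size=n_poison, replace=False)
        # A 3x3 white patch in the bottom-right corner serves as the trigger.
        images[idx, -3:, -3:] = 1.0
        labels[idx] = target_class
        return images, labels

    def apply_trigger(image):
        """At inference time, adding the same patch activates the
        backdoor; inputs without the patch are classified normally."""
        image = image.copy()
        image[-3:, -3:] = 1.0
        return image

    # Toy usage: poison 5% of a random "dataset" toward class 7.
    X = np.random.rand(100, 28, 28)
    y = np.random.randint(0, 10, size=100)
    X_poisoned, y_poisoned = poison_dataset(X, y, target_class=7)

This is the “shortcut” intuition: the trigger, not the image content, becomes a sufficient feature for the target class, which is why accuracy on benign inputs remains largely unaffected.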

Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 313-334
Number of pages: 22
DOIs
State: Published - 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13049 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)
