In this chapter, we give a comprehensive survey of backdoor attacks, their mitigation, and the remaining challenges, and we propose several open problems. We first introduce an attack vector that derives from the Deep Neural Network (DNN) model itself. DNN models are trained on massive datasets that may be poisoned by attackers. Unlike traditional poisoning attacks, which degrade the model’s decision boundary overall, backdoor attacks create a “shortcut” in the decision boundary. This “shortcut” can be activated only by a trigger known to the attacker, while the model performs well on benign inputs without the trigger. We then present several mitigation techniques spanning the machine learning pipeline, from the frontend to the backend. We finally outline avenues for future research. We hope to raise awareness of the severity of emerging backdoor attacks on DNNs and to provide timely solutions to fight against them.
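To make the attack vector concrete, the following minimal sketch illustrates the classic dirty-label poisoning step behind many backdoor attacks: a small pixel patch (the trigger) is stamped onto a fraction of the training images, which are then relabeled to an attacker-chosen target class. The function names, the 3×3 white patch, and the 10% poisoning rate are illustrative assumptions, not specifics from this chapter.

```python
import numpy as np

def stamp_trigger(image, patch_size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner (illustrative choice)."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Poison a fraction of the training set: stamp the trigger on the chosen
    samples and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx

# Toy "dataset" of 100 blank 28x28 grayscale images, all labeled class 0.
imgs = np.zeros((100, 28, 28), dtype=np.float32)
labs = np.zeros(100, dtype=np.int64)
p_imgs, p_labs, idx = poison_dataset(imgs, labs, target_label=7, rate=0.1)
```

A model trained on such data learns the intended task on clean inputs but associates the trigger pattern with the target class, so any test input bearing the patch is misclassified as class 7.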