TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems

Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, Damith C. Ranasinghe

Research output: Contribution to journal › Article › peer-review


Deep neural networks (DNNs), regardless of their impressive performance, are vulnerable to attacks from adversarial inputs and, more recently, to Trojans that misguide or hijack the model's decision. We expose the existence of an intriguing class of <italic>spatially bounded</italic>, physically realizable adversarial examples—<italic>Universal</italic> NaTuralistic adversarial paTches—which we call TnTs, by exploring the superset of the spatially bounded adversarial example space and the natural input space within generative adversarial networks. An adversary can now arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, highly effective—achieving high attack success rates—and universal. A TnT is <italic>universal</italic> because any input image captured with a TnT in the scene will either: i) misguide a network (untargeted attack); or ii) force the network to make a malicious decision (targeted attack). Interestingly, an adversarial patch attacker now has the potential to exert a greater level of control—the ability to choose a location-independent, natural-looking patch as a trigger, in contrast to being constrained to noisy perturbations—an ability thus far shown to be possible only with Trojan attack methods that must interfere with the model-building process to embed a backdoor, at the risk of discovery; yet the patch remains <italic>deployable in the physical world</italic>. Through extensive experiments on a <italic>large-scale visual classification task</italic>, ImageNet, with evaluations across its <italic>entire validation</italic> set of 50,000 images, we demonstrate the realistic threat from TnTs and the robustness of the attack. We show a generalization of the attack to create patches achieving <italic>higher</italic> attack success rates than existing state-of-the-art methods.
Our results show the generalizability of the attack to different visual classification tasks (CIFAR-10, GTSRB, PubFig) and multiple state-of-the-art deep neural networks such as <italic>WideResnet50</italic>, <italic>Inception-V3</italic>, and <italic>VGG-16</italic>. We demonstrate physical deployments in multiple videos at
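The abstract itself contains no code; as a minimal illustrative sketch (not the authors' method), the two success criteria it defines—untargeted (the prediction differs from the true label) and targeted (the prediction equals the attacker's chosen class)—can be expressed as a simple evaluation loop over a classifier treated as a black-box callable. The `apply_patch` placement and the dummy `model` interface here are assumptions for illustration only.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overlay a spatially bounded patch onto an (H, W, C) image copy."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

def attack_success_rate(model, images, labels, patch, top=0, left=0, target=None):
    """Fraction of inputs the patch misguides.

    Untargeted attack: success when the prediction differs from the true label.
    Targeted attack: success when the prediction equals the attacker's target.
    `model` is any callable mapping an image array to a predicted class id.
    """
    hits = 0
    for img, y in zip(images, labels):
        pred = model(apply_patch(img, patch, top, left))
        hits += (pred != y) if target is None else (pred == target)
    return hits / len(images)
```

A patch is <italic>universal</italic> in this framing when a single fixed `patch` yields a high success rate across the whole evaluation set (e.g., the 50,000 ImageNet validation images), regardless of scene content or patch location.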

Original language: English (US)
Pages (from-to): 1
Number of pages: 1
Journal: IEEE Transactions on Information Forensics and Security
State: Accepted/In press - 2022
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Safety, Risk, Reliability and Quality
  • Computer Networks and Communications


Keywords

  • Deep learning
  • Generative adversarial networks
  • Neural networks
  • Perturbation methods
  • Task analysis
  • Training
  • Trojan horses


