Efficient parallelization of tensor network contraction for simulating quantum computation

Cupjin Huang, Fang Zhang, Michael Newman, Xiaotong Ni, Dawei Ding, Junjie Cai, Xun Gao, Tenghui Wang, Feng Wu, Gengyan Zhang, Hsiang Sheng Ku, Zhengxiong Tian, Junyin Wu, Haihong Xu, Huanjun Yu, Bo Yuan, Mario Szegedy, Yaoyun Shi, Hui Hai Zhao, Chunqing Deng, Jianxin Chen

Research output: Contribution to journal › Article › peer-review

43 Scopus citations

Abstract

We develop an algorithmic framework for contracting tensor networks and demonstrate its power by classically simulating quantum computation of sizes previously deemed out of reach. Our main contribution, index slicing, is a method that efficiently parallelizes the contraction by breaking it down into much smaller and identically structured subtasks, which can then be executed in parallel without dependencies. We benchmark our algorithm on a class of random quantum circuits, achieving greater than 10⁵ times acceleration over the original estimate of the simulation cost. We then demonstrate applications of the simulation framework for aiding the development of quantum algorithms and quantum error correction. As tensor networks are widely used in computational science, our simulation framework may find further applications.
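The index-slicing idea described in the abstract can be illustrated on a toy tensor network: fixing a shared index to each of its possible values yields smaller, identically structured contractions that can run independently in parallel, and summing their results recovers the full contraction. The following is a minimal sketch of that decomposition; the tensors and the choice of sliced index are illustrative, not taken from the paper.

```python
import numpy as np

# Toy tensor network of three tensors sharing indices a, k, b:
#   scalar = sum_{a,k,b} A[a,k] * B[k,b] * C[a,b]
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))
B = rng.standard_normal((8, 4))
C = rng.standard_normal((4, 4))

# Full contraction in one shot.
full = np.einsum('ak,kb,ab->', A, B, C)

# Index slicing: fix the shared index k to each of its values.
# Each sliced contraction is a smaller, independent subtask
# (embarrassingly parallel); summing the partials recovers the result.
partials = [np.einsum('a,b,ab->', A[:, k], B[k, :], C) for k in range(8)]
sliced = sum(partials)

assert np.allclose(full, sliced)
```

In practice one slices over indices chosen to minimize the overhead of the decomposition, since each sliced subtask redundantly recomputes parts of the network; the paper's contribution is doing this efficiently at the scale of large quantum circuits.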

Original language: English (US)
Pages (from-to): 578-587
Number of pages: 10
Journal: Nature Computational Science
Volume: 1
Issue number: 9
DOIs
State: Published - Sep 2021
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Computer Science (miscellaneous)
  • Computer Science Applications
  • Computer Networks and Communications
