EzLDA: Efficient and Scalable LDA on GPUs

Shilong Wang, Hang Liu, Anil Gaihre, Hengyong Yu

Research output: Contribution to journal › Article › peer-review

Abstract

Latent Dirichlet Allocation (LDA) is a statistical approach for topic modeling with a wide range of applications. Attracted by the exceptional computing and memory throughput of GPUs, this work introduces ezLDA, which achieves efficient and scalable LDA training on GPUs through three contributions. First, ezLDA introduces a three-branch sampling method that exploits the convergence heterogeneity of tokens to reduce redundant sampling work. Second, to enable sparsity-aware formats for both D and W on GPUs with fast sampling and updating, we introduce a hybrid format for W along with a corresponding token partitioning for T and inverted index designs. Third, we design a hierarchical workload balancing solution to address the extremely skewed workload imbalance on GPUs and to scale ezLDA across multiple GPUs. Taken together, ezLDA achieves superior performance over state-of-the-art attempts with lower memory consumption.
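
For context, the sketch below is a minimal, CPU-side collapsed Gibbs sampler for LDA in Python. It illustrates the per-token sampling step and the document-topic and word-topic count matrices (often denoted D and W) that GPU LDA systems such as ezLDA accelerate; it is an illustrative baseline only, not ezLDA's three-branch GPU sampler, and all names (gibbs_lda, nd, nw, nk) are hypothetical.

import numpy as np

def gibbs_lda(docs, V, K, alpha=0.1, beta=0.01, iters=100, seed=0):
    """Baseline collapsed Gibbs sampling for LDA (illustrative, not ezLDA).
    docs: list of documents, each a list of word ids in [0, V); K: number of topics."""
    rng = np.random.default_rng(seed)
    D = len(docs)
    nd = np.zeros((D, K))                      # document-topic counts ("D" matrix)
    nw = np.zeros((K, V))                      # topic-word counts ("W" matrix)
    nk = np.zeros(K)                           # total tokens per topic
    z = [rng.integers(K, size=len(doc)) for doc in docs]
    for d, doc in enumerate(docs):             # initialize counts from random assignments
        for i, w in enumerate(doc):
            k = z[d][i]
            nd[d, k] += 1; nw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                nd[d, k] -= 1; nw[k, w] -= 1; nk[k] -= 1   # remove current token
                # full conditional p(z = k | everything else)
                p = (nd[d] + alpha) * (nw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())           # resample topic
                z[d][i] = k
                nd[d, k] += 1; nw[k, w] += 1; nk[k] += 1   # add token back
    return nd, nw

Every token in every document is resampled each iteration; techniques like ezLDA's three-branch sampling aim to skip or cheapen this work for tokens whose topic assignments have effectively converged, and sparse formats for nd and nw reduce memory traffic on the GPU.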

Original language: English (US)
Pages (from-to): 100165-100179
Number of pages: 15
Journal: IEEE Access
Volume: 11
DOIs
State: Published - 2023
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • General Engineering
  • General Materials Science
  • General Computer Science

Keywords

  • Bayes methods
  • GPU
  • LDA
  • high performance computing
  • latent Dirichlet allocation
  • machine learning
  • parallel algorithms
  • parallel programming
  • unsupervised learning
