ApproxHadoop: Bringing approximations to MapReduce frameworks

Íñigo Goiri, Ricardo Bianchini, Santosh Nagarakatte, Thu D. Nguyen

Research output: Contribution to journal › Article › peer-review

53 Scopus citations

Abstract

We propose and evaluate a framework for creating and running approximation-enabled MapReduce programs. Specifically, we propose approximation mechanisms that fit naturally into the MapReduce paradigm, including input data sampling, task dropping, and accepting and running a precise and a user-defined approximate version of the MapReduce code. We then show how to leverage statistical theories to compute error bounds for popular classes of MapReduce programs when approximating with input data sampling and/or task dropping. We implement the proposed mechanisms and error bound estimations in a prototype system called ApproxHadoop. Our evaluation uses MapReduce applications from different domains, including data analytics, scientific computing, video encoding, and machine learning. Our results show that ApproxHadoop can significantly reduce application execution time and/or energy consumption when the user is willing to tolerate small errors. For example, ApproxHadoop can reduce runtimes by up to 32X when the user can tolerate an error of 1% with 95% confidence. We conclude that our framework and system can make approximation easily accessible to many application domains using the MapReduce model.
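As an illustration of the error-bound idea described in the abstract (a sketch of the general statistical technique, not ApproxHadoop's actual implementation), the following Python snippet estimates a MapReduce-style sum from a random sample of the input and attaches a normal-approximation confidence interval with a finite-population correction. The function name and parameters are illustrative only:

```python
import math
import random
import statistics

def approximate_sum(data, sample_fraction=0.1, z=1.96):
    """Estimate sum(data) from a simple random sample.

    Returns (estimate, error_bound), where error_bound is a
    normal-approximation half-width at roughly 95% confidence
    (z = 1.96), with a finite-population correction.
    """
    n = len(data)
    k = max(2, int(n * sample_fraction))
    sample = random.sample(data, k)
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)
    estimate = n * mean
    # Standard error of the estimated total; the finite-population
    # correction shrinks the bound as the sample approaches the input.
    se = n * sd / math.sqrt(k) * math.sqrt((n - k) / (n - 1))
    return estimate, z * se

random.seed(0)
data = [random.randint(1, 100) for _ in range(100_000)]
estimate, error_bound = approximate_sum(data)
# sum(data) is about 5.05 million; the estimate lands close to it,
# with error_bound quantifying the expected deviation.
```

Sampling only 10% of the input cuts the work by roughly 10x while the reported bound tells the user how far the estimate is likely to be from the exact answer, which is the trade-off ApproxHadoop exposes: tolerate a small, quantified error in exchange for reduced runtime and energy.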

Original language: English (US)
Pages (from-to): 383-397
Number of pages: 15
Journal: ACM SIGPLAN Notices
Volume: 50
Issue number: 4
DOIs
State: Published - Apr 2015

All Science Journal Classification (ASJC) codes

  • Computer Science (all)

Keywords

  • Approximation
  • Extreme value theory
  • MapReduce
  • Multi-stage sampling

