Learning everywhere: Pervasive machine learning for effective high-performance computation

Geoffrey Fox, James Glazier, J. C.S. Kadupitiya, Vikram Jadhao, Minje Kim, Judy Qiu, James P. Sluka, Endre Somogy, Madhav Marathe, Abhijin Adiga, Jiangzhuo Chen, Oliver Beckstein, Shantenu Jha

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The convergence of HPC and data-intensive methodologies provides a promising approach to major performance improvements. This paper provides a general description of the interaction between traditional HPC and ML approaches and motivates the 'Learning Everywhere' paradigm for HPC. We introduce the concept of 'effective performance' that one can achieve by combining learning methodologies with simulation-based approaches, and distinguish it from traditional performance as measured by benchmark scores. To support the promise of integrating HPC and learning methods, this paper examines specific examples and opportunities across a series of domains. It concludes with a series of open challenges in software systems, methods, and infrastructure that the Learning Everywhere paradigm presents.
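To make the notion of 'effective performance' concrete, below is a minimal, hypothetical sketch (not taken from the paper): a cheap learned surrogate is fitted to a toy "expensive" simulation, a batch of queries is answered both ways, and the effective speedup is reported as the ratio of the two times-to-solution on the same hardware. The simulation, the polynomial surrogate, and the timing harness are all illustrative assumptions, standing in for the ML surrogates and HPC codes discussed in the paper.

```python
# Minimal, hypothetical sketch: a learned surrogate standing in for an expensive
# simulation. Nothing here comes from the paper; the toy solver, the polynomial
# surrogate, and the timing harness are assumptions used only to illustrate
# "effective performance" versus benchmark performance.
import math
import time

import numpy as np

def expensive_simulation(x: float, steps: int = 20_000) -> float:
    """Toy stand-in for a costly solver: crude Riemann sum of sin(t) over [0, x]."""
    h = x / steps
    return sum(math.sin(i * h) * h for i in range(steps))

# 1. Run the "simulation" at a handful of training inputs.
train_x = np.linspace(0.1, 3.0, 20)
train_y = np.array([expensive_simulation(x) for x in train_x])

# 2. Fit a cheap surrogate (a low-order polynomial here; an ML regressor in practice).
surrogate = np.poly1d(np.polyfit(train_x, train_y, deg=6))

# 3. Answer a batch of new queries both ways and compare time-to-solution.
rng = np.random.default_rng(0)
queries = rng.uniform(0.1, 3.0, size=200)

t0 = time.perf_counter()
exact = np.array([expensive_simulation(x) for x in queries])
t_simulation = time.perf_counter() - t0

t0 = time.perf_counter()
approx = surrogate(queries)  # vectorized: one cheap evaluation per query
t_surrogate = time.perf_counter() - t0

rel_err = np.max(np.abs((approx - exact) / exact))
print(f"max relative error of surrogate: {rel_err:.1e}")
print(f"effective speedup on the same hardware: {t_simulation / t_surrogate:.0f}x")
```

In these terms, the benchmark performance of the machine is unchanged; the effective performance (science delivered per unit time) grows with the surrogate's speedup, discounted by whatever accuracy is lost.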

Original language: English (US)
Title of host publication: Proceedings - 2019 IEEE 33rd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 422-429
Number of pages: 8
ISBN (Electronic): 9781728135106
DOI: 10.1109/IPDPSW.2019.00081
State: Published - May 1, 2019
Event: 33rd IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2019 - Rio de Janeiro, Brazil
Duration: May 20, 2019 – May 24, 2019

Publication series

Name: Proceedings - 2019 IEEE 33rd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2019

Conference

Conference: 33rd IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2019
Country: Brazil
City: Rio de Janeiro
Period: 5/20/19 – 5/24/19


All Science Journal Classification (ASJC) codes

  • Information Systems and Management
  • Artificial Intelligence
  • Computer Networks and Communications
  • Hardware and Architecture
  • Control and Optimization

Keywords

  • Effective Performance
  • Machine learning driven HPC

Cite this

Fox, G., Glazier, J., Kadupitiya, J. C. S., Jadhao, V., Kim, M., Qiu, J., ... Jha, S. (2019). Learning everywhere: Pervasive machine learning for effective high-performance computation. In Proceedings - 2019 IEEE 33rd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2019 (pp. 422-429). [8778333] (Proceedings - 2019 IEEE 33rd International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2019). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IPDPSW.2019.00081