TY - JOUR
T1 - Efficient Redundancy Techniques for Latency Reduction in Cloud Systems
AU - Joshi, Gauri
AU - Soljanin, Emina
AU - Wornell, Gregory
N1 - Funding Information:
This work was supported in part by NSF under grant CCF-1319828, AFOSR under grant FA9550-11-1-0183, and a Schlumberger Faculty for the Future Fellowship. Our work was presented in part at the 2015 Allerton Conference on Communication, Control, and Computing and the 2015 ACM Sigmetrics Mathematical Modeling and Analysis Workshop. G. Joshi was at MIT at the time of this work. Authors' addresses: G. Joshi, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh PA 15213; email: gaurij@andrew.cmu.edu; E. Soljanin, Rutgers University, 94 Brett Road, Piscataway NJ 08854; email: emina.soljanin@rutgers.edu; G. Wornell, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge MA 02139; email: gww@mit.edu.
Publisher Copyright:
© 2017 ACM.
PY - 2017/3/6
Y1 - 2017/3/6
N2 - In cloud computing systems, assigning a task to multiple servers and waiting for the earliest copy to finish is an effective method to combat the variability in response time of individual servers and reduce latency. But adding redundancy may result in a higher cost of computing resources, as well as an increase in queueing delay due to higher traffic load. This work helps in understanding when and how redundancy gives a cost-efficient reduction in latency. For a general task service time distribution, we compare different redundancy strategies in terms of the number of redundant tasks and the times when they are issued and canceled. We obtain the insight that the log-concavity of the task service time distribution creates a dichotomy in when adding redundancy helps. If the service time distribution is log-convex (i.e., the log of the tail probability is convex), then adding maximum redundancy reduces both latency and cost. If it is log-concave (i.e., the log of the tail probability is concave), then using less redundancy and canceling redundant tasks early is more effective. Using these insights, we design a general redundancy strategy that achieves a good latency-cost trade-off for an arbitrary service time distribution. This work also generalizes and extends some results in the analysis of fork-join queues.
AB - In cloud computing systems, assigning a task to multiple servers and waiting for the earliest copy to finish is an effective method to combat the variability in response time of individual servers and reduce latency. But adding redundancy may result in a higher cost of computing resources, as well as an increase in queueing delay due to higher traffic load. This work helps in understanding when and how redundancy gives a cost-efficient reduction in latency. For a general task service time distribution, we compare different redundancy strategies in terms of the number of redundant tasks and the times when they are issued and canceled. We obtain the insight that the log-concavity of the task service time distribution creates a dichotomy in when adding redundancy helps. If the service time distribution is log-convex (i.e., the log of the tail probability is convex), then adding maximum redundancy reduces both latency and cost. If it is log-concave (i.e., the log of the tail probability is concave), then using less redundancy and canceling redundant tasks early is more effective. Using these insights, we design a general redundancy strategy that achieves a good latency-cost trade-off for an arbitrary service time distribution. This work also generalizes and extends some results in the analysis of fork-join queues.
KW - Performance modeling
KW - latency-cost analysis
UR - http://www.scopus.com/inward/record.url?scp=85046413248&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85046413248&partnerID=8YFLogxK
U2 - 10.1145/3055281
DO - 10.1145/3055281
M3 - Article
AN - SCOPUS:85046413248
SN - 2376-3639
VL - 2
JO - ACM Transactions on Modeling and Performance Evaluation of Computing Systems
JF - ACM Transactions on Modeling and Performance Evaluation of Computing Systems
IS - 2
M1 - 12
ER -
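
Note (appended after the record, not part of the RIS entry): the abstract's central claim, that log-convex service times favor full replication while log-concave service times make it cost-inefficient, can be illustrated with a minimal single-job Monte Carlo sketch. The distributions, parameters, and function names below are illustrative assumptions chosen for this sketch, not taken from the paper, and queueing effects are ignored.

import random

def sample_shifted_exp(shift=1.0, rate=1.0):
    """Log-concave example: a constant delay plus an exponential tail."""
    return shift + random.expovariate(rate)

def sample_hyper_exp(p_slow=0.1, fast_rate=10.0, slow_rate=0.1):
    """Log-convex example: mixture of a fast mode and a rare, very slow mode."""
    rate = slow_rate if random.random() < p_slow else fast_rate
    return random.expovariate(rate)

def simulate(sampler, n_servers=4, replicate=True, n_jobs=100_000):
    """Return (mean latency, mean total server time) for a single job in isolation.

    With replicate=True the job is forked to all n_servers and the earliest
    copy wins; the other copies are canceled at that instant, so each copy
    occupies a server for the winning time. Without replication the job runs
    on one server only.
    """
    total_latency = 0.0
    total_cost = 0.0
    for _ in range(n_jobs):
        if replicate:
            finish = min(sampler() for _ in range(n_servers))
            total_latency += finish
            total_cost += n_servers * finish  # all copies run until cancellation
        else:
            t = sampler()
            total_latency += t
            total_cost += t
    return total_latency / n_jobs, total_cost / n_jobs

if __name__ == "__main__":
    cases = [("log-concave (shifted exponential)", sample_shifted_exp),
             ("log-convex (hyperexponential)", sample_hyper_exp)]
    for name, sampler in cases:
        lat1, cost1 = simulate(sampler, replicate=False)
        latn, costn = simulate(sampler, replicate=True)
        print(f"{name:35s} no redundancy:   latency={lat1:.3f}  cost={cost1:.3f}")
        print(f"{name:35s} full redundancy: latency={latn:.3f}  cost={costn:.3f}")

Under these illustrative parameters, the shifted-exponential (log-concave) case shows replication cutting latency while roughly doubling the server-time cost, whereas the hyperexponential (log-convex) case shows replication reducing both latency and cost, which is the dichotomy the abstract describes.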