Abstract
Master-worker distributed computing systems use task replication to mitigate the effect of slow workers on job compute time. The master node groups tasks into batches and assigns each batch to one or more workers. We first assume that the batches do not overlap. Using majorization theory, we show that a balanced replication of batches minimizes the average job compute time for a general class of service time distributions. We then show that the balanced assignment of non-overlapping batches achieves a lower average job compute time than the overlapping schemes proposed in the literature. Next, we derive the optimum redundancy level as a function of the task service time distribution, and show that the redundancy level that minimizes the average job compute time may not coincide with the one that maximizes job compute time predictability; there is thus a trade-off in optimizing the two metrics. Through experiments on Google cluster traces, we observe that redundancy can reduce the job compute time by an order of magnitude, and that the optimum redundancy level depends on the task service time distribution.
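The batching-and-replication scheme summarized above is easy to explore numerically. Below is a minimal Monte Carlo sketch (not the authors' code) of balanced, non-overlapping batch replication: with a given number of workers and redundancy level r, the tasks are split into equal batches, each batch runs on r workers, and the job finishes when the fastest replica of every batch is done. The function name, worker/task counts, and the two illustrative service-time distributions (shifted exponential vs. Pareto) are assumptions for demonstration, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_job_time(num_workers, num_tasks, r, sample_service, reps=10_000):
    """Average job compute time under balanced, non-overlapping batch
    replication (hypothetical model): tasks are split into num_workers // r
    equal batches, each batch is replicated on r workers, and the job
    completes when the fastest replica of every batch finishes."""
    num_batches = num_workers // r
    batch_size = num_tasks // num_batches  # balanced split; assumes divisibility
    # service[rep, batch, replica, task]: i.i.d. task service times
    t = sample_service((reps, num_batches, r, batch_size))
    batch_time = t.sum(axis=3).min(axis=2)  # fastest replica wins each batch
    return batch_time.max(axis=1).mean()    # slowest batch completes the job

# Illustrative service-time models (parameters are assumptions):
light_tail = lambda size: 1.0 + rng.exponential(1.0, size)  # shifted exponential
heavy_tail = lambda size: 1.0 + rng.pareto(1.5, size)       # Pareto tail

for r in (1, 2, 4, 8):
    print(f"r={r}: light-tail {avg_job_time(16, 64, r, light_tail):6.2f}   "
          f"heavy-tail {avg_job_time(16, 64, r, heavy_tail):6.2f}")
```

Under the light-tailed model, adding redundancy mostly wastes capacity (each worker's batch grows with r), whereas under the heavy-tailed model the minimum over replicas suppresses stragglers; sweeping r this way illustrates why the optimum redundancy level depends on the service time distribution.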
| Original language | English (US) |
| --- | --- |
| Article number | 9385946 |
| Pages (from-to) | 1467-1476 |
| Number of pages | 10 |
| Journal | IEEE/ACM Transactions on Networking |
| Volume | 29 |
| Issue number | 4 |
| DOIs | |
| State | Published - Aug 2021 |
All Science Journal Classification (ASJC) codes
- Software
- Computer Science Applications
- Computer Networks and Communications
- Electrical and Electronic Engineering
Keywords
- Redundancy
- coefficient of variation
- distributed computing
- distributed systems
- latency
- replication