Parallel slowdown

Figure: Runtime (blue) and speedup (red) of a real-world program with sub-optimal parallelization. The dashed lines indicate optimal parallelization: a linear increase in speedup and a corresponding decrease in runtime. Note that eventually the runtime increases as more processors are added (and the speedup likewise decreases). This is parallel slowdown.

Parallel slowdown is a phenomenon in parallel computing in which parallelizing a program beyond a certain point causes it to run slower (take more time to run to completion).[1]

Parallel slowdown is typically the result of a communications bottleneck. As more processor nodes are added, each node spends progressively more time on communication and less on useful processing. At some point, the communications overhead created by adding another processing node surpasses the increased processing power that node provides, and parallel slowdown occurs.
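This trade-off can be sketched with a simple cost model (the workload and per-node communication constants below are illustrative assumptions, not measurements from any real system): if a fixed workload W is split evenly across n processors and each added node contributes a linear communication cost c·n, the modeled runtime T(n) = W/n + c·n first falls, then rises once communication dominates.

```python
# Illustrative cost model of parallel slowdown (assumed constants, not measured).
# T(n) = W/n + c*n : compute time shrinks with n, communication overhead grows.

def runtime(n, work=1000.0, comm=1.0):
    """Modeled runtime on n processors: ideal division of work plus
    a linearly growing communication term."""
    return work / n + comm * n

def speedup(n, work=1000.0, comm=1.0):
    """Speedup relative to running on a single processor."""
    return runtime(1, work, comm) / runtime(n, work, comm)

if __name__ == "__main__":
    # Runtime reaches its minimum near n = sqrt(work/comm) (~32 here),
    # then increases again: the slowdown regime.
    for n in (1, 4, 16, 32, 64, 128):
        print(f"n={n:3d}  runtime={runtime(n):8.2f}  speedup={speedup(n):6.2f}")
```

With these assumed constants the modeled runtime bottoms out around 32 processors; beyond that, adding nodes makes the modeled program slower, matching the behavior the figure describes.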

Parallel slowdown occurs when the algorithm requires significant communication, particularly of intermediate results. Some problems, known as embarrassingly parallel problems, do not require such communication, and thus are not affected by slowdown.
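The embarrassingly parallel case can be sketched as follows (an illustrative example, not taken from the article): each chunk of work is processed with no data from any other chunk, so the only shared step is a single final reduction. A thread pool is used here purely for simplicity; CPython's GIL limits CPU-bound speedup, and real deployments would use processes or MPI, but the communication structure is the point.

```python
# Sketch of an embarrassingly parallel workload (hypothetical example: summing
# squares). Workers exchange no intermediate results, so adding workers adds
# essentially no communication cost beyond the final reduction.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Purely local work: nothing from other chunks is needed.
    return sum(x * x for x in chunk)

def sum_squares_parallel(data, workers=4):
    # Split the input into independent chunks, one stream per worker.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(process_chunk, chunks)
    return sum(partials)  # the single point of communication
```

Because the per-chunk work needs no intermediate exchange, such problems scale without the communications bottleneck that drives parallel slowdown.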


References

  1. ^ Kukanov, Alexey (2008-03-04). "Why a simple test can get parallel slowdown". Retrieved 2015-02-15.

See also

  • The Mythical Man-Month, an analogous situation for a team of programmers, in which productivity is limited by communication between people.