Concurrency (computer science)

The "Dining Philosophers", a classic problem involving concurrency and shared resources

In computer science, concurrency is the ability of different parts or units of a program, algorithm, or problem to be executed out of order or in partial order without affecting the outcome. This allows concurrent units to be executed in parallel, which can significantly improve the overall speed of execution on multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability of a program, algorithm, or problem into order-independent or partially ordered components or units of computation.[1]
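As an illustration, consider summing a list of numbers: the two halves of the list are order-independent units of computation, so they may run in either order, or at the same time, and the result is unchanged. The following minimal sketch makes this concrete (Go is used here purely as an example language, and the sum helper is a hypothetical name chosen for this sketch):

    package main

    import (
        "fmt"
        "sync"
    )

    // sum adds up a slice of integers.
    func sum(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        data := []int{3, 1, 4, 1, 5, 9, 2, 6}
        var left, right int
        var wg sync.WaitGroup
        wg.Add(2)
        // Each half is an independent unit of computation; the order in
        // which the two goroutines run cannot affect the final result.
        go func() { defer wg.Done(); left = sum(data[:4]) }()
        go func() { defer wg.Done(); right = sum(data[4:]) }()
        wg.Wait()
        fmt.Println(left + right) // always prints 31
    }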

Parallelism vs concurrency

Note that in computer science, parallelism and concurrency are distinct concepts: a parallel program uses multiple CPU cores, with each core performing a task independently, whereas concurrency enables a program to make progress on multiple tasks even on a single CPU core, which switches between tasks (i.e., threads) without necessarily completing each one before moving on. A program can exhibit parallelism, concurrency, both, or neither.[2]
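The distinction can be seen in a runnable sketch. In the hypothetical Go program below, the scheduler is restricted to a single core with runtime.GOMAXPROCS(1), so the two tasks are concurrent (their steps interleave) but not parallel (nothing executes simultaneously):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        // Limit the runtime to one core: the goroutines below are
        // concurrent but not parallel; the scheduler interleaves them.
        runtime.GOMAXPROCS(1)

        var wg sync.WaitGroup
        for _, name := range []string{"A", "B"} {
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                for step := 1; step <= 3; step++ {
                    fmt.Printf("task %s, step %d\n", name, step)
                    runtime.Gosched() // yield so the other task can run
                }
            }(name)
        }
        wg.Wait()
    }

Removing the GOMAXPROCS call on a multi-core machine would permit the same program to also run in parallel, illustrating that the two properties are independent.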

A number of mathematical models have been developed for general concurrent computation, including Petri nets, process calculi, the parallel random-access machine model, the actor model, and the Reo Coordination Language.
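As one concrete example of these formalisms, the actor model describes computation as independent actors that hold private state and interact only by exchanging messages. The sketch below is an informal, minimal rendering of that idea (again in Go, with a channel standing in for an actor's mailbox); it is illustrative, not a faithful encoding of the formal model:

    package main

    import "fmt"

    // message asks the actor to adjust its counter and reply with the
    // new balance on the done channel.
    type message struct {
        amount int
        done   chan int
    }

    // counterActor owns its state privately and changes it only in
    // response to messages received from its mailbox.
    func counterActor(mailbox <-chan message) {
        state := 0 // never shared directly with other goroutines
        for msg := range mailbox {
            state += msg.amount
            msg.done <- state
        }
    }

    func main() {
        mailbox := make(chan message)
        go counterActor(mailbox)

        for _, n := range []int{5, 10, -3} {
            reply := make(chan int)
            mailbox <- message{amount: n, done: reply}
            fmt.Println("balance:", <-reply) // prints 5, then 15, then 12
        }
    }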

  1. ^ Lamport, Leslie (July 1978). "Time, Clocks, and the Ordering of Events in a Distributed System" (PDF). Communications of the ACM. 21 (7): 558–565. doi:10.1145/359545.359563. S2CID 215822405. Retrieved 4 February 2016.
  2. ^ Marlow, Simon (2013). Parallel and Concurrent Programming in Haskell. O'Reilly Media. ISBN 9781449335922.
