Concurrency (computer science)

The "Dining Philosophers", a classic problem involving concurrency and shared resources

In computer science, concurrency is the ability of different parts or units of a program, algorithm, or problem to be executed out of order or in partial order without affecting the outcome. This allows for parallel execution of the concurrent units, which can significantly improve the overall speed of execution on multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability of a program, algorithm, or problem into order-independent or partially ordered components or units of computation.[1]
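
The point can be made concrete with a minimal sketch, here in Go (chosen only because the cited talks below use it); the data, chunk size, and names such as total are illustrative assumptions, not part of any standard definition. Each chunked partial sum is an order-independent unit: whichever order the goroutines happen to run in, the combined result is the same.

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    func main() {
        data := []int64{1, 2, 3, 4, 5, 6, 7, 8} // illustrative input
        var total int64
        var wg sync.WaitGroup

        // Each chunk's sum is an order-independent unit of computation.
        for i := 0; i < len(data); i += 2 {
            wg.Add(1)
            go func(chunk []int64) {
                defer wg.Done()
                var sum int64
                for _, v := range chunk {
                    sum += v
                }
                atomic.AddInt64(&total, sum) // commutative combining step
            }(data[i : i+2])
        }
        wg.Wait()
        fmt.Println(total) // always 36, regardless of scheduling order
    }

Because addition commutes, the scheduler's interleaving cannot change the outcome, which is exactly the order-independence the definition above requires.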

According to Rob Pike, concurrency is the composition of independently executing computations,[2] and concurrency is not parallelism: concurrency is about dealing with lots of things at once, whereas parallelism is about doing lots of things at once. Concurrency is about structure; parallelism is about execution. Concurrency provides a way to structure a solution to a problem that may (but need not) be parallelizable.[3]
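
One way to see the structure/execution distinction is a small channel pipeline in the style of the cited Go talks; the stage names generate and square are assumptions of this sketch. The program is concurrent by construction, three independently executing computations composed with channels, whether or not the runtime executes the stages in parallel on multiple cores.

    package main

    import "fmt"

    // generate emits the given numbers on a channel it owns,
    // closing the channel when done.
    func generate(nums ...int) <-chan int {
        out := make(chan int)
        go func() {
            defer close(out)
            for _, n := range nums {
                out <- n
            }
        }()
        return out
    }

    // square reads from one channel and emits squared values on another,
    // composing with generate to form a two-stage pipeline.
    func square(in <-chan int) <-chan int {
        out := make(chan int)
        go func() {
            defer close(out)
            for n := range in {
                out <- n * n
            }
        }()
        return out
    }

    func main() {
        for v := range square(generate(1, 2, 3)) {
            fmt.Println(v) // 1, 4, 9
        }
    }

On a single core the stages merely interleave; on several cores they may run simultaneously. The program's concurrent structure is identical in both cases, which is Pike's point.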

A number of mathematical models have been developed for general concurrent computation, including Petri nets, process calculi, the parallel random-access machine model, the actor model, and the Reo Coordination Language.
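
These formalisms are language-independent, but the actor model's core idea, private state changed only in response to messages processed one at a time, can be sketched in a few lines of Go; the counterActor name and the inc message type here are hypothetical, for illustration only.

    package main

    import "fmt"

    // inc is a hypothetical message type: a request to increment,
    // carrying a channel on which the actor sends its reply.
    type inc struct{ reply chan int }

    // counterActor owns its count exclusively and handles messages
    // from its mailbox sequentially, as in the actor model.
    func counterActor(mailbox <-chan inc) {
        count := 0
        for msg := range mailbox {
            count++
            msg.reply <- count
        }
    }

    func main() {
        mailbox := make(chan inc)
        go counterActor(mailbox)

        reply := make(chan int)
        for i := 0; i < 3; i++ {
            mailbox <- inc{reply}
            fmt.Println(<-reply) // prints 1, 2, 3
        }
    }

Because only the actor goroutine touches count, no locks are needed; the mailbox serializes all access to the state.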

  1. ^ Lamport, Leslie (July 1978). "Time, Clocks, and the Ordering of Events in a Distributed System" (PDF). Communications of the ACM. 21 (7): 558–565. doi:10.1145/359545.359563. S2CID 215822405. Retrieved 4 February 2016.
  2. ^ "Go Concurrency Patterns". talks.golang.org. Retrieved 2021-04-08.
  3. ^ "Concurrency is not Parallelism". talks.golang.org. Retrieved 2021-04-08.
