Lecture
There are several different concepts related to the area of parallel computing. Each of these terms has a strict definition and a clear meaning.
Concurrency (*) is the most general term: it says that more than one task is making progress at the same time. For example, you can watch TV and post photos on Facebook at the same time. Even Windows 95 could (**) simultaneously play music and show pictures.
(*) Unfortunately, I do not know a sane Russian-language term for it. Wikipedia says that concurrent computing is "parallel computing", but then how do we translate parallel computing into Russian?
(**) Yes, I remember the anecdote about Bill Gates and Windows multitasking, but in theory Windows could do several things at once. Although not just any combination of them.
Concurrent execution is the most general term; it does not say how this concurrency is achieved: by suspending some computational units and switching them to another task, by genuinely simultaneous execution, by delegating work to other devices, or in some other way. It does not matter.
Concurrent execution means only that more than one task makes progress over a given period of time. Period.
Parallel execution (parallel computing) implies the presence of more than one computing device (for example, a processor) that will simultaneously perform several tasks.
Parallel execution is a strict subset of concurrent execution. This means that on a computer with one processor, parallel programming is impossible ;)
Multithreading is one way to implement concurrent execution by exposing the abstraction of a "worker thread".
Threads "abstract" low-level details from the user and allow you to perform more than one job "in parallel". The operating system, runtime, or library hides the details of whether multithreaded execution will be competitive (when there are more threads than physical processors), or parallel (when the number of threads is less than or equal to the number of processors and several tasks are physically executed simultaneously).
Asynchrony implies that an operation can be performed by someone else: a remote web site, a server, or another device outside the current computing device.
The main feature of such operations is that starting one takes far less time than the main work itself. This allows many asynchronous operations to be in flight at the same time, even on a device with a small number of computing units.
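A minimal sketch of asynchrony in C#, assuming two hypothetical URLs: starting each request is cheap, and the real work happens on the remote side, so both operations overlap even on a single core.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class AsyncDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Starting the requests is cheap; the actual work happens
        // on the remote servers, not on our CPU.
        Task<string> first  = client.GetStringAsync("https://example.com/a");
        Task<string> second = client.GetStringAsync("https://example.com/b");

        // Both operations are in flight at the same time,
        // even on a single-core machine.
        string[] pages = await Task.WhenAll(first, second);
        Console.WriteLine(pages[0].Length + pages[1].Length);
    }
}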
From the developer's point of view, another important distinction is between CPU-bound and IO-bound operations. CPU-bound operations load the computing power of the current device, while IO-bound operations let the work be performed outside the current hardware.
The difference matters because the number of operations that can reasonably run simultaneously depends on which category they belong to. It is perfectly normal to launch hundreds of IO-bound operations in parallel and expect there to be enough resources to process all the results. Launching too many CPU-bound operations in parallel (more than the number of computing devices) is pointless.
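To illustrate the difference, a hedged sketch: hundreds of IO-bound downloads can be in flight at once, while the CPU-bound processing is capped at the number of cores (the URLs and the Crunch method are hypothetical placeholders).

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class BoundDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // IO-bound: hundreds of concurrent requests are fine,
        // the waiting happens outside our CPU.
        var urls = Enumerable.Range(0, 200).Select(i => $"https://example.com/item/{i}");
        string[] bodies = await Task.WhenAll(urls.Select(u => client.GetStringAsync(u)));

        // CPU-bound: no point in using more workers than cores.
        Parallel.For(0, bodies.Length,
            new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount },
            i => Crunch(bodies[i]));
    }

    // Placeholder for some CPU-intensive work.
    static void Crunch(string s) { _ = s.GetHashCode(); }
}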
Returning to the original question: it makes no sense to run the Calc method on 1000 threads if it is CPU-intensive (i.e. it loads the CPU), since this will reduce the overall efficiency of the computation. The OS will have to switch the few available cores between hundreds of threads, and that process is not cheap.
The simplest and most effective way to solve a CPU-intensive task is the fork-join idiom: the task (for example, the input data) is split into a certain number of subtasks that can be executed in parallel. Each subtask should be independent and not touch shared variables / memory. Then the intermediate results are collected and combined, as in the sketch below.
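A minimal fork-join sketch in C# on hypothetical input data: split the array into independent chunks, process each chunk on its own task, then combine the partial results.

using System;
using System.Linq;
using System.Threading.Tasks;

class ForkJoinDemo
{
    // Sums a large array by splitting it into independent chunks (fork),
    // computing each chunk on its own task, and combining the partial
    // sums at the end (join).
    static long ParallelSum(int[] data)
    {
        int workers = Environment.ProcessorCount;
        int chunk = (data.Length + workers - 1) / workers;

        var tasks = Enumerable.Range(0, workers)
            .Select(w => Task.Run(() =>
            {
                long partial = 0;
                int from = w * chunk;
                int to = Math.Min(from + chunk, data.Length);
                for (int i = from; i < to; i++)
                    partial += data[i];   // no shared state between workers
                return partial;
            }))
            .ToArray();

        Task.WaitAll(tasks);
        return tasks.Sum(t => t.Result);  // join: combine intermediate results
    }
}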
PLINQ is built on exactly this principle; you can read about it here: Joseph Albahari, Parallel Programming.
It looks very interesting:
IEnumerable<string> yourData = GetYourData();  // element type assumed; the generic parameter was lost in the original
var result = yourData
    .AsParallel()                  // start processing in parallel
    .Select(d => ComputeMD5(d))    // compute in parallel
    .Where(md5 => IsValid(md5))
    .ToArray();                    // return to the synchronous model
In this case, the number of threads is controlled by library code deep inside the CLR / TPL, and the ComputeMD5 method will be called in parallel N times at a time on a machine with N processors (cores).
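If you want to cap the number of workers yourself rather than rely on the defaults, PLINQ provides WithDegreeOfParallelism; a sketch reusing the same hypothetical ComputeMD5 / IsValid methods from above:

var result = yourData
    .AsParallel()
    .WithDegreeOfParallelism(Environment.ProcessorCount)  // cap the worker count explicitly
    .Select(d => ComputeMD5(d))
    .Where(md5 => IsValid(md5))
    .ToArray();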