In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
- The problem is run using multiple CPUs
- It is broken into discrete parts that can be solved concurrently
- Each part is further broken down to a series of instructions
- Instructions from each part execute simultaneously on different CPUs
- The compute resources might be:
- A single computer with multiple processors;
- An arbitrary number of computers connected by a network;
- A combination of both.
- The computational problem should be able to:
- Be broken apart into discrete pieces of work that can be solved simultaneously;
- Execute multiple program instructions at any moment in time;
- Be solved in less time with multiple compute resources than with a single compute resource.
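The decomposition described above can be sketched in a few lines. This is a minimal illustrative example (not from the source tutorial) that breaks one problem, summing a large list, into discrete pieces of work and hands them to a pool of worker processes; the function names `partial_sum` and `parallel_sum` are my own.

```python
# Illustrative sketch: break one problem (summing a large list) into
# discrete parts that a pool of worker processes solves concurrently.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker executes its own series of instructions on its own part.
    return sum(chunk)

def parallel_sum(data, n_parts=4):
    # Break the problem apart into discrete pieces of work.
    size = len(data) // n_parts
    chunks = [data[i * size:(i + 1) * size] for i in range(n_parts - 1)]
    chunks.append(data[(n_parts - 1) * size:])  # last chunk takes the remainder
    with Pool(n_parts) as pool:
        # The pieces are solved simultaneously, then the results are combined.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))  # same answer as sum(range(1000))
```

Whether this actually runs in less time than the serial version depends on the problem size; for a list this small the process start-up cost dominates, which is the usual caveat with the third criterion above.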
Parallel computing is an evolution of serial computing that attempts to emulate what has always been the state of affairs in the natural world: many complex, interrelated events happening at the same time, yet within a sequence.
The main reasons for using parallel computing:
- Save time and/or money.
- Solve larger problems.
- Provide concurrency.
- Use of non-local resources.
- Limits to serial computing.
Types of parallel computers based on Flynn's Classical Taxonomy:
- Single Instruction Multiple Data (SIMD)
- Multiple Instruction Single Data (MISD)
- Multiple Instruction Multiple Data (MIMD)
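As a rough sketch of the MIMD model, the most common of the three: independent workers execute different instruction streams on different data at the same time. This toy example (mine, not from the tutorial) uses Python threads; note that CPython threads interleave rather than run truly in parallel, but the programming model is the same.

```python
# Illustrative MIMD sketch: two workers, each with its OWN instruction
# stream operating on its OWN data, running concurrently.
import threading

results = {}

def square(n):  # one instruction stream, one piece of data
    results["square"] = n * n

def cube(n):    # a different instruction stream, different data
    results["cube"] = n * n * n

threads = [threading.Thread(target=square, args=(3,)),
           threading.Thread(target=cube, args=(2,))]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait until both streams have finished

print(results["square"], results["cube"])  # 9 8
```

By contrast, in a SIMD machine every processing unit would execute the *same* instruction on its own data element, as in vector units and GPUs.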
Parallel computer memory architectures:
- Shared memory
- Distributed memory
- Hybrid distributed-shared memory
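The two basic models can be contrasted with a small sketch, again my own illustration using Python's multiprocessing module: in the shared-memory model all tasks read and write one address space (here a shared `Value` guarded by a lock), while in the distributed-memory model each task owns private data and communicates only by passing messages (here via a `Queue`).

```python
# Sketch contrasting shared-memory and distributed-memory models.
from multiprocessing import Process, Queue, Value

def shared_worker(counter):
    # Shared memory: every task sees and updates the same variable.
    with counter.get_lock():  # synchronization is the programmer's job
        counter.value += 1

def distributed_worker(inbox, outbox):
    # Distributed memory: data arrives and leaves only as messages.
    outbox.put(inbox.get() + 1)

if __name__ == "__main__":
    # Shared-memory style: four tasks increment one shared counter.
    counter = Value("i", 0)
    procs = [Process(target=shared_worker, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # 4

    # Distributed-memory style: the worker never touches our variables.
    inbox, outbox = Queue(), Queue()
    p = Process(target=distributed_worker, args=(inbox, outbox))
    p.start()
    inbox.put(41)
    print(outbox.get())  # 42
    p.join()
```

A hybrid machine combines both: shared memory within each node, message passing between nodes.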
Source: Introduction to Parallel Computing, Blaise Barney, Lawrence Livermore National Laboratory