
Wednesday, October 31, 2012

Parallel Computing

Parallel Computing is the simultaneous use of multiple compute resources to solve a computational problem (a code sketch below illustrates the idea):
  • To be run using multiple CPUs
  • A problem is broken into discrete parts that can be solved concurrently
  • Each part is further broken down to a series of instructions
  • Instructions from each part execute simultaneously on different CPUs
  • The compute resources might be:
    • A single computer with multiple processors;
    • An arbitrary number of computers connected by a network;
    • A combination of both.
  • The computational problem should be able to:
    • Be broken apart into discrete pieces of work that can be solved simultaneously;
    • Execute multiple program instructions at any moment in time;
    • Be solved in less time with multiple compute resources than with a single compute resource.
Parallel computing is an evolution of serial computing that attempts to emulate what has always been the state of affairs in the natural world: many complex, interrelated events happening at the same time, yet within a sequence.
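
As a concrete illustration of the definition above, here is a minimal sketch using Python's standard multiprocessing module: a problem (summing the squares of a large range) is broken into discrete chunks, and each chunk executes simultaneously on a different CPU core. The function and chunk names are illustrative assumptions, not from the cited source.

```python
from multiprocessing import Pool

def sum_of_squares(bounds):
    """Solve one discrete part of the problem: sum i*i over [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, parts = 10_000_000, 4
    step = n // parts
    # Break the problem into discrete pieces of work.
    chunks = [(i * step, (i + 1) * step) for i in range(parts)]
    # Each piece executes simultaneously on a different CPU.
    with Pool(processes=parts) as pool:
        partial_sums = pool.map(sum_of_squares, chunks)
    # Combine the partial results into the final answer.
    print(sum(partial_sums))
```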

The main reasons for using parallel computing:
  1. Save time and/or money.
  2. Solve larger problems.
  3. Provide concurrency.
  4. Use of non-local resources.
  5. Limits to serial computing.
Types of parallel computers based on Flynn's Classical Taxonomy (the fourth class, Single Instruction Single Data or SISD, describes an ordinary serial computer; SIMD and MIMD are contrasted in the sketch after this list):
  1. Single Instruction Multiple Data (SIMD)
  2. Multiple Instruction Single Data (MISD)
  3. Multiple Instruction Multiple Data (MIMD)
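
The SIMD and MIMD styles can be contrasted in a few lines. This is an illustrative sketch, not from the cited source: NumPy's vectorized arithmetic is SIMD-like (one instruction applied across many data elements, typically dispatched to the CPU's vector units), while independent processes running different functions on different data are MIMD-like.

```python
from multiprocessing import Pool
import numpy as np

def double(x):      # one instruction stream...
    return 2 * x

def negate(x):      # ...and a different instruction stream
    return -x

if __name__ == "__main__":
    # SIMD style: the same operation on every element of the array.
    data = np.arange(8)
    print(data * 2)             # [ 0  2  4  6  8 10 12 14]

    # MIMD style: different operations on different data, concurrently.
    with Pool(processes=2) as pool:
        r1 = pool.apply_async(double, (21,))
        r2 = pool.apply_async(negate, (7,))
        print(r1.get(), r2.get())   # 42 -7
```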
Parallel Computer Memory Architectures (the first two models are sketched below):
  1. Shared memory
  2. Distributed memory
  3. Hybrid distributed-shared memory
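
The shared and distributed models can likewise be sketched with the multiprocessing module; the worker functions and the Queue (standing in for a network link) are illustrative, not from the cited source. In the shared model, workers write directly into one common array; in the distributed model, each worker keeps private data and results travel only as explicit messages.

```python
from multiprocessing import Process, Array, Queue

def shared_worker(shared, i):
    shared[i] = i * i          # writes directly into shared memory

def distributed_worker(queue, i):
    queue.put((i, i * i))      # no shared state; sends a message instead

if __name__ == "__main__":
    # Shared memory model: all workers see one address space.
    shared = Array("i", 4)
    procs = [Process(target=shared_worker, args=(shared, i)) for i in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
    print(list(shared))        # [0, 1, 4, 9]

    # Distributed memory model: explicit message passing.
    queue = Queue()
    procs = [Process(target=distributed_worker, args=(queue, i)) for i in range(4)]
    for p in procs: p.start()
    results = dict(queue.get() for _ in range(4))
    for p in procs: p.join()
    print([results[i] for i in range(4)])   # [0, 1, 4, 9]
```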

Source: Introduction to Parallel Computing, by Blaise Barney, Lawrence Livermore National Laboratory