Parallel computing is a collection of processing elements that communicate and co-operate to solve large problems fast. It is the use of two or more processors (cores, computers) in combination to solve a single problem.
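As a rough illustration of "two or more processors in combination on a single problem", here is a minimal Python sketch using the standard `concurrent.futures` module (the function and variable names are our own, not from the text): two workers each sum half of a list, and the partial results are combined.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker processes only its own slice of the data.
    return sum(chunk)

data = list(range(1_000_000))
mid = len(data) // 2

# Two workers co-operate on one problem: each sums half the list.
with ThreadPoolExecutor(max_workers=2) as pool:
    left, right = pool.map(partial_sum, [data[:mid], data[mid:]])

total = left + right
```

(CPython threads share one interpreter, so this shows the structure of the decomposition rather than true multi-core speed-up; the same split works with processes or separate machines.)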
Examples of Parallel Systems:
An example of parallel computing would be two servers that share the workload of routing mail, managing connections to an accounting system or database, or solving a mathematical problem. Supercomputers are usually built on a parallel system architecture, as are terminals connected to a single server.
Advantages of Parallel Systems:
Parallel systems provide concurrency and make it possible to take advantage of non-local resources. They overcome the memory constraints of a single machine, saving both time and money. A global address space provides a user-friendly programming perspective on memory.
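The "global address space" advantage can be sketched in Python with threads (a minimal example of our own): because threads share one address space, each worker writes straight into the same object without any explicit message passing.

```python
import threading

# Threads share one address space: both see the same list object.
shared = []

def producer(items):
    for x in items:
        shared.append(x)  # writes land directly in shared memory

t1 = threading.Thread(target=producer, args=([1, 2, 3],))
t2 = threading.Thread(target=producer, args=([4, 5, 6],))
t1.start(); t2.start()
t1.join(); t2.join()
```

After both threads finish, `shared` holds all six values, contributed by two workers through the same memory.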
Disadvantages of Parallel Systems:
The primary disadvantage is the lack of scalability between memory and CPUs. The programmer is responsible for the synchronization constructs that ensure "correct" access to global memory. It also becomes increasingly difficult and expensive to design and produce shared-memory machines with ever-increasing numbers of processors.
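The programmer's responsibility for synchronization can be shown with a small Python sketch (names are our own): several threads increment one shared counter, and a lock is the construct that keeps the read-modify-write correct.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, the read-modify-write below could interleave
        # with another thread and lose updates.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock, the final count is exactly 4 × 100,000; it is the programmer, not the hardware, who must remember to add it.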
Types of Parallel Computing:
· Data-parallel: the same operations on different data; also called SIMD (Single Instruction, Multiple Data).
· SPMD: the same program on different data (Single Program, Multiple Data).
· MIMD: different programs on different data (Multiple Instruction, Multiple Data).
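The data-parallel style above can be sketched in a few lines of Python (an illustrative example of our own, not from the text): every worker runs the same function on a different element of the data.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # The *same* operation is applied to each element (data parallelism).
    return x * x

data = [1, 2, 3, 4, 5, 6, 7, 8]

# Each worker runs the same program (square) on different data,
# mirroring the data-parallel / SPMD style described above.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, data))
```

A MIMD system would instead let each worker run a different function entirely.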
_________________________________________________________________________________
Arslan ud Din Shafiq
COMSATS Institute of Information Technology
CS Department