Parallel Computing
by
Vikram Singh Slathia
Dept. of Computer Science
Central University of Rajasthan
Parallel processing is a term used to denote a
large class of techniques for performing data
processing tasks simultaneously, in order to:
  •   Save time and/or money
  •   Solve larger problems
Parallel computing is the simultaneous
use of multiple compute resources to solve a
computational problem.
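To make the definition concrete, here is a minimal sketch in C with OpenMP (one of the technologies named in this deck's references): the parallel-for directive splits the loop across however many threads the machine provides, and the reduction clause combines the per-thread partial sums. Compile with, e.g., gcc -fopenmp.

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        enum { N = 1000000 };
        static double a[N];
        for (int i = 0; i < N; i++) a[i] = 1.0;

        double sum = 0.0;
        /* Split the iterations across threads; reduction(+:sum)
           merges each thread's partial sum into one total. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.0f using up to %d threads\n",
               sum, omp_get_max_threads());
        return 0;
    }

All the code sketches that follow assume this same C/OpenMP (and later MPI) setting.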
The Universe is Parallel
•   Galaxy formation
•   Planetary movement
•   Weather and ocean patterns
•   Tectonic plate drift
•   Rush hour traffic
•   Automobile assembly line
•   Building a jet
•   Ordering a hamburger at the drive-through
Areas of Parallel Computing

• Physics – applied, nuclear, particle, condensed matter, high
  pressure, fusion, photonics
• Bioscience, Biotechnology, Genetics
• Chemistry, Molecular Sciences
• Geology, Seismology
• Mechanical Engineering - from prosthetics to spacecraft
• Electrical Engineering, Circuit Design, Microelectronics
• Computer Science, Mathematics
Why Use Parallel Computing?

• Save time and/or money: In theory, putting more resources on a task
  shortens its time to completion, with potential cost savings (see the
  worked speedup example after this list). Parallel computers can also be
  built from cheap, commodity components.
• Solve larger problems: Many problems are so large and/or complex
  that solving them on a single computer is impractical or impossible,
  especially given limited memory.
• Better response times: Because the work is shared among a group of
  processors, tasks are completed in less time.
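The "in theory" hedge deserves one more step of reasoning. Amdahl's law (not stated on the original slide, but standard in the referenced texts) bounds the achievable speedup by the fraction of work that must remain serial:

    speedup(p) = 1 / (f + (1 - f) / p)

where f is the serial fraction of the program and p is the processor count. A worked example: with f = 0.05 and p = 16 processors, speedup = 1 / (0.05 + 0.95/16) ≈ 9.1, and no number of processors can ever push it past 1/f = 20.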
Ways to Classify Parallel Computers

• One of the more widely used classifications, in use since 1966, is
  Flynn's Taxonomy.
  The four possible classifications according to Flynn, two of which are
  sketched in code after this list, are:
• Single Instruction, Single Data (SISD)
• Single Instruction, Multiple Data (SIMD)
• Multiple Instruction, Single Data (MISD)
• Multiple Instruction, Multiple Data (MIMD)
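A minimal sketch, assuming C with OpenMP, of how two of these classes look in practice: a SIMD-style loop applies one instruction stream to many data elements, while MIMD-style parallel sections run two different instruction streams at once. Compile with gcc -fopenmp.

    #include <stdio.h>

    #define N 8

    int main(void) {
        double x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = i; y[i] = 2.0 * i; }

        /* SIMD style: one instruction (multiply-add) applied to
           multiple data elements; 'omp simd' asks the compiler
           to vectorize the loop. */
        #pragma omp simd
        for (int i = 0; i < N; i++)
            y[i] = 3.0 * x[i] + y[i];

        /* MIMD style: two independent instruction streams
           executing concurrently on the same data. */
        #pragma omp parallel sections
        {
            #pragma omp section
            {
                double s = 0.0;
                for (int i = 0; i < N; i++) s += y[i];
                printf("sum of y = %.1f\n", s);
            }
            #pragma omp section
            {
                double m = y[0];
                for (int i = 1; i < N; i++) if (y[i] > m) m = y[i];
                printf("max of y = %.1f\n", m);
            }
        }
        return 0;
    }

SISD is an ordinary serial program, and MISD is rare in practice (systolic arrays are the usual example).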
Some Basic Requirements for Achieving Parallel Execution

• An operating system capable of managing the
  multiple processors.
• Computer systems/servers with multiple built-in
  processors and efficient message passing among
  them (a sketch follows this list).
• Clustered nodes running cluster-aware application
  software, such as Oracle RAC.
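The message-passing requirement is exactly what MPI, covered by one of the deck's references, provides across clustered nodes. A minimal sketch: rank 0 sends an integer to rank 1, which prints it. Build with mpicc and run with mpirun -np 2 ./a.out.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Rank 0 sends one int, tagged 0, to rank 1. */
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Rank 1 blocks until the matching message arrives. */
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", payload);
        }

        MPI_Finalize();
        return 0;
    }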
Conclusion

• Parallel computing shortens time to solution by
  applying many processors to a single problem.
• Parallel computing is the future of computing.
References
Books
•   The New Turing Omnibus, A. K. Dewdney, Henry Holt and Company, 1993
•   Parallel Programming in C with MPI and OpenMP, Michael J. Quinn,
    McGraw-Hill Higher Education, 2003
•   Introduction to Parallel Computing, 2nd Edition, Ananth Grama, Pearson
Links
•   Parallel Processing:
    http://www.dba-oracle.com/real_application_clusters_rac_grid/parallel.html
•   Internet Parallel Computing Archive:
    wotug.ukc.ac.uk/parallel
•   Introduction to Parallel Computing:
    www.llnl.gov/computing/tutorials/parallel_comp/#Whatis
Thank you
