An Introduction to Parallel Programming

1st Edition - January 7, 2011
  • Author: Peter Pacheco
  • eBook ISBN: 9780080921440


Description

An Introduction to Parallel Programming is the first undergraduate text to directly address compiling and running parallel programs on modern multi-core and cluster architectures. It explains how to design, debug, and evaluate the performance of distributed- and shared-memory programs. The author, Peter Pacheco, uses a tutorial approach to show students how to develop effective parallel programs with MPI, Pthreads, and OpenMP, starting with small programming examples and building progressively to more challenging ones. The text is written for students in undergraduate parallel programming or parallel computing courses designed for the computer science major or as a service course to other departments, as well as for professionals with no background in parallel computing.

Key Features

  • Takes a tutorial approach, starting with small programming examples and building progressively to more challenging examples
  • Focuses on designing, debugging and evaluating the performance of distributed and shared-memory programs
  • Explains how to develop parallel programs using MPI, Pthreads, and OpenMP programming models

Readership

Students in undergraduate parallel programming or parallel computing courses designed for the computer science major or as a service course to other departments; professionals with no background in parallel computing

Table of Contents

  • 1 Why Parallel Computing?

    1.1 Why We Need Ever-Increasing Performance

    1.2 Why We’re Building Parallel Systems

    1.3 Why We Need to Write Parallel Programs

    1.4 How Do We Write Parallel Programs?

    1.5 What We’ll Be Doing

    1.6 Concurrent, Parallel, Distributed

    1.7 The Rest of the Book

    1.8 A Word of Warning

    1.9 Typographical Conventions

    1.10 Summary

    1.11 Exercises

  • 2 Parallel Hardware and Parallel Software

    2.1 Some Background

    2.2 Modifications to the von Neumann Model

    2.3 Parallel Hardware

    2.4 Parallel Software

    2.5 Input and Output

    2.6 Performance

    2.7 Parallel Program Design

    2.8 Writing and Running Parallel Programs

    2.9 Assumptions

    2.10 Summary

    2.11 Exercises

  • 3 Distributed-Memory Programming with MPI

    3.1 Getting Started

    3.2 The Trapezoidal Rule in MPI

    3.3 Dealing with I/O

    3.4 Collective Communication

    3.5 MPI Derived Datatypes

    3.6 Performance Evaluation of MPI Programs

    3.7 A Parallel Sorting Algorithm

    3.8 Summary

    3.9 Exercises

    3.10 Programming Assignments

  • 4 Shared-Memory Programming with Pthreads

    4.1 Processes, Threads and Pthreads

    4.2 Hello, World

    4.3 Matrix-Vector Multiplication

    4.4 Critical Sections

    4.5 Busy-Waiting

    4.6 Mutexes

    4.7 Producer-Consumer Synchronization and Semaphores

    4.8 Barriers and Condition Variables

    4.9 Read-Write Locks

    4.10 Caches, Cache-Coherence, and False Sharing

    4.11 Thread-Safety

    4.12 Summary

    4.13 Exercises

    4.14 Programming Assignments

  • 5 Shared-Memory Programming with OpenMP

    5.1 Getting Started

    5.2 The Trapezoidal Rule

    5.3 Scope of Variables

    5.4 The Reduction Clause

    5.5 The Parallel For Directive

    5.6 More About Loops in OpenMP: Sorting

    5.7 Scheduling Loops

    5.8 Producers and Consumers

    5.9 Caches, Cache-Coherence, and False Sharing

    5.10 Thread-Safety

    5.11 Summary

    5.12 Exercises

    5.13 Programming Assignments

  • 6 Parallel Program Development

    6.1 Two N-Body Solvers

    6.2 Tree Search

    6.3 A Word of Caution

    6.4 Which API?

    6.5 Summary

    6.6 Exercises

    6.7 Programming Assignments

  • 7 Where to Go from Here

Product details

  • No. of pages: 392
  • Language: English
  • Copyright: © Morgan Kaufmann 2011
  • Published: January 7, 2011
  • Imprint: Morgan Kaufmann
  • eBook ISBN: 9780080921440

About the Author

Peter Pacheco

Peter Pacheco received a PhD in mathematics from Florida State University. After completing graduate school, he became one of the first professors in UCLA's "Program in Computing," which teaches basic computer science to students in the College of Letters and Science. Since leaving UCLA, he has been on the faculty of the University of San Francisco, where he has served as chair of the computer science department and is currently chair of the mathematics department.

His research is in parallel scientific computing. He has worked on the development of parallel software for circuit simulation, speech recognition, and the simulation of large networks of biologically accurate neurons. Peter has been teaching parallel computing at both the undergraduate and graduate levels for nearly twenty years. He is the author of Parallel Programming with MPI, published by Morgan Kaufmann Publishers.

Affiliations and Expertise

University of San Francisco, USA