Description

Author Peter Pacheco uses a tutorial approach to show students how to develop effective parallel programs with MPI, Pthreads, and OpenMP. The first undergraduate text to directly address compiling and running parallel programs on new multi-core and cluster architectures, An Introduction to Parallel Programming explains how to design, debug, and evaluate the performance of distributed and shared-memory programs. User-friendly exercises teach students how to compile, run, and modify example programs.
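
For a flavor of that approach, here is a minimal MPI "hello, world" of the sort the early chapters build on. The sketch below is not taken from the book; the file name and the compile and run commands are illustrative and assume an MPI installation that provides the standard mpicc and mpiexec wrappers.

    /* mpi_hello.c -- illustrative sketch, not from the book: each MPI
     * process prints a greeting identifying its rank.
     * Compile:  mpicc -o mpi_hello mpi_hello.c
     * Run:      mpiexec -n 4 ./mpi_hello
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start up MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes in all? */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down MPI */
        return 0;
    }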

Key Features

  • Takes a tutorial approach, starting with small programming examples and building progressively to more challenging ones
  • Focuses on designing, debugging, and evaluating the performance of distributed and shared-memory programs
  • Explains how to develop parallel programs using the MPI, Pthreads, and OpenMP programming models (a short OpenMP sketch follows this list)
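
As a shared-memory counterpart to the MPI sketch above, here is a minimal example (again, not taken from the book) of an OpenMP parallel for loop with a reduction clause, the kind of construct Chapter 5 develops. The file and variable names are illustrative, and it assumes a compiler with OpenMP support, such as gcc with -fopenmp.

    /* omp_sum.c -- illustrative sketch, not from the book: sums a series
     * across threads using OpenMP's parallel for and reduction clause.
     * Compile:  gcc -fopenmp -o omp_sum omp_sum.c
     * Run:      ./omp_sum
     */
    #include <stdio.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        /* Each thread accumulates a private partial sum; OpenMP combines
         * the partial sums into sum when the loop finishes. */
        #pragma omp parallel for reduction(+: sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / (i + 1);

        printf("Sum of the first %d harmonic terms: %f\n", n, sum);
        return 0;
    }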

Readership

Students in undergraduate parallel programming or parallel computing courses designed for the computer science major or as a service course to other departments; professionals with no background in parallel computing.

Table of Contents

1 Why Parallel Computing?
1.1 Why We Need Ever-Increasing Performance
1.2 Why We’re Building Parallel Systems
1.3 Why We Need to Write Parallel Programs
1.4 How Do We Write Parallel Programs?
1.5 What We’ll Be Doing
1.6 Concurrent, Parallel, Distributed
1.7 The Rest of the Book
1.8 A Word of Warning
1.9 Typographical Conventions
1.10 Summary
1.11 Exercises

2 Parallel Hardware and Parallel Software
2.1 Some Background
2.2 Modifications to the von Neumann Model
2.3 Parallel Hardware
2.4 Parallel Software
2.5 Input and Output
2.6 Performance
2.7 Parallel Program Design
2.8 Writing and Running Parallel Programs
2.9 Assumptions
2.10 Summary
2.11 Exercises

3 Distributed Memory Programming with MPI
3.1 Getting Started
3.2 The Trapezoidal Rule in MPI
3.3 Dealing with I/O
3.4 Collective Communication
3.5 MPI Derived Datatypes
3.6 Performance Evaluation of MPI Programs
3.7 A Parallel Sorting Algorithm
3.8 Summary
3.9 Exercises
3.10 Programming Assignments

4 Shared Memory Programming with Pthreads
4.1 Processes, Threads, and Pthreads
4.2 Hello, World
4.3 Matrix-Vector Multiplication
4.4 Critical Sections
4.5 Busy-Waiting
4.6 Mutexes
4.7 Producer-Consumer Synchronization and Semaphores
4.8 Barriers and Condition Variables
4.9 Read-Write Locks
4.10 Caches, Cache-Coherence, and False Sharing
4.11 Thread-Safety
4.12 Summary
4.13 Exercises
4.14 Programming Assignments

5 Shared Memory Programming with OpenMP
5.1 Getting Started
5.2 The Trapezoidal Rule
5.3 Scope of Variables
5.4 The Reduction Clause
5.5 The Parallel For Directive
5.6 More About Loops in OpenMP: Sorting
5.7 Scheduling Loops
5.8 Producers and Consumers
5.9 Caches, Cache Coherence, and False Sharing
5.10 Thread-Safety
5.11 Summary
5.12 Exercises
5.13 Programming Assignments

6 Parallel Program Development
6.1 Two n-Body Solvers
6.2 Tree Search
6.3 A Word of Caution
6.4 Which API?
6.5 Summary
6.6 Exercises
6.7 Programming Assignments

7 Where to Go from Here

Details

No. of pages: 392
Language: English
Copyright: © 2011
Published: 2011
Imprint: Morgan Kaufmann
Electronic ISBN: 9780080921440
Print ISBN: 9780123742605
Print ISBN: 9780128103821

About the author

Peter Pacheco

Peter Pacheco received a PhD in mathematics from Florida State University. After completing graduate school, he became one of the first professors in UCLA’s “Program in Computing,” which teaches basic computer science to students in the College of Letters and Science. Since leaving UCLA, he has been on the faculty of the University of San Francisco. At USF, Peter has served as chair of the computer science department and is currently chair of the mathematics department. His research is in parallel scientific computing. He has worked on the development of parallel software for circuit simulation, speech recognition, and the simulation of large networks of biologically accurate neurons. Peter has been teaching parallel computing at both the undergraduate and graduate levels for nearly twenty years. He is the author of Parallel Programming with MPI, published by Morgan Kaufmann Publishers.

Awards

Intel Recommended Reading List for Developers, 1st Half 2013 – Books for Software Developers, Intel
Intel Recommended Reading List for Developers, 2nd Half 2013 – Books for Software Developers, Intel
Intel Recommended Reading List for Developers, 1st Half 2014 – Books for Software Developers, Intel

Reviews

"Pacheco succeeds in introducing the reader to the key issues and considerations in parallel programming. The simplicity of the examples allows the reader to focus on parallel programming aspects rather than application logic. Including both MPI and Pthreads/OpenMP is a good way to illustrate the differences between message passing and shared-memory programming models. The discussions about analyzing the scalability and efficiency of the resulting parallel programs present a key aspect of developing real parallel programs. Finally, working through the same examples using all three facilities helps make this even more concrete."--W. Hu, ComputingReviews.com

"[T]his is a well-written book, appropriately targeted at junior undergraduates. Being easily digestible, it makes the difficult task of parallel programming come across a lot less daunting than I have seen in other texts. Admittedly, it is light on theory; however, the most memorable lessons in parallel programming are those learned from mistakes made. With over 100 programming exercises, learning opportunities abound."--Bernard Kuc, ComputingReviews.com

"With the coming of multicore processors and the cloud, parallel computing is most certainly not a niche area off in a corner of the computing world. Parallelism has become central to the efficient use of resources, and this new textbook by Peter Pacheco will go a long way toward introducing students early in their academic careers to both the art and practice of parallel computing."--Duncan Buell, Department of Computer Science and Engineering, University of South Carolina

"An Introduction to Parallel Programming illustrates fundamental programming principles in the increasingly important area of shared memory programming using Pthreads and OpenMP and distributed memory programming using MPI."