
Parallel Programming with MPI

1st Edition

Authors: Peter Pacheco
Paperback ISBN: 9781558603394
eBook ISBN: 9780080513546
Imprint: Morgan Kaufmann
Published Date: 1st October 1996
Page Count: 500

Description

A hands-on introduction to parallel programming based on the Message-Passing Interface (MPI) standard, the de facto industry standard adopted by major vendors of commercial parallel systems. This textbook/tutorial, based on the C language, contains many fully developed examples and exercises. The complete source code for the examples is available in both C and Fortran 77. Students and professionals will find that the portability of MPI, combined with a thorough grounding in parallel programming principles, will allow them to program any parallel system, from a network of workstations to a parallel supercomputer.

Key Features

  • Proceeds from basic blocking sends and receives to the most esoteric aspects of MPI.
  • Includes extensive coverage of performance and debugging.
  • Discusses a variety of approaches to the problem of basic I/O on parallel machines.
  • Provides exercises and programming assignments.

Table of Contents

Foreword
Preface

Chapter 1 Introduction
  1.1 The Need for More Computational Power
  1.2 The Need for Parallel Computing
  1.3 The Bad News
  1.4 MPI
  1.5 The Rest of the Book
  1.6 Typographic Conventions

Chapter 2 An Overview of Parallel Computing
  2.1 Hardware
    2.1.1 Flynn's Taxonomy
    2.1.2 The Classical von Neumann Machine
    2.1.3 Pipeline and Vector Architectures
    2.1.4 SIMD Systems
    2.1.5 General MIMD Systems
    2.1.6 Shared-Memory MIMD
    2.1.7 Distributed-Memory MIMD
    2.1.8 Communication and Routing
  2.2 Software Issues
    2.2.1 Shared-Memory Programming
    2.2.2 Message Passing
    2.2.3 Data-Parallel Languages
    2.2.4 RPC and Active Messages
    2.2.5 Data Mapping
  2.3 Summary
  2.4 References
  2.5 Exercises

Chapter 3 Greetings!
  3.1 The Program
  3.2 Execution
  3.3 MPI
    3.3.1 General MPI Programs
    3.3.2 Finding Out about the Rest of the World
    3.3.3 Message: Data + Envelope
    3.3.4 Sending Messages
  3.4 Summary
  3.5 References
  3.6 Exercises
  3.7 Programming Assignment

Chapter 4 An Application: Numerical Integration
  4.1 The Trapezoidal Rule
  4.2 Parallelizing the Trapezoidal Rule
  4.3 I/O on Parallel Systems
  4.4 Summary
  4.5 References
  4.6 Exercises
  4.7 Programming Assignments

Chapter 5 Collective Communication
  5.1 Tree-Structured Communication
  5.2 Broadcast
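
Chapter 3 develops the book's first complete MPI program, "Greetings!", built from blocking sends and receives. The following is a minimal C sketch of that kind of program, not the book's exact listing: every process except 0 sends a greeting string to process 0 with MPI_Send, and process 0 receives and prints each one with MPI_Recv.

    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    int main(int argc, char* argv[]) {
        int my_rank, p, source;
        char message[100];
        MPI_Status status;

        MPI_Init(&argc, &argv);                  /* start up MPI        */
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank); /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &p);       /* how many processes? */

        if (my_rank != 0) {
            /* every process except 0 sends one greeting to process 0 */
            sprintf(message, "Greetings from process %d!", my_rank);
            MPI_Send(message, strlen(message) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        } else {
            /* process 0 receives and prints one greeting per sender */
            for (source = 1; source < p; source++) {
                MPI_Recv(message, 100, MPI_CHAR, source, 0, MPI_COMM_WORLD, &status);
                printf("%s\n", message);
            }
        }

        MPI_Finalize();                          /* shut down MPI       */
        return 0;
    }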
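
Chapter 4 parallelizes the trapezoidal rule. A compact sketch of the idea follows; it is not the book's listing, and the integrand f(x) = x*x, the interval [0, 1], and n = 1024 trapezoids are placeholder choices. Each process applies the serial trapezoidal rule to its own subinterval, and the partial integrals are combined on process 0 with point-to-point messages, since collective operations are not introduced until Chapter 5.

    #include <stdio.h>
    #include <mpi.h>

    /* placeholder integrand: swap in any f(x) of interest */
    static double f(double x) { return x * x; }

    /* serial trapezoidal rule on [left, right] with n trapezoids of width h */
    static double trap(double left, double right, int n, double h) {
        double sum = (f(left) + f(right)) / 2.0;
        int i;
        for (i = 1; i < n; i++)
            sum += f(left + i * h);
        return sum * h;
    }

    int main(int argc, char* argv[]) {
        int my_rank, p, source, local_n;
        double a = 0.0, b = 1.0;   /* integrate f over [a, b]    */
        int n = 1024;              /* total number of trapezoids */
        double h, local_a, local_b, local_int, total;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &p);

        h = (b - a) / n;           /* trapezoid width is the same on every process */
        local_n = n / p;           /* assumes p divides n evenly                    */
        local_a = a + my_rank * local_n * h;
        local_b = local_a + local_n * h;
        local_int = trap(local_a, local_b, local_n, h);

        if (my_rank == 0) {
            /* process 0 adds its own piece, then collects everyone else's */
            total = local_int;
            for (source = 1; source < p; source++) {
                MPI_Recv(&local_int, 1, MPI_DOUBLE, source, 0, MPI_COMM_WORLD, &status);
                total += local_int;
            }
            printf("Estimate of the integral from %f to %f: %.15f\n", a, b, total);
        } else {
            MPI_Send(&local_int, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }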
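
Chapter 5 turns to collective communication, beginning with tree-structured communication and broadcast. As a small illustration (again a sketch, not the book's code), MPI_Bcast copies a buffer from one designated root process to every other process in a communicator, and implementations typically do so with a tree-structured pattern rather than p separate sends.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char* argv[]) {
        int my_rank;
        double data = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

        if (my_rank == 0)
            data = 3.14;   /* only the root has the value before the broadcast */

        /* after the call, every process's copy of data holds the root's value */
        MPI_Bcast(&data, 1, MPI_DOUBLE, 0 /* root */, MPI_COMM_WORLD);

        printf("Process %d has data = %f\n", my_rank, data);

        MPI_Finalize();
        return 0;
    }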

Details

No. of pages: 500
Language: English
Copyright: © Morgan Kaufmann 1996
Published: 1st October 1996
Imprint: Morgan Kaufmann
eBook ISBN: 9780080513546
Paperback ISBN: 9781558603394

About the Author

Peter Pacheco

Peter Pacheco received a PhD in mathematics from Florida State University. After completing graduate school, he became one of the first professors in UCLA's "Program in Computing," which teaches basic computer science to students in the College of Letters and Science. Since leaving UCLA, he has been on the faculty of the University of San Francisco. At USF, Peter has served as chair of the computer science department and is currently chair of the mathematics department. His research is in parallel scientific computing. He has worked on the development of parallel software for circuit simulation, speech recognition, and the simulation of large networks of biologically accurate neurons. Peter has been teaching parallel computing at both the undergraduate and graduate levels for nearly twenty years. He is the author of Parallel Programming with MPI, published by Morgan Kaufmann Publishers.

Affiliations and Expertise

University of San Francisco, USA

Awards

Intel Recommended Reading List for Developers, 1st Half 2013 – Books for Software Developers, Intel
Intel Recommended Reading List for Developers, 2nd Half 2013 – Books for Software Developers, Intel
Intel Recommended Reading List for Developers, 1st Half 2014 – Books for Software Developers, Intel

Reviews

"...the detailed discussion of many complex and confusing issues makes the book an important information source for programmers developing large applications using MPI." -—L.M. Liebrock, ACM Computing Reviews