Parallel Programming in OpenMP

By

  • Rohit Chandra, NARUS, Inc.
  • Ramesh Menon, NARUS, Inc.
  • Leo Dagum, Silicon Graphics
  • David Kohr, NARUS, Inc.
  • Dror Maydan, Tensilica, Inc.
  • Jeff McDonald, SolidFX

The rapid and widespread acceptance of shared-memory multiprocessor architectures has created a pressing demand for an efficient way to program these systems. At the same time, developers of technical and scientific applications in industry and in government laboratories find they need to parallelize huge volumes of code in a portable fashion. OpenMP, developed jointly by several parallel computing vendors to address these issues, is an industry-wide standard for programming shared-memory and distributed shared-memory multiprocessors. It consists of a set of compiler directives and library routines that extend FORTRAN, C, and C++ codes to express shared-memory parallelism.

Parallel Programming in OpenMP is the first book to teach both novice and expert parallel programmers how to program using this new standard. The authors, who helped design and implement OpenMP while at SGI, bring to the book the combined depth and breadth of compiler writers, application developers, and performance engineers.


Audience

Novice and expert programmers who want to understand this new standard. Developers of technical and scientific applications.


Book information

  • Published: October 2000
  • Imprint: MORGAN KAUFMANN
  • ISBN: 978-1-55860-671-5

Reviews

"This book will provide a valuable resource for the OpenMP community."
—Timothy G. Mattson, Intel Corporation


"This book has an important role to play in the HPC community, both for introducing practicing professionals to OpenMP and for educating students and professionals about parallel programming. I'm happy to see that the authors have put together such a complete OpenMP presentation."
—Mary E. Zosel, Lawrence Livermore National Laboratory



Table of Contents

  • Foreword
  • Preface
  • Chapter 1: Introduction
  • Chapter 2: Getting started with OpenMP
  • Chapter 3: Exploiting loop-level parallelism
  • Chapter 4: Beyond loop-level parallelism: Parallel Regions
  • Chapter 5: Synchronization
  • Chapter 6: Performance
  • Glossary
  • References
  • Index