Topics in Parallel and Distributed Computing

Introducing Concurrency in Undergraduate Courses

1st Edition - August 21, 2015

  • Editors: Sushil Prasad, Anshul Gupta, Arnold Rosenberg, Alan Sussman, Charles Weems, Jr.
  • Paperback ISBN: 9780128038994
  • eBook ISBN: 9780128039380


Topics in Parallel and Distributed Computing provides resources and guidance for those learning PDC as well as those teaching students new to the discipline. The pervasiveness of computing devices containing multicore CPUs and GPUs, including home and office PCs, laptops, and mobile devices, is making even common users dependent on parallel processing. It is no longer sufficient for even basic programmers to acquire only traditional sequential programming skills. These trends point to the need for imparting a broad-based skill set in PDC technology. However, the rapid changes in computing hardware platforms and devices, languages, supporting programming environments, and research advances pose a challenge both for newcomers and for seasoned computer scientists. This edited collection has been developed over the past several years in conjunction with the IEEE Technical Committee on Parallel Processing (TCPP), which held several workshops and discussions on learning parallel computing and integrating parallel concepts into courses throughout computer science curricula.

Key Features

  • Contributed and developed by the leading minds in parallel computing research and instruction
  • Provides resources and guidance for those learning PDC as well as those teaching students new to the discipline
  • Succinctly addresses a range of parallel and distributed computing topics
  • Pedagogically designed to ensure understanding by experienced engineers and newcomers
  • Developed over the past several years in conjunction with the IEEE Technical Committee on Parallel Processing (TCPP), which held several workshops and discussions on learning parallel computing and integrating parallel concepts into courses throughout computer science curricula


Readership

Professional engineers and computer scientists, and students in parallel computing.

Table of Contents

    • Editor and author biographical sketches
      • Editors
      • Authors
    • Symbol or phrase
    • Chapter 1: Editors’ introduction and road map
      • Abstract
      • 1.1 Why this book?
      • 1.2 Chapter introductions
      • 1.3 How to find a topic or material for a course
      • 1.4 Invitation to write for volume 2
    • Part 1: For Instructors
      • Chapter 2: Hands-on parallelism with no prerequisites and little time using Scratch
        • Abstract
        • 2.1 Contexts for application
        • 2.2 Introduction to Scratch
        • 2.3 Parallel computing and Scratch
        • 2.4 Conclusion
      • Chapter 3: Parallelism in Python for novices
        • Abstract
        • 3.1 Introduction
        • 3.2 Background
        • 3.3 Student prerequisites
        • 3.4 General approach: parallelism as a medium
        • 3.5 Course materials
        • 3.6 Processes
        • 3.7 Communication
        • 3.8 Speedup
        • 3.9 Further examples using the Pool/map paradigm
        • 3.10 Conclusion
      • Chapter 4: Modules for introducing threads
        • Abstract
        • 4.1 Introduction
        • 4.2 Prime counting
        • 4.3 Mandelbrot
      • Chapter 5: Introducing parallel and distributed computing concepts in digital logic
        • Abstract
        • 5.1 Number representation
        • 5.2 Logic gates
        • 5.3 Combinational logic synthesis and analysis
        • 5.4 Combinational building blocks
        • 5.5 Counters and registers
        • 5.6 Other digital logic topics
      • Chapter 6: Networks and MPI for cluster computing
        • Abstract
        • 6.1 Why message passing/MPI?
        • 6.2 The message passing concept
        • 6.3 High-performance networks
        • 6.4 Advanced concepts
    • Part 2: For Students
      • Chapter 7: Fork-join parallelism with a data-structures focus
        • Abstract
        • Acknowledgments
        • 7.1 Meta-introduction: an instructor’s view of this material
        • 7.2 Introduction
        • 7.3 Basic fork-join parallelism
        • 7.4 Analyzing fork-join algorithms
        • 7.5 Fancier fork-join algorithms: prefix, pack, sort
      • Chapter 8: Shared-memory concurrency control with a data-structures focus
        • Abstract
        • 8.1 Introduction
        • 8.2 The programming model
        • 8.3 Synchronization with locks
        • 8.4 Race conditions: bad interleavings and data races
        • 8.5 Concurrency programming guidelines
        • 8.6 Deadlock
        • 8.7 Additional synchronization primitives
        • Acknowledgments
      • Chapter 9: Parallel computing in a Python-based computer science course
        • Abstract
        • 9.1 Parallel programming
        • 9.2 Parallel reduction
        • 9.3 Parallel scanning
        • 9.4 Copy-scans
        • 9.5 Partitioning in parallel
        • 9.6 Parallel quicksort
        • 9.7 How to perform segmented scans and reductions
        • 9.8 Comparing sequential and parallel running times
      • Chapter 10: Parallel programming illustrated through Conway’s Game of Life
        • Abstract
        • 10.1 Introduction
        • 10.2 Parallel variants
        • 10.3 Advanced topics
        • 10.4 Summary
    • Appendix A: Chapters and topics
    • Index

Product details

  • No. of pages: 360
  • Language: English
  • Copyright: © Morgan Kaufmann 2015
  • Published: August 21, 2015
  • Imprint: Morgan Kaufmann
  • Paperback ISBN: 9780128038994
  • eBook ISBN: 9780128039380

About the Editors

Sushil Prasad

Sushil K. Prasad (BTech'85 IIT Kharagpur, MS'86 Washington State, Pullman; PhD'90 Central Florida, Orlando - all in Computer Science/Engineering) is a Professor of Computer Science at Georgia State University and Director of Distributed and Mobile Systems (DiMoS) Lab. He has carried out theoretical as well as experimental research in parallel and distributed computing, resulting in 140+ refereed publications, several patent applications, and about $3M in external research funds as principal investigator and over $6M overall (NSF/NIH/GRA/Industry).

Sushil was honored as an ACM Distinguished Scientist in fall 2013 for his research on parallel data structures and applications. He was the elected chair of the IEEE Technical Committee on Parallel Processing for two terms (2007-11), and in 2012 received its highest honor, the IEEE TCPP Outstanding Service Award. Currently, he is leading the NSF-supported IEEE-TCPP curriculum initiative on parallel and distributed computing, with a vision to ensure that all computer science and engineering graduates are well prepared in parallelism through their core courses in this era of multi- and many-core desktops and handhelds. His current research interests are in parallel data structures and algorithms, and computation over geospatiotemporal datasets on cloud, GPU, and multicore platforms.

Affiliations and Expertise

Georgia State University, USA

Anshul Gupta

Anshul Gupta is a Principal Research Staff Member in the Mathematical Sciences department at IBM T.J. Watson Research Center. His research interests include sparse matrix computations and their applications in optimization and computational sciences, parallel algorithms, and graph/combinatorial algorithms for scientific computing. He has coauthored several journal articles and conference papers on these topics, as well as a textbook titled "Introduction to Parallel Computing." He is the primary author of the Watson Sparse Matrix Package (WSMP), one of the most robust and scalable parallel direct solvers for large sparse systems of linear equations.

Affiliations and Expertise

Principal RSM, IBM T.J. Watson Research Center, USA

Arnold Rosenberg

Arnold L. Rosenberg is a Research Professor in the Computer Science Department at Northeastern University; he also holds the rank of Distinguished University Professor Emeritus in the Computer Science Department at the University of Massachusetts Amherst. Prior to joining UMass, Rosenberg was a Professor of Computer Science at Duke University from 1981 to 1986, and a Research Staff Member at the IBM Watson Research Center from 1965 to 1981. He has held visiting positions at Yale University and the University of Toronto. He was a Lady Davis Visiting Professor at the Technion (Israel Institute of Technology) in 1994, and a Fulbright Senior Research Scholar at the University of Paris-South in 2000. Rosenberg's research focuses on developing algorithmic models and techniques to exploit the new modalities of "collaborative computing" (wherein multiple computers cooperate to solve a computational problem) that result from emerging computing technologies. Rosenberg is the author or coauthor of more than 170 technical papers on these and other topics in theoretical computer science and discrete mathematics. He is the coauthor of the research book "Graph Separators, with Applications" and the author of the textbook "The Pillars of Computation Theory: State, Encoding, Nondeterminism"; additionally, he has served as coeditor of several books. Dr. Rosenberg is a Fellow of the ACM, a Fellow of the IEEE, and a Golden Core member of the IEEE Computer Society. Rosenberg received an A.B. in mathematics at Harvard College and an A.M. and Ph.D. in applied mathematics at Harvard University.

Affiliations and Expertise

Northeastern University, USA

Alan Sussman

Alan Sussman is a Professor in the Department of Computer Science and Institute for Advanced Computer Studies at the University of Maryland. Working with students and other researchers at Maryland and other institutions, he has published over 100 conference and journal papers, received several best paper awards on topics related to software tools for high-performance parallel and distributed computing, and contributed chapters to six books. His research interests include peer-to-peer distributed systems, software engineering for high-performance computing, and large-scale data-intensive computing. He is an associate editor for the Journal of Parallel and Distributed Computing, a subject area editor for the Parallel Computing journal, and an associate editor for IEEE Transactions on Services Computing. Software tools he has built have been widely distributed and used in many computational science applications, in areas such as earth science, space science, and medical informatics. He received his Ph.D. in computer science from Carnegie Mellon University.

Affiliations and Expertise

University of Maryland, USA

Charles Weems, Jr.

Charles Weems is co-director of the Architecture and Language Implementation lab at the University of Massachusetts. His current research interests include architectures for media and embedded applications, GPU computing, and high precision arithmetic. Previously he led development of two generations of a heterogeneous parallel processor for machine vision, called the Image Understanding Architecture, and co-directed initial work on the Scale compiler that was eventually used for the TRIPS architecture. He is the author of numerous articles, has served on many program committees, chaired the 1997 IEEE CAMP Workshop, the 1999 IEEE Frontiers Symposium, co-chaired IEEE IPDPS in 1999, 2000, and 2013, was general vice-chair for IPDPS from 2001 through 2005, and co-chairs the LSPP workshop. He has co-authored twenty-six introductory CS texts, and co-edited the book Associative Processing and Processors. He is a member of ACM, Senior Member of IEEE, a member of the Executive Committee of the IEEE TC on Parallel Processing, has been an editor for IEEE TPDS, Elsevier JPDC, and is an editor with Parallel Computing.

Affiliations and Expertise

University of Massachusetts, USA
