
Topics in Parallel and Distributed Computing
Introducing Concurrency in Undergraduate Courses
Key Features
- Contributed and developed by the leading minds in parallel computing research and instruction
- Provides resources and guidance both for those learning parallel and distributed computing (PDC) and for those teaching students new to the discipline
- Succinctly addresses a range of parallel and distributed computing topics
- Pedagogically designed to ensure understanding by experienced engineers and newcomers
- Developed over the past several years in conjunction with the IEEE Technical Committee on Parallel Processing (TCPP), which held several workshops and discussions on learning parallel computing and integrating parallel concepts into undergraduate courses
Readership
Professional engineers and computer scientists, and students of parallel computing.
Table of Contents
- Editor and author biographical sketches
- Editors
- Authors
- Chapter 1: Editors’ introduction and road map
- Abstract
- 1.1 Why this book?
- 1.2 Chapter introductions
- 1.3 How to find a topic or material for a course
- 1.4 Invitation to write for volume 2
- Part 1: For Instructors
- Chapter 2: Hands-on parallelism with no prerequisites and little time using Scratch
- Abstract
- 2.1 Contexts for application
- 2.2 Introduction to Scratch
- 2.3 Parallel computing and Scratch
- 2.4 Conclusion
- Chapter 3: Parallelism in Python for novices
- Abstract
- 3.1 Introduction
- 3.2 Background
- 3.3 Student prerequisites
- 3.4 General approach: parallelism as a medium
- 3.5 Course materials
- 3.6 Processes
- 3.7 Communication
- 3.8 Speedup
- 3.9 Further examples using the Pool/map paradigm
- 3.10 Conclusion
- Chapter 4: Modules for introducing threads
- Abstract
- 4.1 Introduction
- 4.2 Prime counting
- 4.3 Mandelbrot
- Chapter 5: Introducing parallel and distributed computing concepts in digital logic
- Abstract
- 5.1 Number representation
- 5.2 Logic gates
- 5.3 Combinational logic synthesis and analysis
- 5.4 Combinational building blocks
- 5.5 Counters and registers
- 5.6 Other digital logic topics
- Chapter 6: Networks and MPI for cluster computing
- Abstract
- 6.1 Why message passing/MPI?
- 6.2 The message passing concept
- 6.3 High-performance networks
- 6.4 Advanced concepts
- Part 2: For Students
- Chapter 7: Fork-join parallelism with a data-structures focus
- Abstract
- Acknowledgments
- 7.1 Meta-introduction: an instructor’s view of this material
- 7.2 Introduction
- 7.3 Basic fork-join parallelism
- 7.4 Analyzing fork-join algorithms
- 7.5 Fancier fork-join algorithms: prefix, pack, sort
- Chapter 8: Shared-memory concurrency control with a data-structures focus
- Abstract
- 8.1 Introduction
- 8.2 The programming model
- 8.3 Synchronization with locks
- 8.4 Race conditions: bad interleavings and data races
- 8.5 Concurrency programming guidelines
- 8.6 Deadlock
- 8.7 Additional synchronization primitives
- Acknowledgments
- Chapter 9: Parallel computing in a Python-based computer science course
- Abstract
- 9.1 Parallel programming
- 9.2 Parallel reduction
- 9.3 Parallel scanning
- 9.4 Copy-scans
- 9.5 Partitioning in parallel
- 9.6 Parallel quicksort
- 9.7 How to perform segmented scans and reductions
- 9.8 Comparing sequential and parallel running times
- Chapter 10: Parallel programming illustrated through Conway’s Game of Life
- Abstract
- 10.1 Introduction
- 10.2 Parallel variants
- 10.3 Advanced topics
- 10.4 Summary
- Appendix A: Chapters and topics
- Index
Product details
- No. of pages: 360
- Language: English
- Copyright: © Morgan Kaufmann 2015
- Published: August 21, 2015
- Imprint: Morgan Kaufmann
- Paperback ISBN: 9780128038994
- eBook ISBN: 9780128039380
About the Editors
Sushil Prasad
Sushil Prasad was named an ACM Distinguished Scientist in fall 2013 for his research on parallel data structures and applications. He served two terms (2007-2011) as the elected chair of the IEEE Technical Committee on Parallel Processing and received its highest honor, the IEEE TCPP Outstanding Service Award, in 2012. He currently leads the NSF-supported IEEE-TCPP curriculum initiative on parallel and distributed computing, whose vision is to ensure that all computer science and engineering graduates are well prepared in parallelism through their core courses in this era of multicore and many-core desktops and handhelds. His current research interests are in parallel data structures and algorithms, and computation over geospatiotemporal datasets on cloud, GPU, and multicore platforms. His homepage is www.cs.gsu.edu/prasad.
Anshul Gupta
Arnold Rosenberg
Alan Sussman
Charles Weems, Jr.