
Accelerating MATLAB with GPU Computing

1st Edition

A Primer with Examples

Authors: Jung Suh, Youngmin Kim
Paperback ISBN: 9780124080805
eBook ISBN: 9780124079168
Imprint: Morgan Kaufmann
Published Date: 18th November 2013
Page Count: 258


Beyond simulation and algorithm development, developers increasingly use MATLAB for product deployment in computationally heavy fields. This often demands that MATLAB code run faster by leveraging the massive parallelism of Graphics Processing Units (GPUs). While MATLAB's high-level functions make it a successful simulation tool for rapid prototyping, the low-level details and knowledge needed to utilize GPUs make many MATLAB users hesitant to take that step. Accelerating MATLAB with GPU Computing offers a primer that bridges this gap.

Starting with the basics, from setting up MATLAB for CUDA (on Windows, Linux, and Mac OS X) to profiling, it then guides users through advanced topics such as CUDA libraries. The authors share their experience developing algorithms using MATLAB, C++, and GPUs for huge datasets, modifying MATLAB code to better utilize the computational power of GPUs, and integrating that code into commercial software products. Throughout the book, they demonstrate many example codes that can be used as templates for C-MEX and CUDA code in readers' own projects. Example codes can be downloaded from the publisher's website.
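The flavor of these C-MEX and CUDA templates can be conveyed by a minimal sketch along the lines of the book's early example (Section 2.5, simple vector addition): a CUDA kernel exposed to MATLAB through a C-MEX gateway. This is an illustrative sketch, not the book's actual code; the file name `vecadd_mex.cu` and all identifiers here are hypothetical.

```cuda
/* vecadd_mex.cu -- hypothetical C-MEX gateway around a CUDA kernel.
 * Build from the MATLAB prompt (flags vary by system), e.g. with
 * mexcuda in recent releases, or with nvcc plus mex as the book describes.
 * Usage in MATLAB:  c = vecadd_mex(a, b);   % a, b: double vectors
 */
#include "mex.h"
#include <cuda_runtime.h>

__global__ void vecAdd(const double* a, const double* b, double* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* one thread per element */
    if (i < n)
        c[i] = a[i] + b[i];
}

void mexFunction(int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[])
{
    if (nrhs != 2)
        mexErrMsgTxt("Two input vectors required.");

    int n = (int)mxGetNumberOfElements(prhs[0]);
    const double* a = mxGetPr(prhs[0]);
    const double* b = mxGetPr(prhs[1]);

    /* Allocate the MATLAB-side output array */
    plhs[0] = mxCreateDoubleMatrix(mxGetM(prhs[0]), mxGetN(prhs[0]), mxREAL);
    double* c = mxGetPr(plhs[0]);

    /* Copy inputs to the GPU, launch the kernel, copy the result back */
    double *dA, *dB, *dC;
    size_t bytes = (size_t)n * sizeof(double);
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(dA, dB, dC, n);

    cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dA);
    cudaFree(dB);
    cudaFree(dC);
}
```

Note that the host-to-device and device-to-host copies often dominate runtime for small inputs, which is exactly the kind of bottleneck the book's profiling chapters teach readers to identify before converting code to CUDA.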

Key Features

  • Shows how to accelerate MATLAB code on the GPU for parallel processing, with minimal hardware knowledge
  • Explains the relevant background on hardware, architecture, and programming for ease of use
  • Provides simple worked examples of MATLAB and CUDA C code, as well as templates that can be reused in real-world projects


Graduate students and researchers in a variety of fields who need to process huge datasets without losing the many benefits of MATLAB.

Table of Contents


Target Readers and Contents

Directions of this Book

1. Accelerating MATLAB without GPU

1.1 Chapter Objectives

1.2 Vectorization

1.3 Preallocation

1.4 For-Loop

1.5 Consider a Sparse Matrix Form

1.6 Miscellaneous Tips

1.7 Examples

2. Configurations for MATLAB and CUDA

2.1 Chapter Objectives

2.2 MATLAB Configuration for c-mex Programming

2.3 “Hello, mex!” using C-MEX

2.4 CUDA Configuration for MATLAB

2.5 Example: Simple Vector Addition Using CUDA

2.6 Example with Image Convolution

2.7 Summary

3. Optimization Planning through Profiling

3.1 Chapter Objectives

3.2 MATLAB Code Profiling to Find Bottlenecks

3.3 c-mex Code Profiling for CUDA

3.4 Environment Setting for the c-mex Debugger

4. CUDA Coding with c-mex

4.1 Chapter Objectives

4.2 Memory Layout for c-mex

4.3 Logical Programming Model

4.4 Tidbits of GPU

4.5 Analyzing Our First Naïve Approach

5. MATLAB and Parallel Computing Toolbox

5.1 Chapter Objectives

5.2 GPU Processing for Built-in MATLAB Functions

5.3 GPU Processing for Non-Built-in MATLAB Functions

5.4 Parallel Task Processing

5.5 Parallel Data Processing

5.6 Direct use of CUDA Files without c-mex

6. Using CUDA-Accelerated Libraries

6.1 Chapter Objectives



6.4 Thrust

7. Example in Computer Graphics

7.1 Chapter Objectives

7.2 Marching Cubes

7.3 Implementation in MATLAB

7.4 Implementation in c-mex with CUDA

7.5 Implementation Using c-mex and GPU

7.6 Conclusion

8. CUDA Conversion Example: 3D Image Processing

8.1 Chapter Objectives

8.2 MATLAB Code for Atlas-Based Segmentation

8.3 Planning for CUDA Optimization Through Profiling

8.4 CUDA Conversion 1 - Regularization

8.5 CUDA Conversion 2 - Image Registration

8.6 CUDA Conversion Results

8.7 Conclusion

Appendix 1. Download and Install the CUDA Library

A1.1 CUDA Toolkit Download

A1.2 Installation

A1.3 Verification

Appendix 2. Installing NVIDIA Nsight into Visual Studio





About the Authors

Jung Suh

Jung W. Suh is a senior algorithm engineer and research scientist at KLA-Tencor. Dr. Suh received his Ph.D. from Virginia Tech in 2007 for his work on 3D medical image processing. He was involved in the development of MPEG-4 and Digital Mobile Broadcasting (DMB) systems at Samsung Electronics, and was a senior scientist at HeartFlow, Inc., prior to joining KLA-Tencor. His research interests are in the fields of biomedical image processing, pattern recognition, machine learning, and image/video compression. He has more than 30 journal and conference papers and six patents.

Affiliations and Expertise

Senior Algorithm Engineer & Research Scientist, KLA-Tencor

Youngmin Kim

Youngmin Kim is a staff software engineer at Life Technologies, where he works on real-time image acquisition and high-throughput image analysis. His previous work involved designing and developing software for automated microscopy and integrating imaging algorithms for real-time analysis. He received his BS and MS in electrical engineering from the University of Illinois at Urbana-Champaign. He then developed 3D medical software at Samsung and led a software team at a startup prior to joining Life Technologies.

Affiliations and Expertise

Staff Software Engineer, Life Technologies


"This truly is a practical primer. It is well written and delivers what it promises. Its main contribution is that it will assist “naive” programmers in advancing their code optimization capabilities for graphics processing units (GPUs) without any agonizing pain."--Computing Reviews,July 2 2014

"Suh and Kim show graduate students and researchers in engineering, science, and technology how to use a graphics processing unit (GPU) and the NVIDIA company's Compute Unified Device Architecture (CUDA) to process huge amounts of data without losing the many benefits of MATLAB. Readers are assumed to have at least some experience programming MATLAB, but not sufficient background in programming or computer architecture for parallelization.", February 2014
