Description

Beyond simulation and algorithm development, many developers increasingly use MATLAB for product deployment in computationally heavy fields, which often demands that MATLAB code run faster by exploiting the massive parallelism of Graphics Processing Units (GPUs). While MATLAB provides high-level functions well suited to rapid prototyping, the low-level details and hardware knowledge required to program GPUs make many MATLAB users hesitant to take that step. Accelerating MATLAB with GPUs offers a primer that bridges this gap.

Starting with the basics of setting up MATLAB for CUDA (on Windows, Linux, and Mac OS X) and profiling code, the book then guides readers through advanced topics such as CUDA-accelerated libraries. The authors share their experience developing algorithms with MATLAB, C++, and GPUs for huge datasets, modifying MATLAB code to better exploit the computational power of GPUs, and integrating the results into commercial software products. Throughout the book they present many example codes that can serve as templates for readers' own C-MEX and CUDA projects. Example code can be downloaded from the publisher's website: http://booksite.elsevier.com/9780124080805/
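
For readers curious what such a template looks like, here is a minimal, illustrative sketch of a C-MEX gateway that launches a CUDA kernel to add two vectors, in the spirit of the "Simple Vector Addition Using CUDA" example listed in Chapter 2. It is not taken from the book's downloadable code; the file name vecadd_mex.cu, the kernel name, and the mexcuda build step (available with the Parallel Computing Toolbox in recent MATLAB releases) are illustrative assumptions.

    /* vecadd_mex.cu -- illustrative sketch only, not code from the book.
     * One possible build route:  mexcuda vecadd_mex.cu
     * Call from MATLAB:          c = vecadd_mex(a, b);
     */
    #include "mex.h"
    #include <cuda_runtime.h>

    /* CUDA kernel: each thread adds one pair of elements. */
    __global__ void vecAddKernel(const double *a, const double *b,
                                 double *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    /* C-MEX gateway: the entry point MATLAB calls. */
    void mexFunction(int nlhs, mxArray *plhs[],
                     int nrhs, const mxArray *prhs[])
    {
        if (nrhs != 2)
            mexErrMsgIdAndTxt("vecadd:nrhs", "Two input vectors required.");

        const double *a = mxGetPr(prhs[0]);
        const double *b = mxGetPr(prhs[1]);
        const int n = (int)mxGetNumberOfElements(prhs[0]);

        /* Output vector, same shape as the first input. */
        plhs[0] = mxCreateDoubleMatrix(mxGetM(prhs[0]), mxGetN(prhs[0]), mxREAL);
        double *c = mxGetPr(plhs[0]);

        /* Copy inputs to the GPU, run the kernel, copy the result back. */
        double *dA, *dB, *dC;
        size_t bytes = n * sizeof(double);
        cudaMalloc(&dA, bytes);
        cudaMalloc(&dB, bytes);
        cudaMalloc(&dC, bytes);
        cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAddKernel<<<blocks, threads>>>(dA, dB, dC, n);

        cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
        cudaFree(dA); cudaFree(dB); cudaFree(dC);
    }

Once compiled, the function is called from MATLAB like any other, e.g. c = vecadd_mex(a, b). For brevity the sketch omits the CUDA error checking and input validation that production C-MEX code would include.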

Key Features

  • Shows how to accelerate MATLAB code on the GPU for parallel processing, with minimal hardware knowledge required
  • Explains the related background on hardware, architecture and programming for ease of use
  • Provides simple worked examples of MATLAB and CUDA C codes as well as templates that can be reused in real-world projects

Readership

Graduate students and researchers in a variety of fields who need to process huge datasets without losing the many benefits of MATLAB.

Table of Contents

Preface

Target Readers and Contents

Directions of this Book

1. Accelerating MATLAB without GPU

1.1 Chapter Objectives

1.2 Vectorization

1.3 Preallocation

1.4 For-Loop

1.5 Consider a Sparse Matrix Form

1.6 Miscellaneous Tips

1.7 Examples

2. Configurations for MATLAB and CUDA

2.1 Chapter Objectives

2.2 MATLAB Configuration for c-mex Programming

2.3 “Hello, mex!” using C-MEX

2.4 CUDA Configuration for MATLAB

2.5 Example: Simple Vector Addition Using CUDA

2.6 Example with Image Convolution

2.7 Summary

3. Optimization Planning through Profiling

3.1 Chapter Objectives

3.2 MATLAB Code Profiling to Find Bottlenecks

3.3 c-mex Code Profiling for CUDA

3.4 Environment Setting for the c-mex Debugger

4. CUDA Coding with c-mex

4.1 Chapter Objectives

4.2 Memory Layout for c-mex

4.3 Logical Programming Model

4.4 Tidbits of GPU

4.5 Analyzing Our First Naïve Approach

5. MATLAB and Parallel Computing Toolbox

5.1 Chapter Objectives

5.2 GPU Processing for Built-in MATLAB Functions

5.3 GPU Processing for Non-Built-in MATLAB Functions

5.4 Parallel Task Processing

5.5 Parallel Data Processing

5.6 Direct use of CUDA Files without c-mex

6. Using CUDA-Accelerated Libraries

6.1 Chapter Objectives

6.2 CUBLAS

6.3 CUFFT

6.4 Thrust

7. Example in Computer Graphics

7.1 Chapter Objectives

7.2 Marching Cubes

7.3 Implementation in MATLAB

7.4 Implementation in c-mex

Details

No. of pages: 258
Language: English
Copyright: © 2014
Imprint: Morgan Kaufmann
Electronic ISBN: 9780124079168
Print ISBN: 9780124080805

About the authors

Jung Suh

Jung W. Suh is a senior algorithm engineer and research scientist at KLA-Tencor. Dr. Suh received his Ph.D. from Virginia Tech in 2007 for his work in 3D medical image processing. He was involved in the development of MPEG-4 and Digital Multimedia Broadcasting (DMB) systems at Samsung Electronics, and was a senior scientist at HeartFlow, Inc., prior to joining KLA-Tencor. His research interests are in the fields of biomedical image processing, pattern recognition, machine learning, and image/video compression. He has published more than 30 journal and conference papers and holds 6 patents.

Youngmin Kim

Youngmin Kim is a staff software engineer at Life Technologies, where he works on real-time image acquisition and high-throughput image analysis. His previous work involved designing and developing software for automated microscopy and integrating imaging algorithms for real-time analysis. He received his BS and MS in electrical engineering from the University of Illinois at Urbana-Champaign. He subsequently developed 3D medical software at Samsung and led a software team at a startup prior to joining Life Technologies.

Reviews

"This truly is a practical primer. It is well written and delivers what it promises. Its main contribution is that it will assist “naive” programmers in advancing their code optimization capabilities for graphics processing units (GPUs) without any agonizing pain."--Computing Reviews,July 2 2014

"Suh and Kim show graduate students and researchers in engineering, science, and technology how to use a graphics processing unit (GPU) and the NVIDIA company's Compute Unified Device Architecture (CUDA) to process huge amounts of data without losing the many benefits of MATLAB. Readers are assumed to have at least some experience programming MATLAB, but not sufficient background in programming or computer architecture for parallelization."--ProtoView.com, February 2014