Beyond simulation and algorithm development, many developers increasingly use MATLAB for product deployment in computationally heavy fields. This often demands that MATLAB code run faster by leveraging the massive parallelism of Graphics Processing Units (GPUs). While MATLAB provides high-level functions that make it an excellent tool for rapid prototyping, the low-level details and background knowledge required to program GPUs make many MATLAB users hesitant to take that step. Accelerating MATLAB with GPUs offers a primer that bridges this gap.
Starting with the basics of setting up MATLAB for CUDA (on Windows, Linux, and Mac OS X) and profiling, it then guides users through advanced topics such as CUDA libraries. The authors share their experience developing algorithms with MATLAB, C++, and GPUs for huge datasets, modifying MATLAB code to better utilize the computational power of GPUs, and integrating the results into commercial software products. Throughout the book, they present many example codes that can be used as templates of C-MEX and CUDA code for readers' own projects. Download the example codes from the publisher's website: http://booksite.elsevier.com/9780124080805/
- Shows how to accelerate MATLAB codes through the GPU for parallel processing, with minimal hardware knowledge
- Explains the related background on hardware, architecture and programming for ease of use
- Provides simple worked examples of MATLAB and CUDA C codes as well as templates that can be reused in real-world projects
Graduate students and researchers in a variety of fields who need to process huge datasets without losing the many benefits of MATLAB.
Target Readers and Contents
Directions of this Book
1. Accelerating MATLAB without GPU
1.1 Chapter Objectives
1.5 Consider a Sparse Matrix Form
1.6 Miscellaneous Tips
2. Configurations for MATLAB and CUDA
2.1 Chapter Objectives
2.2 MATLAB Configuration for c-mex Programming
2.3 “Hello, mex!” using C-MEX
2.4 CUDA Configuration for MATLAB
2.5 Example: Simple Vector Addition Using CUDA
2.6 Example with Image Convolution
3. Optimization Planning through Profiling
3.1 Chapter Objectives
3.2 MATLAB Code Profiling to Find Bottlenecks
3.3 c-mex Code Profiling for CUDA
3.4 Environment Setting for the c-mex Debugger
4. CUDA Coding with c-mex
4.1 Chapter Objectives
4.2 Memory Layout for c-mex
4.3 Logical Programming Model
4.4 Tidbits of GPU
4.5 Analyzing Our First Naïve Approach
5. MATLAB and Parallel Computing Toolbox
5.1 Chapter Objectives
5.2 GPU Processing for Built-in MATLAB Functions
5.3 GPU Processing for Non-Built-in MATLAB Functions
5.4 Parallel Task Processing
5.5 Parallel Data Processing
5.6 Direct Use of CUDA Files without c-mex
6. Using CUDA-Accelerated Libraries
6.1 Chapter Objectives
7. Example in Computer Graphics
7.1 Chapter Objectives
7.2 Marching Cubes
7.3 Implementation in MATLAB
7.4 Implementation in c-mex with CUDA
7.5 Implementation Using c-mex and GPU
8. CUDA Conversion Example: 3D Image Processing
8.1 Chapter Objectives
8.2 MATLAB Code for Atlas-Based Segmentation
8.3 Planning for CUDA Optimization Through Profiling
8.4 CUDA Conversion 1 - Regularization
8.5 CUDA Conversion 2 - Image Registration
8.6 CUDA Conversion Results
Appendix 1. Download and Install the CUDA Library
A1.1 CUDA Toolkit Download
Appendix 2. Installing NVIDIA Nsight into Visual Studio
- © Morgan Kaufmann 2014
- 2nd December 2013
- Morgan Kaufmann
Jung W. Suh is a senior algorithm engineer and research scientist at KLA-Tencor. Dr. Suh received his Ph.D. from Virginia Tech in 2007 for his 3D medical image processing work. He was involved in the development of MPEG-4 and Digital Mobile Broadcasting (DMB) systems at Samsung Electronics, and was a senior scientist at HeartFlow, Inc., prior to joining KLA-Tencor. His research interests are in the fields of biomedical image processing, pattern recognition, machine learning, and image/video compression. He has more than 30 journal and conference papers and 6 patents.
Senior Algorithm Engineer & Research Scientist, KLA-Tencor
Youngmin Kim is a staff software engineer at Life Technologies, where he works on real-time image acquisition and high-throughput image analysis. His previous work involved designing and developing software for automated microscopy and integrating imaging algorithms for real-time analysis. He received his BS and MS in electrical engineering from the University of Illinois at Urbana-Champaign. He then developed 3D medical software at Samsung and led a software team at a startup company prior to joining Life Technologies.
Staff Software Engineer, Life Technologies
"This truly is a practical primer. It is well written and delivers what it promises. Its main contribution is that it will assist “naive” programmers in advancing their code optimization capabilities for graphics processing units (GPUs) without any agonizing pain."--Computing Reviews,July 2 2014
"Suh and Kim show graduate students and researchers in engineering, science, and technology how to use a graphics processing unit (GPU) and the NVIDIA company's Compute Unified Device Architecture (CUDA) to process huge amounts of data without losing the many benefits of MATLAB. Readers are assumed to have at least some experience programming MATLAB, but not sufficient background in programming or computer architecture for parallelization."--ProtoView.com, February 2014