7.1 GPGPU Programming Model

The programming model for a Graphics Processing Unit (GPU) is fundamentally different from that of a Central Processing Unit (CPU). This difference stems from their distinct architectural optimizations: GPUs are optimized for high throughput, while CPUs are optimized for low latency. A comprehensive understanding of the GPU programming model, including its execution model, thread hierarchy, and memory structure, is necessary for efficient General-Purpose GPU (GPGPU) application development.

CPU vs. GPU: Architectural Design for Latency and Throughput

The architectural designs of CPUs and GPUs reflect their specialization for different computational tasks, leading to significant performance differences in parallel workloads.

  • CPU: Latency Optimization: A CPU is optimized to minimize the execution time, or latency, of a single instruction stream (a thread).1 It uses a few powerful cores with high clock speeds. A significant portion of the chip’s area is allocated to control logic (e.g., branch prediction, out-of-order execution) and large cache memories to maximize single-thread performance.1
  • GPU: Throughput Optimization: A GPU is optimized to maximize the total number of operations completed per unit of time, or throughput.1 It contains a large number of simpler arithmetic logic units (ALUs) organized into cores.2 GPUs have less complex control logic and smaller caches compared to CPUs. To manage memory latency, a GPU’s scheduler switches execution to other ready threads when one group of threads is stalled, thereby maintaining high utilization of the computational units.1

The SIMT (Single Instruction, Multiple Threads) Execution Model

The primary execution model for GPUs is Single Instruction, Multiple Threads (SIMT).3 SIMT is a programming abstraction that combines the hardware efficiency of a Single Instruction, Multiple Data (SIMD) architecture with a more straightforward, per-thread programming style.

In the SIMT model, a programmer writes code for a single, scalar thread. The program is then executed by thousands of threads in parallel, each with its own program counter and state.4 This model abstracts away the need for manual data vectorization, which is typical in SIMD programming.4

The hardware groups threads into fixed-size sets for execution. In NVIDIA’s CUDA architecture, this set is a warp (typically 32 threads).5 In AMD’s ROCm platform, it is a wavefront (historically 64 threads, now often 32).3 All threads within a warp execute the same instruction in lock-step on different data. This hardware-level grouping improves efficiency by using a single instruction fetch/decode unit for all threads in the warp, allowing more silicon to be dedicated to ALUs.3
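
Because the warp width is an architectural property rather than a programmer choice, portable code can query it at runtime instead of hard-coding 32. A minimal host-side sketch (assuming the CUDA runtime and device 0):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);  // properties of device 0
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                     cudaGetErrorString(err));
        return 1;
    }
    // warpSize is 32 on current NVIDIA GPUs, but reading it at runtime avoids
    // baking that architectural assumption into the code.
    std::printf("Device: %s, warp size: %d threads\n", prop.name, prop.warpSize);
    return 0;
}
```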

A key performance consideration in the SIMT model is branch divergence. Because a warp issues a single instruction at a time for all of its threads, threads within a warp cannot make progress on different instructions simultaneously. If a conditional branch causes threads within a warp to follow different execution paths, the hardware serializes the paths: threads taking one path execute while the others are masked (deactivated), and the masking is then inverted so the remaining threads execute their path. This serialization leaves computational resources idle and can significantly degrade performance. Therefore, GPGPU algorithms should be designed to minimize branch divergence within a warp.3
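
To see divergence in code, the two hypothetical CUDA C++ kernels below contrast a branch on threadIdx.x, which splits every warp because even and odd lanes take different paths, with a branch on blockIdx.x, which is uniform within a warp and therefore does not diverge:

```cpp
#include <cuda_runtime.h>

// Divergent: even and odd lanes of the same warp disagree on the branch, so the
// hardware runs the two paths one after the other with half the lanes masked.
__global__ void divergentScale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (threadIdx.x % 2 == 0) {
        data[i] = data[i] * 2.0f;   // executed while odd lanes are masked
    } else {
        data[i] = data[i] + 1.0f;   // executed while even lanes are masked
    }
}

// Uniform: the condition is identical for every thread in a warp (all threads of
// a block share blockIdx.x), so no serialization occurs.
__global__ void uniformScale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (blockIdx.x % 2 == 0) {
        data[i] = data[i] * 2.0f;
    } else {
        data[i] = data[i] + 1.0f;
    }
}
```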

Hierarchical Thread Organization: Grids, Blocks, and Threads

GPGPU programming models like CUDA and OpenCL use a hierarchical structure to manage the large number of threads. This abstraction allows programmers to organize parallel tasks logically and enables the hardware to schedule work efficiently across different GPU architectures.

The hierarchy has three levels:

  1. Thread: The fundamental unit of execution. Each thread executes an instance of the kernel function and is identified by a unique ID within its block.6
  2. Block (or Work-Group in OpenCL): A group of threads organized in a one-, two-, or three-dimensional structure. Threads within a block can cooperate using fast, on-chip shared memory and can synchronize their execution.7 All threads in a block are executed on the same Streaming Multiprocessor (SM).6
  3. Grid: A collection of blocks organized in a one-, two-, or three-dimensional structure. The grid encompasses all threads for a single kernel launch.6

This hierarchy maps the software model to the physical hardware. When a kernel is launched, the grid of blocks is distributed among the GPU’s SMs. Each SM can execute one or more blocks concurrently, depending on the resources (e.g., registers, shared memory) required by each block.6 The threads within each block are then executed by the SM’s cores in warps.

A critical aspect of this model is that threads within a block can communicate and synchronize, but threads in different blocks operate independently and cannot directly communicate.6 This independence allows blocks to be scheduled in any order on any available SM, which is the key to the model’s scalability. Code written using this model can automatically scale to run on future GPUs with more SMs, as the runtime system will distribute the blocks across the larger number of processors.8
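
The sketch below ties these levels together in CUDA C++. It assumes device buffers d_a, d_b, and d_c have already been allocated (e.g., with cudaMalloc); each thread derives a global index from the built-in blockIdx, blockDim, and threadIdx variables, and the launch configuration supplies enough blocks to cover all n elements:

```cpp
#include <cuda_runtime.h>

// Each thread handles one element: its position in the grid is computed from
// the built-in block and thread indices.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {                                    // the last block may overshoot n
        c[i] = a[i] + b[i];
    }
}

void launchVectorAdd(const float* d_a, const float* d_b, float* d_c, int n) {
    int threadsPerBlock = 256;                                        // block size
    int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock;  // grid size
    // The runtime may schedule these blocks on any SM, in any order.
    vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(d_a, d_b, d_c, n);
}
```

Because the kernel makes no assumption about how many blocks execute concurrently, the same launch runs unchanged on a GPU with a handful of SMs or with dozens of them.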

The GPU Memory Hierarchy

A GPU’s computational throughput is effective only if its cores are continuously supplied with data. As a result, GPGPU performance is often limited by memory access rather than computation.9 Effective use of the GPU’s multi-layered memory hierarchy is essential for achieving high performance and avoiding memory bottlenecks.10

The memory hierarchy involves a trade-off between speed, size, and scope. The different levels of memory are designed to correspond with the thread hierarchy.

| Memory Type | Location | Scope | Access Speed | Typical Capacity | Primary Use Case |
| --- | --- | --- | --- | --- | --- |
| Registers | On-Chip (in SM) | Per-Thread | Fastest (~1 cycle) | Kilobytes per SM | Frequently accessed thread-private variables |
| Shared Memory | On-Chip (in SM) | Per-Block | Very Fast (~10s of cycles) | Tens of Kilobytes per SM | User-managed cache; inter-thread communication within a block |
| L1/L2 Cache | On/Off-Chip | Per-SM / Per-Device | Fast | KB (L1) / MB (L2) | Hardware-managed cache for global/local memory accesses |
| Global Memory | Off-Chip (DRAM) | Per-Grid (Device-wide) | Slow (~100s of cycles) | Gigabytes | Main data storage for kernel input/output |
| Constant Memory | Off-Chip (DRAM), Cached | Per-Grid (Device-wide) | Fast (if cached) | Tens of Kilobytes | Read-only data broadcast to all threads (e.g., coefficients) |
| Texture Memory | Off-Chip (DRAM), Cached | Per-Grid (Device-wide) | Fast (if cached) | Gigabytes | Read-only data with spatial locality optimization |
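
To make the per-block scope of shared memory concrete, the hypothetical CUDA C++ kernel below stages each block’s inputs in on-chip shared memory, synchronizes with __syncthreads(), and produces one partial sum per block (it assumes the block size is a power of two no larger than 256):

```cpp
#include <cuda_runtime.h>

__global__ void blockSum(const float* in, float* blockResults, int n) {
    __shared__ float tile[256];                   // per-block, on-chip storage
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;   // stage one element per thread
    __syncthreads();                              // make all loads visible to the block

    // Tree reduction in shared memory; only threads of this block participate.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride) {
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) {
        blockResults[blockIdx.x] = tile[0];       // one partial sum per block
    }
}
```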

A key performance concept related to global memory is coalesced memory access. When all 32 threads in a warp access contiguous locations in global memory, the hardware can group these requests into a single, large memory transaction, maximizing effective memory bandwidth. Conversely, scattered, random memory access patterns result in multiple inefficient transactions, which significantly reduces performance. The architecture thus favors algorithms with structured and predictable memory access patterns.
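
As a sketch of the difference, the two hypothetical kernels below copy data with a coalesced and a strided pattern, respectively; the strided version forces each warp to spread its accesses over many memory transactions:

```cpp
#include <cuda_runtime.h>

// Coalesced: consecutive threads in a warp touch consecutive addresses, so the
// warp's loads and stores combine into a few wide transactions.
__global__ void copyCoalesced(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Strided: consecutive threads touch addresses `stride` elements apart, so the
// same warp needs many separate transactions and wastes most of each one.
__global__ void copyStrided(const float* in, float* out, int n, int stride) {
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    if (i < n) out[i] = in[i];
}
```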

GPGPU Programming Frameworks: A Comparative Analysis

Several programming frameworks exist for GPGPU, each with different design philosophies, strengths, and weaknesses. The choice of framework affects performance, portability, and developer productivity.

  • NVIDIA CUDA: As the first major GPGPU platform, CUDA has a mature and extensive ecosystem.11 It is a proprietary framework exclusive to NVIDIA GPUs.7 Its main advantages are its tight hardware integration, which often provides the highest performance, and its large collection of optimized libraries for specific domains (e.g., cuDNN for deep learning, cuBLAS for linear algebra). It also includes advanced developer tools like the Nsight profiler.12 The primary disadvantage is vendor lock-in, as CUDA code is not portable to hardware from other manufacturers.7
  • OpenCL (Open Computing Language): An open, royalty-free standard from the Khronos Group, OpenCL’s main advantage is portability.11 An OpenCL program can theoretically run on various hardware, including GPUs from NVIDIA, AMD, and Intel, as well as CPUs, FPGAs, and DSPs.7 However, this portability has drawbacks. The OpenCL standard can lag behind CUDA in supporting new hardware features, and vendor support may be inconsistent.13 Achieving optimal performance often requires hardware-specific optimizations, which can compromise the “write once, run anywhere” goal.11 The API is also generally more verbose than CUDA’s.14
  • SYCL: Also a Khronos Group standard, SYCL is a higher-level programming model built on top of backends like OpenCL.7 It enables developers to write single-source, modern C++ code for heterogeneous systems, abstracting away much of the boilerplate associated with OpenCL.7 Its goal is to provide the portability of OpenCL with a more integrated programming experience. As a newer standard, its ecosystem is less mature than CUDA’s, and the level of abstraction can sometimes hinder fine-grained, hardware-specific optimizations.15
  • DirectCompute: Microsoft’s GPGPU API, part of the DirectX suite.13 It is primarily used on the Windows operating system, especially in game development for tasks like physics simulations and post-processing effects. It is less common in scientific high-performance computing and AI, which are dominated by CUDA and OpenCL.13

The following table provides a summary comparison of the major GPGPU frameworks.

| Feature | NVIDIA CUDA | OpenCL | SYCL |
| --- | --- | --- | --- |
| Governing Body | NVIDIA (Proprietary) | Khronos Group (Open Standard) | Khronos Group (Open Standard) |
| Primary Language | C/C++ with extensions | C/C++-based kernel language | Modern C++ (single-source) |
| Hardware Support | NVIDIA GPUs only | CPUs, GPUs (NVIDIA, AMD, Intel), FPGAs, DSPs | CPUs, GPUs, FPGAs (via OpenCL or other backends) |
| Portability | Low (vendor-specific) | High (cross-vendor, cross-device) | High (built on OpenCL/other backends) |
| Ecosystem & Libraries | Extremely mature and extensive (cuDNN, cuBLAS, etc.) | Less extensive; vendor-specific libraries exist | Growing, but less mature than CUDA |
| Performance | Typically highest on NVIDIA hardware due to tight integration | Can be high, but may require vendor-specific tuning | Dependent on the underlying backend (e.g., OpenCL) |
| Ease of Use | High, with a well-documented, stable API | Moderate; more verbose and requires manual boilerplate | High; abstracts away boilerplate with modern C++ features |

  1. CUDA Refresher: Reviewing the Origins of GPU Computing | NVIDIA Technical Blog, accessed October 6, 2025, https://developer.nvidia.com/blog/cuda-refresher-reviewing-the-origins-of-gpu-computing/

  2. Runtime Comparison of CPU and GPU Using Portable … - SciSpace, accessed October 6, 2025, https://scispace.com/pdf/runtime-comparison-of-cpu-and-gpu-using-portable-programming-2hzy2njaya.pdf

  3. Single instruction, multiple threads - Wikipedia, accessed October 6, 2025, https://en.wikipedia.org/wiki/Single_instruction,_multiple_threads

  4. SIMT vs SIMD: Parallelism in Modern Processors - Benjamin H Glick, accessed October 6, 2025, https://www.glick.cloud/blog/simt-vs-simd-parallelism-in-modern-processors

  5. Cornell Virtual Workshop > Understanding GPU Architecture > GPU …, accessed October 6, 2025, https://cvw.cac.cornell.edu/gpu-architecture/gpu-characteristics/simt_warp

  6. Thread block (CUDA programming) - Wikipedia, accessed October 6, 2025, https://en.wikipedia.org/wiki/Thread_block_(CUDA_programming)

  7. Comparing SYCL, OpenCL, and CUDA: Matrix Multiplication …, accessed October 6, 2025, https://sgurwinderr.github.io/blog/sycl-opencl-cuda/

  8. CUDA programming model of threads, blocks, and grids, with… - ResearchGate, accessed October 6, 2025, https://www.researchgate.net/figure/CUDA-programming-model-of-threads-blocks-and-grids-with-corresponding-per-thread_fig3_224194485

  9. Dissecting GPU Memory Hierarchy through Microbenchmarking - arXiv, accessed October 6, 2025, https://arxiv.org/pdf/1509.02308

  10. Memory Hierarchy of GPUs - Arc Compute, accessed October 6, 2025, https://www.arccompute.io/arc-blog/gpu-101-memory-hierarchy

  11. Cuda OpenCL comparison cuda, openCL, nvidia - CUDA Programming and Performance, accessed October 6, 2025, https://forums.developer.nvidia.com/t/cuda-opencl-comparison-cuda-opencl-nvidia/14428

  12. GPU programming comparison: OpenCL vs Compute Shader vs CUDA vs Thrust - Reddit, accessed October 6, 2025, https://www.reddit.com/r/gamedev/comments/9pvq12/gpu_programming_comparison_opencl_vs_compute/

  13. OpenCL vs. DirectCompute? - Stack Overflow, accessed October 6, 2025, https://stackoverflow.com/questions/3172220/opencl-vs-directcompute

  14. CUDA vs OpenCL: Which One For GPU Programming? | Incredibuild, accessed October 6, 2025, https://www.incredibuild.com/blog/cuda-vs-opencl-which-to-use-for-gpu-programming

  15. SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? : r/cpp - Reddit, accessed October 6, 2025, https://www.reddit.com/r/cpp/comments/1im99l2/sycl_cuda_and_others_experiences_and_future/