Stanford CS149, Fall 2021
From smartphones to multi-core CPUs and GPUs to the world's largest supercomputers and websites, parallel processing is ubiquitous in modern computing. The goal of this course is to provide a deep understanding of the fundamental principles and engineering trade-offs involved in designing modern parallel computing systems, as well as to teach parallel programming techniques for effectively utilizing these machines. Because writing good parallel programs requires an understanding of key machine performance characteristics, this course covers both parallel hardware and software design.
All lectures are virtual
Instructors: Kayvon Fatahalian and Kunle Olukotun
See the course info page for details on policies and logistics.
Fall 2021 Schedule
Challenges of parallelizing code, motivations for parallel chips, processor basics
Forms of parallelism: multicore, SIMD, threading + understanding latency and bandwidth
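As a minimal sketch of the multicore form, the hypothetical example below splits an array-scaling loop across `std::thread` workers; the inner loop is also the kind of code a vectorizing compiler can map onto SIMD lanes.

```cpp
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

// Scale a contiguous chunk of the array; the loop body is independent
// per element, so it is a candidate for SIMD auto-vectorization.
void scale_chunk(float* data, size_t begin, size_t end, float s) {
    for (size_t i = begin; i < end; i++)
        data[i] *= s;
}

int main() {
    const size_t N = 1 << 20;
    std::vector<float> data(N, 1.0f);
    unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());

    // One thread per hardware context, each owning a contiguous chunk.
    std::vector<std::thread> workers;
    size_t chunk = N / nthreads;
    for (unsigned t = 0; t < nthreads; t++) {
        size_t begin = t * chunk;
        size_t end = (t == nthreads - 1) ? N : begin + chunk;
        workers.emplace_back(scale_chunk, data.data(), begin, end, 2.0f);
    }
    for (auto& w : workers) w.join();
    printf("data[0] = %f\n", data[0]);  // expect 2.0
}
```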
Ways of thinking about parallel programs, and their corresponding hardware implementations, ISPC programming
Thought process of parallelizing a program in data parallel and shared address space models
Achieving good work distribution while minimizing overhead, scheduling Cilk programs with work stealing
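As a taste of the fork-join style that Cilk schedules with work stealing, here is a sketch that uses `std::async` as a stand-in for `cilk_spawn`/`cilk_sync`; a real Cilk runtime would instead place these tasks on per-worker deques that idle workers steal from.

```cpp
#include <cstdio>
#include <future>

long fib_serial(long n) { return n < 2 ? n : fib_serial(n - 1) + fib_serial(n - 2); }

// "Spawn" one recursive call as a task another worker could run, keep the
// other call on this worker, then "sync" by waiting on the future.
long fib(long n) {
    if (n < 16) return fib_serial(n);  // coarsen leaves to limit task overhead
    auto x = std::async(std::launch::async, fib, n - 1);
    long y = fib(n - 2);
    return x.get() + y;
}

int main() { printf("fib(30) = %ld\n", fib(30)); }  // expect 832040
```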
Message passing, async vs. blocking sends/receives, pipelining, increasing arithmetic intensity, avoiding contention
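To make the async vs. blocking distinction concrete, below is a small two-rank sketch in MPI, used here only as a representative message-passing API (the message contents and tags are made up). `MPI_Send` returns once the send buffer is safe to reuse, while `MPI_Isend` returns immediately so independent work can overlap the transfer.

```cpp
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int msg = 42, recv_buf = 0;
    if (rank == 0) {
        // Blocking send: returns once `msg` can be safely overwritten.
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

        // Async send: returns immediately; do independent work, then wait.
        MPI_Request req;
        MPI_Isend(&msg, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &req);
        /* ... independent computation could overlap the transfer here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(&recv_buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&recv_buf, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", recv_buf);
    }
    MPI_Finalize();
    return 0;
}
```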
CUDA programming abstractions, and how they are implemented on modern GPUs
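A minimal sketch of those abstractions: a SAXPY kernel executed by a 1D grid of thread blocks (error checks omitted; the 256-thread block size is an arbitrary choice).

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each CUDA thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int N = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, N * sizeof(float));
    cudaMallocManaged(&y, N * sizeof(float));
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all N elements.
    int threads = 256;
    int blocks = (N + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(N, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 5.0
    cudaFree(x);
    cudaFree(y);
}
```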
Data-parallel operations like map, reduce, scan, prefix sum, groupByKey
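As sequential reference semantics for these primitives (the lecture's subject is their parallel implementations), a short sketch:

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> v = {3, 1, 7, 0, 4};

    // map: apply a function to every element independently
    std::transform(v.begin(), v.end(), v.begin(), [](int x) { return 2 * x; });

    // reduce: combine all elements with an associative operator
    int sum = std::accumulate(v.begin(), v.end(), 0);

    // exclusive scan (prefix sum): out[i] = sum of v[0..i-1]
    std::vector<int> prefix(v.size());
    std::exclusive_scan(v.begin(), v.end(), prefix.begin(), 0);

    printf("reduce = %d, scan[4] = %d\n", sum, prefix[4]);  // 30, 22
}
```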
Producer-consumer locality, RDD abstraction, Spark implementation and scheduling
Definition of memory coherence, invalidation-based coherence using MSI and MESI, false sharing
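A small illustration of false sharing, assuming a 64-byte cache line: the two threads below increment logically independent counters, and without the `alignas(64)` padding both counters would share one line whose ownership ping-pongs between cores under invalidation-based coherence.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

// alignas(64) pads each counter to its own cache line (64 bytes is a
// typical line size), so the two threads' writes do not invalidate
// each other's copies.
struct Padded {
    alignas(64) std::atomic<long> value{0};
};

Padded counters[2];

void work(int id) {
    for (long i = 0; i < 10'000'000; i++)
        counters[id].value.fetch_add(1, std::memory_order_relaxed);
}

int main() {
    std::thread t0(work, 0), t1(work, 1);
    t0.join();
    t1.join();
    printf("%ld %ld\n", counters[0].value.load(), counters[1].value.load());
}
```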
Consistency vs. coherence, relaxed consistency models and their motivation, acquire/release semantics
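A sketch of acquire/release in C++ atomics: the release store publishes the preceding ordinary write, and any thread whose acquire load observes the flag is guaranteed to also see that write.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                  // ordinary write
    ready.store(true, std::memory_order_release);  // publishes payload
}

void consumer() {
    // The acquire load synchronizes with the release store above.
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    printf("payload = %d\n", payload);             // guaranteed to print 42
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```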
Implementation of locks, fine-grained synchronization via locks, basics of lock-free programming: single-reader/writer queues, lock-free stacks, the ABA problem, hazard pointers
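A minimal Treiber-style lock-free stack sketch, which is also where the ABA problem appears: pop's compare-and-swap can succeed against a pointer that was popped, freed, and reallocated in the interim, which is what hazard pointers guard against.

```cpp
#include <atomic>
#include <cstdio>

struct Node { int value; Node* next; };
std::atomic<Node*> top{nullptr};

void push(int v) {
    Node* n = new Node{v, top.load()};
    // Retry the CAS until we swing `top` from the snapshot in n->next to n;
    // on failure, compare_exchange_weak refreshes n->next with current top.
    while (!top.compare_exchange_weak(n->next, n)) { /* retry */ }
}

bool pop(int* out) {
    Node* n = top.load();
    // On failure, n is refreshed with the current top and we retry.
    while (n && !top.compare_exchange_weak(n, n->next)) { /* retry */ }
    if (!n) return false;
    *out = n->value;
    delete n;  // unsafe if another thread still holds n: hence hazard pointers
    return true;
}

int main() {
    push(1);
    push(2);
    int v;
    while (pop(&v)) printf("%d\n", v);  // prints 2 then 1
}
```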
No class (Stanford Election Day Holiday)
Motivation for transactions, design space of transactional memory implementations
Finishing up transactional memory, focusing on STM and HTM implementations
Energy-efficient computing, motivation for heterogeneous processing, fixed-function processing, FPGAs, mobile SoCs
Performance/productivity motivations for DSLs, case study on Halide image processing DSL
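The canonical Halide example is a separable 3x3 blur: the algorithm is stated once as pure functions, and the schedule (tiling, vectorization, multicore parallelism) is specified separately.

```cpp
#include "Halide.h"
using namespace Halide;

// Algorithm: a separable 3x3 box blur written as pure functions.
Func blur_3x3(Func input) {
    Func blur_x("blur_x"), blur_y("blur_y");
    Var x("x"), y("y"), xi("xi"), yi("yi");

    blur_x(x, y) = (input(x - 1, y) + input(x, y) + input(x + 1, y)) / 3;
    blur_y(x, y) = (blur_x(x, y - 1) + blur_x(x, y) + blur_x(x, y + 1)) / 3;

    // Schedule: tile the output, vectorize the inner loop, parallelize
    // across rows, and compute blur_x on demand within each tile.
    blur_y.tile(x, y, xi, yi, 256, 32).vectorize(xi, 8).parallel(y);
    blur_x.compute_at(blur_y, x).vectorize(x, 8);
    return blur_y;
}
```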
Domain-specific frameworks for graph processing, streaming graph processing, graph compression, DRAM basics
Programming reconfigurable hardware like FPGAs and CGRAs
Scheduling conv layers, exploiting precision and sparsity, DNN accelerators (e.g., GPU Tensor Cores, TPU)
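For orientation, a naive reference for the loop nest being scheduled (a direct convolution layer for a single image, no padding or stride; the layout and dimension names are illustrative):

```cpp
// Naive direct convolution: the deep loop nest whose tiling, reordering,
// and mapping onto parallel hardware is the scheduling problem.
// Layout: out[k][y][x], in[c][y][x], weights w[k][c][r][s].
void conv2d(float* out, const float* in, const float* w,
            int K, int C, int H, int W, int R, int S) {
    int OH = H - R + 1, OW = W - S + 1;
    for (int k = 0; k < K; k++)                // output channels
        for (int y = 0; y < OH; y++)           // output rows
            for (int x = 0; x < OW; x++) {     // output cols
                float acc = 0.f;
                for (int c = 0; c < C; c++)    // input channels
                    for (int r = 0; r < R; r++)      // filter rows
                        for (int s = 0; s < S; s++)  // filter cols
                            acc += in[(c * H + (y + r)) * W + (x + s)] *
                                   w[((k * C + c) * R + r) * S + s];
                out[(k * OH + y) * OW + x] = acc;
            }
}
```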
Written Assignments
Sep 30: Written Assignment 1
Oct 7: Written Assignment 2
Oct 26: Written Assignment 3
Nov 10: Written Assignment 4
Nov 30: Written Assignment 5