Visual computing tasks such as computational imaging, image/video understanding, and real-time 3D graphics are key responsibilities of modern computer systems, ranging from sensor-rich smartphones and autonomous robots to large datacenters. These workloads demand exceptional system efficiency, and this course examines the key ideas, techniques, and challenges associated with the design of parallel, heterogeneous systems that accelerate visual computing applications. The course is intended both for systems students interested in architecting efficient graphics, image processing, and computer vision platforms (new hardware architectures as well as domain-optimized programming frameworks for these platforms) and for graphics, vision, and machine learning students who wish to understand throughput computing principles so they can design new algorithms that map efficiently to these machines.
Review of multi-core, multi-threading, SIMD, caches, and the value of hardware specialization
Algorithms for taking raw sensor pixels to an RGB image: demosaicing, sharpening, correcting lens aberrations, multi-shot alignment/merging, image filtering
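To make the demosaicing step concrete, here is a minimal NumPy sketch of bilinear interpolation on an RGGB Bayer mosaic. The function names (`conv2`, `demosaic_bilinear`) and the RGGB layout are illustrative assumptions; real camera pipelines use far more sophisticated, edge-aware methods.

```python
import numpy as np

def conv2(img, k):
    """Naive 'same' 2D convolution with zero padding (for clarity, not speed)."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (2D float array) to RGB."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1   # red samples
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1   # blue samples
    g_mask = 1 - r_mask - b_mask                        # green samples
    # Kernels that average the nearest same-color samples at missing sites
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    r = conv2(raw * r_mask, k_rb)
    g = conv2(raw * g_mask, k_g)
    b = conv2(raw * b_mask, k_rb)
    return np.stack([r, g, b], axis=-1)
```

Note that each output pixel only reads a small neighborhood of the mosaic, which is what makes this stage highly parallel and a good fit for SIMD and ISP hardware.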
Multi-scale processing with Gaussian and Laplacian pyramids, HDR (local tone mapping), portrait mode
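The pyramid structure underlying this lecture can be sketched in a few lines of NumPy. This toy version uses a 2x2 box filter for downsampling and nearest-neighbor upsampling (real pyramids use better filters), but it shows the key property: each Laplacian level stores the detail lost by one downsample, so the decomposition is exactly invertible.

```python
import numpy as np

def downsample2(img):
    """2x2 box filter followed by stride-2 sampling (crude stand-in for pyrDown)."""
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample2(img, shape):
    """Nearest-neighbor upsample back to `shape` (stand-in for pyrUp)."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Each Laplacian level is the residual between a Gaussian level and
    the upsampled next-coarser level."""
    pyr, cur = [], img
    for _ in range(levels):
        down = downsample2(cur)
        pyr.append(cur - upsample2(down, cur.shape))
        cur = down
    pyr.append(cur)  # coarsest Gaussian level
    return pyr

def reconstruct(pyr):
    """Invert the pyramid by adding detail back level by level."""
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = upsample2(cur, lap.shape) + lap
    return cur
```

Local tone mapping and HDR merging manipulate the per-level residuals before calling `reconstruct`, which is why the pyramid representation matters for these applications.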
Autofocus, autoexposure, use of ML in advanced camera operations, the Frankencamera
Balancing locality, parallelism, and work, fusion and tiling, design of the Halide domain-specific language, automatically scheduling image processing pipelines
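The locality/work trade-off at the heart of Halide scheduling can be illustrated with a two-stage separable blur in plain NumPy. A hypothetical "breadth-first" schedule materializes the full intermediate; a tiled schedule recomputes a small halo per tile so the intermediate stays cache-sized. Both produce identical output; function names are illustrative, not Halide's API.

```python
import numpy as np

def blur_x(img):
    """Horizontal 3-tap box blur with edge replication."""
    p = np.pad(img, ((0, 0), (1, 1)), mode='edge')
    return (p[:, :-2] + p[:, 1:-1] + p[:, 2:]) / 3.0

def blur_y(img):
    """Vertical 3-tap box blur with edge replication."""
    p = np.pad(img, ((1, 1), (0, 0)), mode='edge')
    return (p[:-2, :] + p[1:-1, :] + p[2:, :]) / 3.0

def blur_breadth_first(img):
    """Materialize the entire blur_x intermediate before starting blur_y:
    minimum work, maximum producer-consumer distance (poor locality)."""
    return blur_y(blur_x(img))

def blur_tiled(img, tile=8):
    """Tile the output rows and recompute blur_x per tile (plus a 1-row halo):
    redundant work at tile edges, but the intermediate stays small."""
    h, _ = img.shape
    out = np.empty_like(img)
    for y0 in range(0, h, tile):
        y1 = min(y0 + tile, h)
        ya, yb = max(y0 - 1, 0), min(y1 + 1, h)  # halo rows needed by blur_y
        bx = blur_x(img[ya:yb])                  # small, tile-local intermediate
        out[y0:y1] = blur_y(bx)[y0 - ya:y1 - ya]
    return out
```

Halide's contribution is separating this scheduling decision (tile sizes, fusion, parallelization) from the algorithm definition, so that the two functions above would be one algorithm with two schedules.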
Popular DNN trunks and topologies, where the compute lies in modern networks, data layout optimizations, scheduling decisions, modern code generation frameworks
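One reason "where the compute lies" matters is that convolution layers are typically lowered onto dense matrix multiplication. Here is a minimal im2col sketch in NumPy (a common lowering strategy, though not the only one; the function name and 'valid' padding are illustrative choices):

```python
import numpy as np

def conv2d_im2col(x, w):
    """'Valid' convolution of x[C, H, W] with filters w[K, C, R, S],
    lowered to a single large matrix multiply via im2col."""
    C, H, W = x.shape
    K, C2, R, S = w.shape
    assert C == C2
    Ho, Wo = H - R + 1, W - S + 1
    # Gather every receptive field into a column: shape (C*R*S, Ho*Wo)
    cols = np.empty((C * R * S, Ho * Wo))
    idx = 0
    for c in range(C):
        for r in range(R):
            for s in range(S):
                cols[idx] = x[c, r:r + Ho, s:s + Wo].reshape(-1)
                idx += 1
    # The whole layer is now one (K, C*R*S) x (C*R*S, Ho*Wo) GEMM
    return (w.reshape(K, -1) @ cols).reshape(K, Ho, Wo)
```

The data duplication in `cols` is the cost of turning the layer into a GEMM; data layout (e.g., NCHW vs. NHWC) and tiling decisions determine whether that GEMM runs near machine peak, which is the topic of the second programming assignment.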
GPUs, TPUs, special instructions for DNN evaluation (and their efficiency vs custom ASIC), choice of precision in arithmetic, modern commercial DNN accelerators, flexibility vs efficiency trade-offs
If the most important step of ML is acquiring labeled data for training and validation, why don't we have better systems for it?
Systems for specifying models at a higher level of abstraction than DNN architecture graphs (Overton, Ludwig). Goal: removing the need for a low-level ML engineer.
H.264 video representation/encoding, parallel encoding, motivations for ASIC acceleration, ML-based compression methods, emerging opportunities for compression when machines, not humans, will observe most images
System design issues in building a video conferencing system: reducing latency, bandwidth requirements, etc. How real-time video analysis will enable richer video-based applications.
The light field, initial discussion of NeRF algorithms
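The core of NeRF-style rendering is a simple numerical quadrature along each ray: densities and colors at sample points are composited front to back. A minimal NumPy sketch of that compositing step (function and argument names are illustrative):

```python
import numpy as np

def composite_along_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering quadrature for one ray.
    sigmas: (N,) densities, colors: (N, 3) RGB, deltas: (N,) sample spacings."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return weights @ colors                                # (3,) pixel color
```

Evaluating this per ray, with hundreds of network queries per sample point, is what makes naive NeRF rendering so expensive, and sets up the performance discussion in the following lectures.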
Discussion of the arc of NeRF papers, plus a review of the 3D rasterization pipeline (so we can discuss its performance challenges next).
Scheduling the graphics pipeline onto parallel GPUs; key optimizations for modern, power-optimized mobile GPUs.
Modern hardware acceleration (RTX GPUs), memory coherence challenges, converting noisy images to clean images using neural techniques.
Topic will depend on how quickly we get through the other lectures.
Epic’s Nanite Renderer (Guest Lecture: Brian Karis - Epic Games)
Brian Karis will talk about the design of Epic's Nanite renderer.
Rendering and Simulation for Model Training
How might systems for rendering and simulating virtual worlds be architected differently to support the needs of training machines instead of playing video games? (a.k.a. rendering for machine eyes, not human eyes)
The Slang Shading Language (Guest Lecture: Yong He and Teresa Foley - NVIDIA)
The design and implementation of Slang, discussion about transferring academic systems research into industry efforts.
Guest Lecture II
To be announced
In addition to the expectation that all students attend and participate in discussions during live lectures, there will be two short programming assignments and a self-selected term project.
| Due Date | Assignment |
| --- | --- |
| Apr 18 | Burst Mode HDR Camera RAW Processing |
| Apr 29 | Scheduling a DNN Conv Layer (Making Students Appreciate cuBLAS) |
| Jun 3 | Term Project |