ghostcow

To summarize some of the discussion on future slides, I thought it would be helpful to break down the abstraction and implementation of each of the programming models below.

Shared address space

Abstraction: a computer's memory is organized as an array of bytes, and each byte is identified by its address in memory (its position in this array). Threads share this single address space and communicate implicitly by reading and writing shared variables.

Implementation: most data lives in DRAM (perhaps some on disk if swap is used), and some data is stored in caches. The instruction "load the value stored at address X into register R0" may involve a complex sequence of operations across multiple levels of data cache and accesses to DRAM.
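A minimal sketch of the abstraction (not from the slides), using Python's threading module: the threads never exchange messages, they just load and store against the same memory location, with a lock supplying the synchronization the programmer is responsible for.

```python
import threading

counter = [0]            # shared state, visible to every thread in the process
lock = threading.Lock()  # synchronization is the programmer's responsibility

def worker(n_iters):
    for _ in range(n_iters):
        with lock:            # without the lock, the read-modify-write would race
            counter[0] += 1   # load, add, store against shared memory

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter[0])  # 400000: all four threads updated the same location
```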

Message passing

Abstraction: threads operate within their own private address spaces and communicate by sending/receiving messages (e.g. signals sent between processes)

Implementation: hardware need not implement system-wide loads and stores to execute message passing programs (it only needs to communicate messages between nodes). We can connect commodity systems together to form a large parallel machine (message passing is a common programming model for clusters and supercomputers)
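A minimal sketch of the abstraction (not from the slides), using Python's multiprocessing module: each process has its own private address space, so the only way data moves between them is through explicit send/receive operations on a queue.

```python
from multiprocessing import Process, Queue

def producer(q):
    for i in range(5):
        q.put(i * i)      # send a message; the value is copied, not shared
    q.put(None)           # sentinel: tell the consumer we are done

def consumer(q):
    while True:
        msg = q.get()     # receive (blocks until a message arrives)
        if msg is None:
            break
        print("received", msg)

if __name__ == "__main__":
    q = Queue()
    p1 = Process(target=producer, args=(q,))
    p2 = Process(target=consumer, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()
```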

Data-parallel

Abstraction: perform the same function on each element of a large collection

Implementation: organize computation as operations on sequences of elements (e.g. perform the same function on all elements of a sequence), perhaps via libraries like PyTorch and NumPy that internally use SIMD instructions to perform this kind of vectorization.
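A minimal sketch of the abstraction (not from the slides), using NumPy: the computation is expressed as one operation over the whole collection, and the library's vectorized kernels (which may use SIMD internally) apply it element by element.

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# Data-parallel formulation: one operation applied to every element.
y = np.sqrt(x) * 2.0 + 1.0

# Equivalent (much slower) explicit scalar loop, for comparison.
y_loop = np.empty_like(x)
for i in range(x.size):
    y_loop[i] = np.sqrt(x[i]) * 2.0 + 1.0

assert np.allclose(y, y_loop)
```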
