Slide 68 of 116
dma1

Is the tradeoff between this design and the one on the previous slide that these large contexts can do more arithmetic per thread than the smaller ones, but are worse at running many smaller tasks overall, due to their reduced latency-hiding ability?

evs

Slides 76 and 77 explain the pros and cons of these two approaches, but you're pretty much right. Larger contexts work best when you have to run more arithmetic operations and don't need to switch threads as often.
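To make the tradeoff concrete, here is a back-of-the-envelope model (my own sketch, not from the lecture; the cycle counts are made up for illustration). Each thread alternates a burst of arithmetic with a long memory stall; with more hardware contexts, the core can switch to another ready thread during a stall and keep its ALUs busy.

```python
def utilization(contexts, work_cycles=4, stall_cycles=12):
    """Fraction of cycles spent doing arithmetic on one core.

    Toy model: each thread runs `work_cycles` of arithmetic, then
    stalls for `stall_cycles` on memory. A stall is fully hidden once
    the other (contexts - 1) threads supply enough work to cover it,
    i.e. when (contexts - 1) * work_cycles >= stall_cycles.
    """
    return min(1.0, contexts * work_cycles / (work_cycles + stall_cycles))

# One big context: the core idles through every stall.
print(utilization(1))  # → 0.25

# Four smaller contexts: stalls are fully hidden, ALUs stay busy.
print(utilization(4))  # → 1.0
```

The flip side, per the slides: carving the register file into more contexts means each thread gets fewer registers, which is exactly why the larger-context design wins when threads are arithmetic-heavy and rarely stall.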

ccheng18

These threads are executed on a multicore processor, right? A processor has a number of ALUs and processing units that are shared by the cores, and the cores are the execution contexts that represent each thread, right?
