This is similar to the concept of a task queue that we used in dynamic assignment in previous lectures.
"GPU maps thread blocks to cores using a dynamic scheduling policy that respects resource requirements"
As I learned from working on PA3, shared memory has to be declared either statically (so the size is known at compile time) or dynamically with "extern", in which case the host passes the size at kernel launch. Is this the way the GPU enforces these requirements?
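A minimal sketch of the two declaration styles mentioned above (the kernel names and sizes here are illustrative, not from PA3):

```cuda
// Static shared memory: size is fixed at compile time.
__global__ void static_smem_kernel(float *out) {
    __shared__ float buf[256];
    int i = threadIdx.x;
    buf[i] = (float)i;
    __syncthreads();
    out[i] = buf[i];
}

// Dynamic shared memory: size is supplied by the host at launch
// time via the third <<<...>>> launch parameter, not at compile time.
__global__ void dynamic_smem_kernel(float *out) {
    extern __shared__ float buf[];
    int i = threadIdx.x;
    buf[i] = (float)i;
    __syncthreads();
    out[i] = buf[i];
}

// Host side (illustrative launches):
//   static_smem_kernel<<<1, 256>>>(d_out);
//   dynamic_smem_kernel<<<1, 256, 256 * sizeof(float)>>>(d_out);
```

Either way, the per-block shared memory requirement is known before the block is scheduled, which is what lets the scheduler respect resource limits when assigning blocks to cores.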