Slide 42 of 63
wkorbe

If writing schedules is analogous to designing how your custom parallel code should be partitioned and run, then there's also only a very small number of us who can be trusted to do that manually. ;)

sanjayen

@wkorbe, agreed!! I'm really curious how [Adams 2019] approached the task of generating efficient schedules, especially because this is something we should be able to roughly understand. From the paper, it seems the compiler combines a trained cost model (built with symbolic analysis and machine learning) with a backtracking tree search over candidate schedules. Some aspects of the paper are definitely outside the scope of this class, but it would be cool to distill the key principles that drive the search for an optimal schedule, since that's of great interest to us as CS149 programmers!
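To make the "cost model + tree search" idea concrete, here is a minimal sketch of the general technique, not the paper's actual implementation: a schedule is modeled as one (tile size, vectorize?, parallelize?) choice per pipeline stage, `predicted_cost` is a made-up heuristic standing in for the learned cost model, and the search is a simple beam search over partial schedules. All names and numbers below are hypothetical.

```python
import heapq

# Hypothetical schedule space: for each pipeline stage, pick a tile size
# and whether to vectorize and/or parallelize that stage's loop nest.
TILE_SIZES = [8, 16, 32, 64]
CHOICES = [(t, v, p) for t in TILE_SIZES
           for v in (False, True)
           for p in (False, True)]

def predicted_cost(partial_schedule):
    """Stand-in for a trained cost model: lower is 'better'.
    A real model predicts runtime from features of the loop nest;
    this one just applies crude speedup factors."""
    cost = 0.0
    for tile, vec, par in partial_schedule:
        work = 1024 / tile          # fewer, larger tiles -> less loop overhead
        if vec:
            work /= 4               # pretend 4-wide vector lanes
        if par:
            work /= 8               # pretend 8 cores, ignoring overheads
        cost += work + 0.01 * tile  # crude penalty for cache pressure
    return cost

def beam_search(num_stages, beam_width=4):
    """Expand the schedule one stage at a time, keeping only the
    beam_width cheapest partial schedules at each step."""
    beam = [((), 0.0)]
    for _ in range(num_stages):
        candidates = []
        for sched, _ in beam:
            for choice in CHOICES:
                new = sched + (choice,)
                candidates.append((new, predicted_cost(new)))
        beam = heapq.nsmallest(beam_width, candidates, key=lambda c: c[1])
    return beam[0]

best_schedule, best_cost = beam_search(num_stages=3)
print(best_schedule, best_cost)
```

The search never runs any candidate on real hardware; it trusts the cost model to rank partial schedules, which is why the quality of that model (and of its features) matters so much.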

gmudel

@sanjayen The feature extraction/selection seems to be the hardest and most interesting part of the paper -- i.e., just figuring out which factors of a candidate schedule are relevant to predicting its cost. The last two pages of the paper list all of them.

amagibaba

This is indeed super interesting -- so, in a gist, Halide does trial and error within the possibility space (which is already pruned, since the human still writes the algorithm and the search only has to choose among tiling, vectorizing, parallelizing, and so on) and determines the best strategy? And each time the program is compiled on a different machine, the chosen strategy might be different?
