lindenli
sanjayen
@lindenli do you know what kinds of networks would see the greatest speedup from running on GPUs? I imagine most networks expose a lot of parallel work, but I'm curious about the magnitude of the differences - could we imagine specialized hardware for different types of networks?
jiaju
I'm not well versed in compression schemes and how they affect data locality for sparse matrix computations, so I wonder how much the size reduction and compression from model pruning actually speed up matrix multiplication on SIMD processors. For example, is it harder to block a matrix multiply when the matrices are sparse?
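One way to see why blocking is harder for sparse matrices: in a compressed format like CSR, each row stores only its nonzeros plus their column indices, so the right-hand-side accesses become irregular gathers instead of contiguous, cache-friendly streams. A minimal sketch (the CSR layout here is standard, but this toy example is mine, not from the lecture):

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x with A stored in CSR form.
    Each row gathers scattered entries of x via its column indices,
    which defeats the regular access pattern dense blocking relies on."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(indptr) - 1):
        start, end = indptr[i], indptr[i + 1]
        # indices[start:end] are irregular column ids -> a gather,
        # not a contiguous SIMD-friendly load
        y[i] = np.dot(data[start:end], x[indices[start:end]])
    return y

# A = [[2, 0, 0],
#      [0, 0, 3],
#      [1, 4, 0]]
data = np.array([2.0, 3.0, 1.0, 4.0])
indices = np.array([0, 2, 0, 1])
indptr = np.array([0, 1, 2, 4])
x = np.array([1.0, 1.0, 1.0])
print(csr_matvec(data, indices, indptr, x))  # [2. 3. 5.]
```

Because the nonzero positions differ from row to row, a fixed tile of the output no longer touches a predictable tile of the inputs, which is the core difficulty pruning introduces for blocked SIMD kernels.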
Copyright 2021 Stanford University
Convolutional networks are a great candidate for GPUs: their convolutions lower to large matrix multiplications with high arithmetic intensity, which is exactly the kind of work a GPU's many ALUs need to stay busy.
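The "high arithmetic intensity" claim is easy to quantify: an n x n matrix multiply performs about 2n^3 flops while, under ideal caching, it moves only the 3n^2 matrix elements to and from memory, so flops-per-byte grows linearly with n. A rough back-of-the-envelope sketch (the ideal-caching assumption and 4-byte elements are my simplifications):

```python
def matmul_arithmetic_intensity(n, bytes_per_elem=4):
    """Flops per byte of an n x n matrix multiply, assuming each of the
    three matrices is moved to/from memory exactly once (ideal caching)."""
    flops = 2 * n ** 3                      # n^3 multiply-adds
    bytes_moved = 3 * n ** 2 * bytes_per_elem
    return flops / bytes_moved

for n in (64, 512, 4096):
    # intensity scales linearly with n, so big matmuls are compute-bound
    print(n, matmul_arithmetic_intensity(n))
```

This is why large matrix multiplications are compute-bound rather than bandwidth-bound, and why hardware with lots of ALUs (GPUs, or matrix units in more specialized accelerators) pays off for them.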