ayushaga

Here, sum is a variable shared by all of the threads that are given independent iterations of the loop over the neighbors. It is therefore important either to update sum atomically inside the inner loop, or to keep a per-thread local copy of sum and combine the local copies into the final sum at the end.
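A minimal sketch of the two options, assuming the gather loop were parallelized with OpenMP (this is my own illustration, not the slide's code):

```cpp
#include <vector>

// Option 1: atomically update the shared sum on every iteration.
double gather_atomic(const std::vector<double>& contrib) {
    double sum = 0.0;
    #pragma omp parallel for
    for (int i = 0; i < (int)contrib.size(); i++) {
        #pragma omp atomic        // serialize each update to the shared variable
        sum += contrib[i];
    }
    return sum;
}

// Option 2: each thread accumulates into a private copy of sum;
// the copies are combined once at the end of the loop.
double gather_local(const std::vector<double>& contrib) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < (int)contrib.size(); i++) {
        sum += contrib[i];
    }
    return sum;
}
```

The second form usually performs better, since it avoids synchronizing on every update and pays the combination cost only once per thread.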

It's me!

A vertex program defines per-vertex operations that can be run on all the vertices of the graph; the PageRank vertex program on this slide is one such example. In the foreach loop, the weighted sum of the neighbors' ranks is computed in parallel, and in the next step the rank of the current vertex is updated. It also uses convenient built-in functions such as in_neighbours and num_out_neighbours to enumerate a vertex's incoming edges and count its outgoing edges.
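To make the "per-vertex operation" idea concrete, here is a rough, self-contained sketch over a toy adjacency structure (plain C++; names like in_nbrs, out_degree, and ALPHA are my placeholders, not GraphLab's actual API, and the exact rank formula may differ from the slide's):

```cpp
#include <vector>

// Toy graph representation: per-vertex in-neighbor lists, out-degrees, and ranks.
struct Graph {
    int n;                                   // number of vertices
    std::vector<std::vector<int>> in_nbrs;   // in_nbrs[v] = vertices with an edge into v
    std::vector<int> out_degree;             // out_degree[u] = number of outgoing edges of u
    std::vector<double> rank;                // current rank of each vertex
};

const double ALPHA = 0.15;   // assumed weighting constant; the slide's alpha may differ

// Per-vertex operation: compute v's new rank from its in-neighbors.
// A runtime like GraphLab could invoke this on many vertices in parallel.
double pagerank_vertex_program(const Graph& g, int v) {
    double sum = 0.0;
    for (int u : g.in_nbrs[v]) {                  // "foreach" over incoming neighbors
        sum += g.rank[u] / g.out_degree[u];       // weight by each neighbor's out-degree
    }
    return ALPHA / g.n + (1.0 - ALPHA) * sum;     // blend uniform term with gathered sum
}
```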

apappu

This reminds me of Kunle's descriptions of imperative vs. declarative programming models (in the context of transactional memory). This slide is fairly declarative: it describes what needs to be done at a high level, with very few implementation details, which GraphLab takes care of (e.g., scheduling and parallelization, as indicated on the slide).

narwhale

From the equation and the code, it seems like alpha can be tuned to increase or decrease how much a vertex's rank is determined by its neighbors, and thus how much variance there can be across the different ranks.
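For concreteness, one common way to write the update (I'm assuming the slide's equation follows this convention) is

$$ R[v] \;=\; \frac{\alpha}{N} \;+\; (1-\alpha)\sum_{u \in \mathrm{in}(v)} \frac{R[u]}{\mathrm{outdeg}(u)} $$

so at alpha = 1 every vertex gets the uniform value 1/N (no variance across ranks), while at alpha = 0 a vertex's rank comes entirely from its neighbors.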
