Slide 10 of 63
lee

This was the demo where everyone in the class added up the number of units they were taking, then we tried to sum everyone's counts. It took me 1 second to get my unit count (and let's say it's similar for everyone else). This takes so little time because most people know their unit count off the top of their head! BUT let's say there are 500 people enrolled in the class, and it takes 5 seconds to add one person's unit count to the running total. If everyone recalls their own count in parallel, we save the 1 second per person it takes to get the count, but that still leaves 500*5 = 2500 seconds of serial additions :( If each of those 500 people is their own "processor", is it worth having all those extra processors for less than a 20% speedup? A lot of the time, the answer is no!
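The arithmetic in the comment above can be sketched in a few lines of Python (the numbers here are the ones assumed in the comment, not measurements):

```python
# Unit-count demo arithmetic, using the comment's assumed numbers:
# 500 people, 1 s to recall a unit count, 5 s per addition.
PEOPLE = 500
RECALL_S = 1   # time for one person to recall their unit count
ADD_S = 5      # time to add one count to the running total

# Fully sequential: each person recalls their count, then it gets added.
sequential = PEOPLE * (RECALL_S + ADD_S)   # 3000 s

# "Parallel" recall: everyone recalls at once (1 s total),
# but the 500 additions are still a serial chain.
parallel = RECALL_S + PEOPLE * ADD_S       # 2501 s

speedup = sequential / parallel            # ~1.2x, i.e. under 20% faster
print(sequential, parallel, round(speedup, 2))
```

The serial chain of additions dominates, which is why 500 "processors" buy so little here.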

qwerty

This is why managing communication between processors is important, and it seems one way to achieve this is to limit the number of processors that have to communicate with each other. There is a tradeoff between the work allocated to each processor and the communication between them.

cassiez

This demo shows that sometimes the communication cost is too high compared to the computation cost, and we can't make a program run faster simply by stacking up computing resources and increasing the level of parallelism. A similar experience I've had: while playing the multiplayer co-op game "Overcooked" (players work together to prepare and cook dishes), 2 players can get things done a lot faster than a single player, but 4 players almost always end up in a mess (bumping into each other, waiting on a cutting board, chaotic communication, etc).

xiyan

This demo shows that as we scale up the number of processors, the computation cost is reduced (each processor has less work), but performance becomes bottlenecked by the communication cost between the many processors. This highlights the importance of managing both computation cost and communication cost when designing parallel programs. It makes me wonder what some example scenarios are where we would benefit more from reducing computation cost vs. reducing communication cost, and vice versa.

tigerpanda

I don't think I realized until now that distributing tasks to be done in parallel comes at the cost of more communication channels that must be maintained. It reminds me of an article I read on Brooks's Law, which states that adding more engineers to a late software project makes it later. I don't think this is always true, but there is some logic behind having to maintain new networks of communication, much like in parallel processing. A misconception I had coming into this class was that if you can do something in parallel, you should, in order to get the task done faster; from today's lecture I learned that parallelizing the wrong tasks can actually make them take even longer.

ckk

This slide shows that a lot of programs are not candidates for parallel programming. Even if a program has sections we could parallelize, that doesn't automatically qualify it. We have to examine the program we are writing, along with all the possible communication costs, before deciding to parallelize it.
