If the max speedup with an application that is only 0.1% serial is as low as 10,000x, I am curious to know how this compares to the average application run on supercomputers of this scale. Are processes even lower than 0.1% serial, or are they more towards 1-10% or higher? How do engineers weigh the business cost of running with poor parallel code versus the cost of the extra effort and time needed to rewrite the code to be more parallel?
juliewang
@tennis_player I believe the max speedup achievable per Amdahl's law is only 1/(0.001) = 1000x, not 10,000x.
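For concreteness, here is a quick Python sketch (the function name and sample processor counts are mine) that plugs a 0.1% serial fraction into Amdahl's formula and shows the speedup saturating near 1/s = 1000x:

def amdahl_speedup(serial_fraction, n_processors):
    # Amdahl's law: speedup = 1 / (s + (1 - s) / N)
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_processors)

# With s = 0.001, the speedup approaches 1/s = 1000x as N grows:
for n in [100, 1000, 10000, 1000000]:
    print(n, round(amdahl_speedup(0.001, n), 1))
# prints roughly: 100 91.0, 1000 500.3, 10000 909.2, 1000000 999.0

Note that even a million processors only get you to ~999x, so no finite machine ever reaches the 1000x ceiling.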
alexder
If you think about it, running a program serially on a large parallel machine means you are using only a tiny fraction of the machine's compute capacity. Moreover, @tennis_player, see Amdahl's law: it is rather surprising how a very small serial fraction can drastically cap the achievable speedup.