Amdahl's Law states that a parallel process cannot achieve linear scaling because there is a portion of every program that must occur in serial. This portion determines a limit to the speedup one can hope to achieve by adding processors. If 80% of your program's time is spent in this serial portion, then adding an infinite number of processors can eliminate, at most, 20% of the program's total execution time, and that's assuming that the communications involved come at no cost.
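To make the limit concrete, here's a small Python sketch of Amdahl's formula (the function name and the example numbers are ours, chosen to match the 80%-serial case above):

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Speedup predicted by Amdahl's Law for a fixed-size problem."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# An 80%-serial program caps out near 1 / 0.8 = 1.25x speedup,
# no matter how many processors you throw at it:
print(amdahl_speedup(0.8, 1_000_000))  # just under 1.25
```

Even a million processors only trim the 20% that was parallelizable in the first place.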
With rendering, the percentages are typically inverted: 1% or less of a render's time is spent in the serial portion, so adding processors can give you a nearly linear speedup: going from 1 to 2 processors will give you nearly double the throughput, and going from 1 to 700 processors will give you nearly 700x the throughput. In other words, if you want to render 700 frames of animation, and have 700 computers to do it with, you can expect it to take roughly the amount of time it would take you to render the first frame on 1 computer. At ResPower, we routinely see this sort of speedup for animations.
But what about Split/Frame renders? Can you divide a single frame 700 ways and have it render 700 times faster? Unfortunately, for individual frames, the serial percentage involved is significantly higher than for animations, and you start to see a point of diminishing returns much sooner. As the charts show, you see a tremendous boost very quickly, but after about 100-200 computers, adding buckets doesn't speed things up very much.
Why is that? Well, John Gustafson published a rebuttal to Amdahl's Law that explains it fairly well. With a single Split/Frame render, the size of the problem remains unchanged, so your speedup follows Amdahl's Law. The shape of the curve looks almost exactly like a graph of Amdahl's equation:
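You can reproduce that curve shape yourself. Taking the "1% or less" serial fraction mentioned earlier as a round assumption, Amdahl's formula shows the knee in exactly the range described (the numbers below are from the formula, not from our measured test scene):

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Speedup predicted by Amdahl's Law for a fixed-size problem."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# A render that is 99% parallel: the gains flatten past 100-200 machines.
for n in (10, 100, 200, 700):
    print(n, round(amdahl_speedup(0.01, n), 1))
# 10 -> 9.2x, 100 -> 50.3x, 200 -> 66.9x, 700 -> 87.6x
```

Going from 100 to 200 machines adds about 17x; going from 200 to 700 adds only about 21x more, despite 500 extra machines.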
With animations, we tend to increase the size of the problem even more than the number of processors. As a result, you see a scaling mode where 400 frames finish in roughly the same amount of time as 1 frame. You haven't increased the speed of any individual frame - you've just done more frames. So, was Amdahl wrong, or Gustafson? ResPower's testing indicates that they're both right, depending on your perspective. If you keep the problem size constant and add processors, you see a curve that flattens out logarithmically, with the maximum practical benefit around 100-200 computers (at least for the test scene we used). If you grow the problem size in step with the number of processors, you see a nearly linear curve.
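Gustafson's scaled-speedup formula captures that second mode. A sketch, again assuming a 1% serial fraction per frame (our illustrative number, matching the earlier figure):

```python
def gustafson_speedup(serial_fraction, n_procs):
    """Scaled speedup (Gustafson): the workload grows with the machine count."""
    return serial_fraction + (1.0 - serial_fraction) * n_procs

# 700 machines, 1% serial overhead: nearly 700x the work finishes
# in the same wall-clock time.
print(gustafson_speedup(0.01, 700))  # ~693x
```

That near-linear 693x is the animation case: each machine renders its own frame, so adding machines adds throughput rather than shortening any single frame.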
As it turns out, Dr. Yuan Shi over at Temple University was able to prove mathematically that Gustafson and Amdahl were both saying exactly the same thing, once you account for the imprecision in their respective terminologies. Their formulae all work out to the same thing, with results that jibe with what we see at ResPower: a static problem size yields diminishing returns, but additional processors let you solve larger problems in the same amount of time. So thanks to Dr. Shi, we can now see that the universe makes sense again and peace has been restored.
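One way to see the reconciliation (a sketch of the standard fraction-conversion argument, not quoted from Dr. Shi's paper): Amdahl measures the serial fraction against the sequential runtime, while Gustafson measures it against the parallel runtime. Convert one fraction into the other and the two laws predict identical speedups:

```python
def amdahl(f, n):
    # f: serial fraction of the *sequential* runtime
    return 1.0 / (f + (1.0 - f) / n)

def gustafson(a, n):
    # a: serial fraction of the *parallel* runtime
    return a + (1.0 - a) * n

a, n = 0.01, 700
# Re-express Gustafson's fraction in Amdahl's terms:
f = a / (a + (1.0 - a) * n)
print(gustafson(a, n))  # ~693x
print(amdahl(f, n))     # ~693x - the same, to floating-point precision
```

Same phenomenon, two bookkeeping conventions: the disagreement was in the terminology, not the math.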