Amdahl's law states that a parallel process cannot achieve linear scaling because a portion of every program must run serially. This portion limits the speedup you can hope to achieve by adding processors. If 80% of your program's time is spent in this serial portion, then adding an infinite number of processors can eliminate, at most, 20% of the program's total execution time, and that assumes the communication involved comes at no cost.
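The arithmetic behind that limit is easy to sketch in a few lines of Python, using the 80%-serial example from the paragraph above:

```python
def amdahl_speedup(serial_fraction, processors):
    """Amdahl's law: speedup = 1 / (s + p/n)."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# 80% serial: even a huge processor count barely helps.
for n in (1, 2, 10, 1_000_000):
    print(f"{n:>9} processors: {amdahl_speedup(0.8, n):.3f}x")

# The limit as n grows is 1/0.8 = 1.25x; in other words, at most
# 20% of the total execution time can ever be eliminated.
```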
With rendering, the percentages are typically inverted: 1% or less of a render's time is spent in the serial portion, so adding processors gives you a nearly linear speedup: going from 1 to 2 processors will give you nearly double the throughput, and going from 1 to 700 processors will give you nearly 700x the throughput. In other words, if you want to render 700 frames of animation and have 700 computers to do it with, you can expect it to take roughly the amount of time it would take you to render the first frame on 1 computer. At ResPower, we routinely see this sort of speedup for animations.
But what about split-frame renders? Can you divide a single frame 700 ways and have it render 700 times faster? Unfortunately, for individual frames, the serial percentage involved is significantly higher than for animations, and you hit a point of diminishing returns much sooner. As the charts show, you see a tremendous boost very quickly, but after about 100-200 computers, adding buckets doesn't speed things up very much.
Why is that? Well, John Gustafson created a rebuttal to Amdahl that explains it fairly well. With a single split-frame render, the size of the problem remains unchanged, and so your speedup follows Amdahl's law. The shape of the curve looks almost exactly like a graph of Amdahl's equation:
Speedup = 1 / (s + p/n), where s is the serial fraction, p = 1 - s is the parallel fraction, and n is the number of processors.
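Plugging a render-like serial fraction of 1% into that equation shows the plateau described above. This is just a sketch of the curve, not our actual benchmark data:

```python
def speedup(s, n):
    # Amdahl's law with serial fraction s and n processors.
    return 1.0 / (s + (1.0 - s) / n)

s = 0.01  # 1% serial, in the ballpark for a single split-frame render
for n in (1, 10, 50, 100, 200, 700):
    print(f"{n:4d} processors -> {speedup(s, n):6.1f}x")

# Speedup climbs quickly, then flattens: roughly 50x at 100
# processors, 67x at 200, and it can never exceed 1/0.01 = 100x.
```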
With animations, we tend to increase the size of the problem more than the number of processors. As a result, you see a scaling mode where 400 frames finish in roughly the same amount of time as 1 frame. You haven't increased the speed of any individual frame - you've just done more frames. So, was Amdahl wrong, or Gustafson? ResPower's testing indicates that they're both right, depending on your perspective. If you keep the problem size constant and add processors, you see a very logarithmically-shaped curve, with maximum benefit around 100-200 computers (at least for the test scene we used). If you change the problem size in conjunction with the number of processors, you see a nearly linear curve.
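Gustafson's scaled speedup captures the animation case: the problem grows with the machine, and the serial fraction s' is measured on the parallel run rather than the serial one. A minimal sketch:

```python
def gustafson_speedup(serial_fraction, processors):
    """Gustafson's law: scaled speedup = s' + p' * n,
    where s' is the serial fraction of the *parallel* run."""
    return serial_fraction + (1.0 - serial_fraction) * processors

# Grow the job along with the machine (e.g. more frames per render):
for n in (1, 2, 100, 700):
    print(f"{n:4d} processors -> {gustafson_speedup(0.01, n):7.2f}x")

# Nearly linear: 700 processors do about 693x the work in the same
# wall-clock time, matching the animation behaviour described above.
```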
As it turns out, Dr. Yuan Shi over at Temple University was able to prove mathematically that Gustafson and Amdahl were both saying exactly the same thing, once you account for the imprecision in their respective terminologies. Their formulae all work out to the same thing, with results that jibe with what we see at ResPower: a static problem size yields diminishing returns, but additional processors let you solve larger problems in the same amount of time. So thanks to Dr. Shi, we can now see that the universe makes sense again and peace has been restored.
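The reconciliation is easy to check numerically. If s' is the serial fraction observed on an n-processor run, the equivalent serial fraction of a one-processor run of the same job is s = s' / (s' + n * p'), and feeding that s into Amdahl's formula reproduces Gustafson's number exactly. A quick sanity check (this is our own derivation for illustration, not Dr. Shi's notation):

```python
def amdahl(s, n):
    # Fixed problem size: speedup = 1 / (s + p/n).
    return 1.0 / (s + (1.0 - s) / n)

def gustafson(s_prime, n):
    # Scaled problem size: speedup = s' + p' * n.
    return s_prime + (1.0 - s_prime) * n

s_prime, n = 0.01, 700
p_prime = 1.0 - s_prime

# Convert the parallel-run serial fraction s' into the
# serial-run fraction s, then compare the two laws.
s = s_prime / (s_prime + n * p_prime)
print(amdahl(s, n), gustafson(s_prime, n))  # the two agree
```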