On Friday 22 October 2010 11:34:19 ow...@netptc.net wrote:

> In fact IIRC the additional overhead follows the square of the number
> of CPUs.  I seem to recall this was called Amdahl's Law after Gene
> Amdahl of IBM (and later his own company)

  Either that's not it, or there's more than one "Amdahl's law" --
the one I know is about diminishing returns from increasing effort
to parallelize code.  I don't know it in its pithy form, but
the gist of it is that you can only parallelize *some* of your
code, because every algorithm has a certain amount of set-up
and tear-down overhead that's inherently serial.  Even if you
perfectly parallelize the parallelizable part of the code,
so that it runs N times faster, the application as a whole will
run something less than N times faster, and as N gets large,
this "serial offset" contribution comes to dominate the
execution time, at which point additional investment in
parallelization is probably wasted.
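
  For what it's worth, a rough sketch of the pithy form (just an
illustration, assuming a 90% parallel fraction and using a made-up
helper name, amdahl_speedup): if a fraction p of the run time can be
spread over N CPUs and the rest stays serial, the overall speedup is
1/((1-p) + p/N), which can never exceed 1/(1-p).  In Python:

# Sketch of Amdahl's law: a fraction p of the work parallelizes over
# n CPUs, while the remaining (1 - p) stays serial.
def amdahl_speedup(p, n):
    """Overall speedup with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    p = 0.90  # assume 90% of the run time is perfectly parallelizable
    for n in (1, 2, 4, 8, 16, 64, 1024):
        print("N=%5d  speedup=%5.2f" % (n, amdahl_speedup(p, n)))
    # With p = 0.90 the speedup saturates below 10x, no matter how
    # many CPUs you throw at it.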

                                -- A.
-- 
Andrew Reid / rei...@bellatlantic.net

