I found one major negative to this change: it assumes that my build is
being done to the exclusion of anything else on my computer. Unfortunately,
this is never true.

So my laptop ground itself into frozen silence, overheated to the point of
being too hot to touch, and had to have its battery yanked to stop the
runaway build. Not a good outcome.

I would suggest you default this heuristic to off, and let someone enable
parallel jobs if-and-only-if they want them. I hate to design for the lowest
common denominator, but this was a very nasty surprise.
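
To make the suggestion concrete, here is a rough Perl sketch of what an
opt-in version could look like in autogen.pl. The OMPI_PARALLEL_AUTOMAKE
variable is invented for this example, and the exact lstopo invocation and
output parsing are assumptions on my part, not what the committed changeset
actually does:

    # Sketch only: parallel automake becomes opt-in instead of on-by-default.
    # OMPI_PARALLEL_AUTOMAKE is a hypothetical variable made up for this
    # example; the lstopo invocation and output parsing are assumptions.
    use strict;
    use warnings;

    sub count_pus {
        # Ask lstopo (from hwloc) for a console rendering of the topology
        # and count the "PU" (processing unit) objects in its output.
        my $out = `lstopo --of console 2>/dev/null`;
        return undef if ($? != 0 || !defined($out));
        my $count = () = ($out =~ /\bPU\b/g);
        return ($count > 0) ? $count : undef;
    }

    # Respect an explicit user setting; otherwise stay serial unless the
    # user has explicitly opted in to parallel automake.
    if (!exists($ENV{AUTOMAKE_JOBS}) && exists($ENV{OMPI_PARALLEL_AUTOMAKE})) {
        my $jobs = count_pus();
        $jobs = 2 if (!defined($jobs));  # no lstopo: modest fallback
        $jobs = 4 if ($jobs > 4);        # cap, matching the committed heuristic
        $ENV{AUTOMAKE_JOBS} = $jobs;
    }

With something like that in place, anyone who wants the speedup can run
"OMPI_PARALLEL_AUTOMAKE=1 ./autogen.pl" (or just set AUTOMAKE_JOBS directly),
and everyone else keeps a responsive machine.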



On Wed, Sep 22, 2010 at 7:50 AM, Jeff Squyres <jsquy...@cisco.com> wrote:

> Some of you may be unaware that recent versions of automake can run in
> parallel.  That is, automake will run in parallel with a degree of (at most)
> $AUTOMAKE_JOBS.  This can speed up the execution time of autogen.pl quite
> a bit on some platforms.  On my cluster at Cisco, here are a few quick timings
> of the entire autogen.pl process (of which automake is the bottleneck):
>
> $AUTOMAKE_JOBS           Total wall time
> value                    of autogen.pl
> 8                        3:01.46
> 4                        2:55.57
> 2                        3:28.09
> 1                        4:38.44
>
> This is an older Xeon machine with 2 sockets, each with 2 cores.
>
> There's a nice performance jump from 1 to 2, and a smaller jump from 2 to
> 4.  4 and 8 are close enough to not matter.  YMMV.
>
> I just committed a heuristic to autogen.pl to setenv AUTOMAKE_JOBS if it
> is not already set (https://svn.open-mpi.org/trac/ompi/changeset/23788):
>
> - If lstopo is found in your $PATH, it runs lstopo and counts how many PUs
> (processing units) you have.  It'll set AUTOMAKE_JOBS to that number, up to
> a maximum of 4 (which is admittedly a further heuristic).
> - If lstopo is not found, it just sets AUTOMAKE_JOBS to 2.
>
> Enjoy.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
