A few comments:

> module multiprocessing found 32 cores: using make_np = 24

However - all the external package builds on 'maint' are sequential:
fblaslapack, metis, parmetis, superlu_dist, hypre. [With 'master',
metis, parmetis, and hypre would use parallel builds.]

> Starting Configure Run at Thu Feb 26 01:01:07 2015
> Finishing Configure Run at Thu Feb 26 01:39:20 2015

So the wall-clock times do match.

BTW: wrt compilers - the optimization flags selected could affect their
performance. [And if they are constantly contacting the license server,
that would add up.]

I'll send in numbers/logs for similar builds on Mira [ALCF] and my
laptop in a follow-up e-mail.

Satish

On Thu, 26 Feb 2015, Nathan Collier wrote:

> Barry,
>
> * see attached configure log
> * times are the "real time" reported by the unix time command
> * all the packages should rebuild because the reconfigure script has a
>   --with-clean=1
> * not sure about the load while configuring; if you can tell me how to
>   check this I can run again and monitor it
>
> Nate
>
> On Thu, Feb 26, 2015 at 3:15 PM, Barry Smith <[email protected]> wrote:
> >
> >   Nathan,
> >
> >   Any idea what the load was on the compiler server during the
> >   configure/make?
> >
> >   Barry
> >
> > > On Feb 26, 2015, at 8:13 AM, Nathan Collier <[email protected]> wrote:
> > >
> > > Ok, so I built PETSc with metis, parmetis, superlu_dist, and hypre on
> > > Titan. The configure time is for the second configure - when you run
> > > the reconfigure script that the batch submission generates for you.
> > >
> > > configure: 38m15.488s
> > > make: 15m37.610s
> > >
> > > Nate
> > >
> > > On Thu, Feb 26, 2015 at 2:28 AM, Satish Balay <[email protected]> wrote:
> > > > I think we made some progress in improving build times.
> > > >
> > > > We have some of the external packages building using parallel make -
> > > > so that part is faster now. [Some of this stuff might be in master -
> > > > but not 3.5.]
> > > >
> > > > Some packages are still built sequentially [e.g. fblaslapack,
> > > > scalapack, superlu, etc.]. Fixing them can reduce build time
> > > > significantly [especially if the machine has many cores].
> > > >
> > > > The sequential configure [of all packages] is still the bottleneck.
> > > > All compiles [by PETSc configure] are done in TMPDIR to avoid NFS
> > > > I/O.
> > > >
> > > > Reducing the number of tests done in configure won't be easy. I have
> > > > a minor fix that avoids unnecessary compiles wrt external packages in
> > > > branch 'balay/update-configure-lib-search'.
> > > >
> > > > BTW: I don't have access to the Oak Ridge machines.
> > > >
> > > > Satish
> > > >
> > > > On Wed, 25 Feb 2015, Barry Smith wrote:
> > > > >
> > > > >   Shockingly this is not bad (though more than it should be); we've
> > > > >   seen times like an hour on the NERSC and ANL systems.
> > > > >
> > > > >   If you have time :-) could you run with metis, parmetis,
> > > > >   superlu_dist and hypre, with --with-debugging=0, and get the
> > > > >   times separately for configure and make?
> > > > >
> > > > >   Thanks
> > > > >
> > > > >   Barry
> > > > >
> > > > > > On Feb 25, 2015, at 9:05 PM, Nathan Collier <[email protected]> wrote:
> > > > > >
> > > > > > I have built on Titan, I can time my configure for more accurate
> > > > > > answers, but I would say it was on the order of 10-15 minutes.
> > > > > > That is with a metis/parmetis build. Is this the type of
> > > > > > experience you are looking for? More details?
> > > > > >
> > > > > > Nate
> > > > > >
> > > > > > On Wednesday, February 25, 2015, Victor Eijkhout <[email protected]> wrote:
> > > > > > >
> > > > > > > > On Feb 25, 2015, at 1:27 PM, Barry Smith <[email protected]> wrote:
> > > > > > > >
> > > > > > > > If you have accounts there and can reproduce slow
> > > > > > > > configure/make times
> > > > > > >
> > > > > > > Just let me know if you want a comparison to TACC machines.
> > > > > > >
> > > > > > > Starting with Ranger, we gave our build node its own file
> > > > > > > system because I regularly crashed Lustre with the PETSc build.
> > > > > > > No fault of PETSc.
> > > > > > >
> > > > > > > And I have no complaints about the configure/make speed, on
> > > > > > > either our build node or the regular user file system.
> > > > > > >
> > > > > > > Victor.
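For reference, the kind of timed run Nathan describes can be reproduced
roughly along these lines - a minimal sketch only, run from the top of the
PETSc source tree, leaving out the Titan-specific compiler wrappers and
batch submission. The --download-* package options and --with-debugging=0
are standard PETSc configure options; --with-clean=1 is the flag from the
reconfigure script that forces the external packages to rebuild.

    # Time the configure with the external packages discussed above,
    # then time the library build. The "real" line of each time output
    # is the wall-clock figure quoted in the thread.
    time ./configure --with-debugging=0 --with-clean=1 \
         --download-metis --download-parmetis \
         --download-superlu_dist --download-hypre
    # make may need PETSC_DIR/PETSC_ARCH set exactly as configure
    # prints at the end of its run.
    time make all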
