Hi everyone,
I was wondering if I could get some advice from folks familiar with
mpb-mpi about its parallel performance.
I have built mpb-mpi on a cluster running Scientific Linux 3.0; the
packages I am using are:
fftw-2.1.5, hdf5-1.6.5, libctl-3.0.2, openmpi-1.1.2,
guile-1.8.1, lapack-3.1.0, mpb-1.4.2
I have also built a serial version of MPB on the same cluster, using
the same packages but without MPI support.
I am doing some initial checks of the two builds using the
diamond.ctl file from the MPB manual. Both builds give results
consistent with the manual, but I am concerned about how long each
takes to complete:
- serial mpb takes 66 seconds (on the head node, which is identical
  to the compute nodes)
- mpb-mpi takes 75 seconds with 1 processor
- mpb-mpi takes 61 seconds with 2 processors
- mpb-mpi takes 57 seconds with 4 processors
That is pretty poor scaling with respect to the number of processors
involved. However, this is also a pretty small job. Is this normal for
mpb-mpi calculations on small jobs? Can I expect an improvement with
larger jobs, or should I dig into locally specific issues like the
interconnect or scheduling priorities?
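
For concreteness, here is a quick back-of-the-envelope speedup and
efficiency calculation from the timings above (plain guile scheme,
nothing mpb-specific; the only inputs are the numbers quoted above,
using the 1-processor mpb-mpi run as the baseline):

  (use-modules (ice-9 format))

  ;; wall-clock timings quoted above, in seconds; the 1-process
  ;; mpb-mpi run (75 s) is the baseline, so this measures MPI
  ;; scaling rather than serial-vs-MPI overhead
  (define t1 75.0)

  (for-each
   (lambda (run)
     (let* ((p (car run))            ; number of processors
            (t (cdr run))            ; measured runtime in seconds
            (speedup (/ t1 t))
            (efficiency (/ speedup p)))
       (format #t "~d procs: speedup ~,2f, efficiency ~,2f~%"
               p speedup efficiency)))
   '((2 . 61.0) (4 . 57.0)))

That works out to roughly 1.2x on 2 processors (about 61% parallel
efficiency) and 1.3x on 4 processors (about 33%). Presumably a larger
grid, e.g. something like (set! resolution 32) in the ctl file, would
be a fairer scaling test, since each process then has more work over
which to amortize the communication; but I would like to hear what
others see in practice.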
Thanks!
-nate
(By the way, Steven: this version of guile complains about deprecated
function calls, which I don't mind, but thought you might like to know
about:
scm_must_malloc is deprecated. Use scm_gc_malloc and scm_gc_free instead.
)