On Sat, 21 Feb 2009, Mahmoud Payami Shabestari wrote:

MP> Hi Axel,
MP>
MP> > you should ask the people developing MPICH2.
MP> > this is not the right forum for this kind of question.
MP> >
MP> > one more remark that might also help people here not
MP> > to waste too much time on something pointless [...].
MP>
MP> It is the converse! I am trying to save the community's time. Here you are:
MP> -------------------------------------------------------------
MP> mpich2_with_those_switches_that_I_mentioned:
MP>
MP> PWSCF  :  9m46.25s CPU time,  12m 9.57s wall time
MP> ---------------------------------------------------------------
MP> openmpi-1.2.9 (default switches):
MP>
MP> PWSCF  :  32m 8.27s CPU time, 51m 23.38s wall time
MP> ----------------------------------------------------------------

mahmoud,

as i already wrote to you in private e-mail, the huge discrepancy
between wall time and cpu time casts severe doubt on the usefulness
of these numbers. as i stated in the previous mail, if the MPI
library takes up a significant fraction of the total time, there is
a problem elsewhere that needs to be investigated first. you are
quite obviously severely overloading your machine.
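to see what i mean, you can measure both clocks around the same
piece of work. below is a minimal, untested c sketch along those
lines; the busy loop is only a stand-in for a real workload like
your pw.x run:

-------------------------------------------------------------
/* compare process cpu time with wall clock time.
   compile with e.g.: gcc -O cpu_vs_wall.c -o cpu_vs_wall */
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

static double walltime(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + 1.0e-6 * tv.tv_usec;
}

int main(void)
{
    double  w0 = walltime();
    clock_t c0 = clock();

    /* placeholder workload; think of your actual pw.x run here */
    volatile double s = 0.0;
    for (long i = 0; i < 100000000L; ++i)
        s += 1.0e-9 * (double) i;

    double cpu  = (double)(clock() - c0) / CLOCKS_PER_SEC;
    double wall = walltime() - w0;

    printf("cpu: %.2fs   wall: %.2fs   wall/cpu: %.2f\n",
           cpu, wall, wall / cpu);
    return 0;
}
-------------------------------------------------------------

on a dedicated node the two numbers should be close for a compute
bound code. a wall time that is several times the cpu time means
the processes spend most of their life waiting (swapping, i/o, or
more processes than cores) rather than computing, and no compile
flags for the MPI library will fix that.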
MP> These are the results of an scf calculation for a 21-layered
MP> Al(110) slab with Ecut=22Ry, k_mesh=45x45x1, degauss=0.05 on a
MP> box with 2 x amd64 quad (8 cores).
MP> The memory used in mpich2 is less than half of that in (default
MP> switches) openmpi.

how did you determine that? you have to be careful to differentiate
between memory mapped into the address space, physical memory
actually used, and virtual memory in use. the respective columns
displayed in, e.g., top are VIRT, RES, and CODE+DATA.
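for example, on linux you can read the relevant per-process numbers
straight from /proc. a minimal sketch (linux only; the interpretation
in the comments is the important part):

-------------------------------------------------------------
/* print the numbers that top reports as VIRT and RES
   for the current process. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char  line[256];
    FILE *f = fopen("/proc/self/status", "r");

    if (!f) {
        perror("/proc/self/status");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        /* VmSize: mapped address space      (top: VIRT)
           VmRSS:  resident set, i.e. the
                   physical memory in use    (top: RES)  */
        if (!strncmp(line, "VmSize:", 7) ||
            !strncmp(line, "VmRSS:", 6))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}
-------------------------------------------------------------

an MPI library that, e.g., mmap()s large shared memory segments for
intra-node communication can look huge in VIRT while touching only a
small fraction of it, so comparing a single column between two MPI
installations can be quite misleading.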
cheers,
   axel.

MP> Cheers,
MP> mahmoud
MP>
MP> > in general, it is not a good idea to try and play with
MP> > compiler optimization in the hope of making an MPI implementation
MP> > (and thus communication) faster.
MP> >
MP> > if your application performance depends so much on the
MP> > optimization level used to compile your MPI library, then either
MP> > you have a very crappy MPI implementation, or your application
MP> > spends too much time in MPI calls. in the latter case, that
MP> > usually corresponds to a severe overload of your communication
MP> > hardware, and taking care of that would give you a much better
MP> > performance increase than any compiler switches will.
MP> >
MP> > basically, for almost _any_ system library (and that includes
MP> > ATLAS and FFTW, btw) it is best to stick to moderate optimization
MP> > (-O), as aggressive optimization may interfere with the
MP> > implemented algorithms and existing optimizations (e.g. both
MP> > ATLAS and FFTW include optimizations on the C-language and
MP> > algorithm level; higher compiler optimization can change the
MP> > semantics of your code and thus negate the optimizations
MP> > performed in the code). even more so, with many current
MP> > compilers aggressive optimization (-O3 or higher) incurs a very
MP> > high risk of the compiler miscompiling your library and thus
MP> > leading to uncontrollable crashes or wrong results. that is not
MP> > to say that there cannot be a measurable benefit in singular
MP> > cases, but over 10 years of experience in managing HPC resources
MP> > say that the overall effect is problematic. most of the time it
MP> > is hard enough to chase down application bugs that are real or
MP> > caused by compilers; you don't want to add tracking down
MP> > problems in your libraries to that.
MP> >
MP> > cheers,
MP> >    axel.
MP> >
MP> > MP> Any comment is highly appreciated.
MP> > MP>
MP> > MP> Best regards,
MP> > MP> Mahmoud Payami
MP> > MP> Phys. Group,
MP> > MP> Atomic Energy Org. of Iran

--
=======================================================================
Axel Kohlmeyer   akohlmey at cmm.chem.upenn.edu   http://www.cmm.upenn.edu
   Center for Molecular Modeling   --   University of Pennsylvania
Department of Chemistry, 231 S.34th Street, Philadelphia, PA 19104-6323
tel: 1-215-898-1582,  fax: 1-215-573-6233,  office-tel: 1-215-898-5425
=======================================================================
If you make something idiot-proof, the universe creates a better idiot.
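p.s.: to make the point from the quoted mail about aggressive
optimization negating algorithm-level optimizations a bit more
concrete, here is a minimal, illustrative c sketch of kahan
compensated summation. the compensation step is algebraically a
no-op, so flags that let the compiler reassociate floating point
expressions (-ffast-math, or the default fp model of some aggressive
compilers) may optimize it away and silently turn the kahan sum back
into a naive sum:

-------------------------------------------------------------
#include <stdio.h>

/* kahan summation: c carries the low-order bits that the
   plain additions into sum would otherwise lose. */
double kahan_sum(const double *x, long n)
{
    double sum = 0.0, c = 0.0;
    for (long i = 0; i < n; ++i) {
        double y = x[i] - c;
        double t = sum + y;
        c = (t - sum) - y;   /* algebraically zero! */
        sum = t;
    }
    return sum;
}

int main(void)
{
    /* 1000 small values after one huge one: a naive sum
       loses all of them to rounding. */
    double big = 1.0e16;
    double x[1001];
    x[0] = big;
    for (int i = 1; i <= 1000; ++i) x[i] = 1.0;

    double naive = 0.0;
    for (int i = 0; i <= 1000; ++i) naive += x[i];

    printf("naive: %.1f  kahan: %.1f  exact: %.1f\n",
           naive - big, kahan_sum(x, 1001) - big, 1000.0);
    return 0;
}
-------------------------------------------------------------

compiled with a plain -O this prints the correct 1000.0 for the
kahan sum; with reassociation enabled the compensation term can be
folded to zero and the "optimized" kahan sum is no better than the
naive one.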
