Te, Jerez A., Ph.D. wrote:
Hi Mark,

Thank you for your reply. Just to confirm: is mdrun_mpi still used in
GROMACS 4.5.3? The GROMACS manual (Appendix A.5, Running GROMACS in
parallel) suggests using "mdrun -np 8 -s topol -v -N 8" to run on a single
machine with multiple (8) processors and "mpirun -p goofus,doofus,fred 10
mdrun -s topol -v -N 30" to run on three machines with ten processors each.
You probably already know this, but I am confused as to whether these
commands are correct

Those commands are outdated. I am updating the manual to fix this.

for running parallel simulations (hence my assumption that mdrun_mpi is
no longer applicable in GROMACS 4.5.3). So far I have not been successful
in running parallel simulations; as I mentioned before, the two options
above only start X identical serial processes. My bottom line is that I
want to run a GROMACS simulation in parallel, whether on one machine with
a number of processors or across different machines or nodes with a
specified number of processors.


The exact invocation depends on the nature of your cluster setup. For instance, our aging supercomputer does not support threading, so we have to compile with --enable-mpi to produce an mdrun_mpi binary that is launched via mpirun. My laptop, however, supports threading, so I can run over both of its cores with mdrun -nt 2; no external MPI library is necessary.
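Concretely, the two modes look something like this (a sketch only; the
binary names, process counts, and the topol input files are placeholders
for whatever your installation actually provides):

  # MPI-based parallelism, e.g. across cluster nodes: an MPI-enabled
  # binary started by the MPI launcher
  mpirun -np 8 mdrun_mpi -deffnm topol

  # thread-based parallelism within one machine: the plain binary,
  # no mpirun involved
  mdrun -nt 2 -deffnm topol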

I assume that in order to get mdrun_mpi, GROMACS has to be recompiled
using the --enable-mpi option, because currently our GROMACS bin directory
does not contain mdrun_mpi. Our system administrator said that all
MPI-related options were enabled when compiling GROMACS, and still
mdrun_mpi is not among the installed executables. Please help.


Just because your /bin subdirectory does not have mdrun_mpi does not necessarily mean the binary was not compiled with MPI support; the default _mpi suffix can be suppressed at compile time. Based on what you have reported, though, it sounds like you indeed have only a serial mdrun. If your build has threading support, the -nt option will be listed when you issue mdrun -h.
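Two quick checks, and the rebuild route if you do need an MPI binary (a
sketch; I am assuming the 4.5 autoconf build here, and your site's
configure options may well differ):

  # does your mdrun have threading? -nt should appear in the help text
  mdrun -h 2>&1 | grep -e '-nt'

  # rebuild with MPI support and the conventional _mpi suffix
  ./configure --enable-mpi --program-suffix=_mpi
  make mdrun
  make install-mdrun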

-Justin

Thanks, JT


-----Original Message-----
From: [email protected] on behalf of Mark Abraham
Sent: Mon 12/13/2010 4:38 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] mpi run in Gromacs 4.5.3

On 14/12/2010 7:48 AM, Te, Jerez A., Ph.D. wrote:
Hi, I have been trying to run GROMACS 4.5.3 parallel simulations using
OpenMPI 1.4.2. From my understanding, mdrun_mpi is not used in this
version of GROMACS.


I don't understand what (you think) you mean. You can use thread-based parallelism for processors that share common silicon, or MPI-based parallelism if a network connection is involved, but not both. The latter is
named mdrun_mpi by default.

Our system administrator told me that all MPI-related options had been
turned on while installing GROMACS. With either command: mdrun -np X
-deffnm topol -N X (run on an 8-CPU node)


This won't run in parallel at all; mdrun ignores -np and -N.

or mpirun -np X mdrun -deffnm topol -N X (submitted on a number of nodes depending on availability)


This will get you the symptoms below, but -N is still ignored.

I get X identical simulations instead of a parallel run. If X=4, I get 4
identical simulations (the same simulation run 4 times) instead of 1
parallel simulation on 4 processors. The performance of a single-processor
run and the X=4 run is also similar (no marked difference in the time it
takes to finish the simulation). Has anyone encountered this problem?


You're using a serial mdrun. Use a parallel mdrun.
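In concrete terms, something like the following, assuming your parallel
binary carries the usual _mpi suffix:

  # 4 MPI processes cooperating on one simulation, not 4 copies of it
  mpirun -np 4 mdrun_mpi -deffnm topol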

Mark



--
========================================

Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

========================================