If you are going to run on a single node, there is no need for MPI nowadays: mdrun uses all the cores it can find anyway, through its built-in thread parallelization. If you need to split your calculation over several machines, however, you will need MPI.
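Something like the following should work. This is just a sketch: I am assuming a typical installation where the default (thread-MPI) binary is called mdrun and the MPI-enabled build is called mdrun_mpi, and that your run input file is topol.tpr; adjust the names and core counts to your system.

    # single node, 64 cores: the default mdrun binary is enough
    mdrun -nt 64 -s topol.tpr

    # several nodes: launch an MPI-enabled build through your MPI runtime,
    # e.g. two 64-core nodes = 128 processes
    mpirun -np 128 mdrun_mpi -s topol.tpr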
Best,
Erik

On 15 Mar 2012, at 04:50, cuong nguyen wrote:

> Dear Gromacs users,
>
> I am preparing to run my simulations on the supercomputer, on a single node
> with 64 CPUs. Although I have seen the Gromacs Manual suggest using MPI to
> parallelize, I still haven't understood how to use it or which commands I
> have to use. Please help me with this.
>
> Many thanks and regards,
>
> Cuong

-----------------------------------------------
Erik Marklund, PhD
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 6688
fax: +46 18 511 755
[email protected]
http://www2.icm.uu.se/molbio/elflab/index.html

