On 4/02/2012 5:20 AM, Christoph Klein wrote:
Hi all,
I am running a water/surfactant system with just under 100,000 atoms
using MPI on a local cluster, and I am not getting the scaling I was
hoping for. The cluster consists of 8-core Xeon nodes, and I'm running
GROMACS 4.5 with mpich2-gnu. I've tried running a few benchmarks using
100 ps runs and get the following results:
Threads:  8   16   24   32   40   48   56   64
hr/ns:    15  18   53   54   76   117  98   50
Are you sure you have the right performance number (and not ns/day or
something)?
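For what it's worth, if the figures really are hours per nanosecond, they convert to ns/day as 24 / (hr/ns) — a quick sanity check (the thread-to-figure pairing below is just the table above restated):

```python
# Convert the reported hours-per-nanosecond figures to ns/day.
# ns/day = 24 / (hr/ns), since 24 hours of wall time per day.
hr_per_ns = {8: 15, 16: 18, 24: 53, 32: 54, 40: 76, 48: 117, 56: 98, 64: 50}
for threads, hrns in hr_per_ns.items():
    print(f"{threads:2d} threads: {24 / hrns:.2f} ns/day")
```

On those numbers, 8 threads already gives the best throughput (1.60 ns/day), which is why the scaling looks negative rather than merely sub-linear.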
Each set of 8 threads is being sent to one node, and the 8-thread run
was performed without MPI. I have tried changing the -npme setting over
all permissible values on runs with 16 threads. In every instance the
results were worse than if I didn't specify anything.
The fact that I am getting negative scaling leads me to believe that
something is wrong with my setup. Any tips on what I could try?
The simplest explanation is that your network (or your MPI settings for
it) is not up to the job. Very low latency is required for runs that
span nodes; gigabit Ethernet is not good enough.
You could try installing OpenMPI, also.
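For finding a good -npme value systematically, GROMACS 4.5 ships g_tune_pme, which sweeps PME rank counts for you. A manual sweep might be scripted along these lines — a sketch only, where topol.tpr, the rank count, and the candidate -npme values are placeholders, and the commands are printed rather than executed:

```python
# Build mdrun command lines sweeping -npme for a 16-rank MPI run.
# Printed rather than executed here; feed them into a job script to run,
# then compare the Performance lines in the resulting .log files.
ranks = 16
for npme in (2, 4, 6, 8):  # candidate PME-only rank counts (placeholders)
    cmd = (f"mpirun -np {ranks} mdrun -s topol.tpr "
           f"-npme {npme} -deffnm bench_npme{npme}")
    print(cmd)
```

Whatever -npme wins, if every multi-node result stays below the single-node one, the interconnect remains the prime suspect.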
Mark
--
gmx-users mailing list [email protected]
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to [email protected].
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists