On 3/04/2012 11:58 PM, Paolo Franz wrote:
Hello!
I am wondering if the domain decomposition chosen by the code, once
the number of CPUs dedicated to PME is fixed, can be overridden
somehow. That is, whether there exists a reciprocal-space equivalent
of "-dd" for the real-space domain decomposition.
The problem is that I am trying to get good scaling for a large
system on 1000-2000 CPUs. If, for instance, I have 128 CPUs on PME
and a 128^3 grid, I would like an 8x16x1 decomposition on PME, which
should be perfectly fine with the parallelization scheme, if I
understand it correctly. Unfortunately, the default I get is 128x1x1,
which is inevitably very inefficient. Is there anything I can do?
The automatic choice depends on a large number of factors. A big one
is optimizing the PP<->PME communication load. Large common factors in
the divisions of processors into grids are key, so long as those
factors can lead to useful hardware mappings. In particular, see the
last paragraph of section 3.17.5 of the manual.
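The common-factor point can be sketched numerically: a 2D layout of
nx x ny PME ranks is only a candidate if nx and ny both divide the FFT
grid dimensions, which is why 8x16 works for a 128^3 grid. A minimal
Python illustration of that constraint (an assumption-laden sketch, not
GROMACS's actual selection heuristic):

```python
# Illustrative sketch only -- not the real GROMACS decomposition code.
# Enumerate 2D layouts of PME ranks (nx * ny == npme) whose dimensions
# evenly divide the PME grid, as an 8x16 layout does for a 128^3 grid.

def pme_decompositions(npme, grid_x, grid_y):
    """Return (nx, ny) pairs with nx*ny == npme that divide the grid."""
    layouts = []
    for nx in range(1, npme + 1):
        if npme % nx:
            continue
        ny = npme // nx
        if grid_x % nx == 0 and grid_y % ny == 0:
            layouts.append((nx, ny))
    return layouts

print(pme_decompositions(128, 128, 128))
# 8x16 and 16x8 are among the valid layouts; 128x1 is also valid,
# but a flatter rank grid keeps the per-rank slabs closer to square.
```

The actual choice mdrun makes additionally weighs how these layouts map
onto the hardware and onto the PP rank grid, per the manual section
cited above.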
Mark
--
gmx-users mailing list [email protected]
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to [email protected].
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists