[gmx-users] PME nodes

2012-05-31 Thread Ignacio Fernández Galván
Hi all,

There must be something I don't fully understand: when running grompp on a 
system, I get this:

  Estimate for the relative computational load of the PME mesh part: 0.32

Good, that's approximately 1/3, or a 2:1 PP:PME ratio, which is the recommended 
value for a dodecahedral box. But then I run the dynamics with mdrun_mpi -np 
8 (different cores in a single physical machine) and I get:

  Initializing Domain Decomposition on 8 nodes
  [...]
  Using 0 separate PME nodes

I would have expected at least 2 nodes (3:1, 0.25) to be used for PME, so 
there's obviously something wrong in my assumption.

Should I be looking somewhere in the output to find out why? Would it be better 
to try to get some dedicated PME node(s) (even in a single machine)?

Thanks,
Ignacio


Re: [gmx-users] PME nodes

2012-05-31 Thread Peter C. Lai
According to the manual, mdrun does not dedicate PME nodes unless -np > 11.
You can manually specify dedicated PME nodes using -npme, but whether this
will be faster on low-core systems is highly system dependent.

Also, the estimate given by grompp may not match the optimal split at runtime.
You'll have to repeat runs with different node combinations, or use g_tune_pme,
to discover the optimal PME:PP ratio for your particular system.
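
As a rough sketch (assuming a 4.5.x installation and the same 8 MPI processes
as above; the file name is a placeholder), a g_tune_pme run could look like:

  g_tune_pme -np 8 -s topol.tpr -launch

It runs short benchmarks with different numbers of dedicated PME nodes (and,
optionally, scaled cut-offs and grids), reports the measured performance of
each setting, and with -launch starts the full run using the fastest one.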

On 2012-05-31 04:11:15AM -0700, Ignacio Fernández Galván wrote:
 Hi all,
 
 There must be something I don't fully understand: when running grompp on a 
 system, I get this:
 
   Estimate for the relative computational load of the PME mesh part: 0.32
 
 Good, that's approximately 1/3, or a 2:1 PP:PME ratio, which is the 
 recommended value for a dodecahedral box. But then I run the dynamics with 
 mdrun_mpi -np 8 (different cores in a single physical machine) and I get:
 
   Initializing Domain Decomposition on 8 nodes
   [...]
   Using 0 separate PME nodes
 
 I would have expected at least 2 nodes (3:1, 0.25) to be used for PME, so 
 there's obviously something wrong in my assumption.
 
 Should I be looking somewhere in the output to find out why? Would it be 
 better to try to get some dedicated PME node(s) (even in a single machine)?
 
 Thanks,
 Ignacio

-- 
==
Peter C. Lai                | University of Alabama-Birmingham
Programmer/Analyst          | KAUL 752A
Genetics, Div. of Research  | 705 South 20th Street
p...@uab.edu                | Birmingham AL 35294-4461
(205) 690-0808              |
==



Re: [gmx-users] PME nodes

2012-05-31 Thread Mark Abraham

On 31/05/2012 9:11 PM, Ignacio Fernández Galván wrote:

Hi all,

There must be something I don't fully understand: when running grompp on a 
system, I get this:

   Estimate for the relative computational load of the PME mesh part: 0.32

Good, that's approximately 1/3, or a 2:1 PP:PME ratio, which is the recommended value for 
a dodecahedral box. But then I run the dynamics with mdrun_mpi -np 8 
(different cores in a single physical machine) and I get:

   Initializing Domain Decomposition on 8 nodes
   [...]
   Using 0 separate PME nodes

I would have expected at least 2 nodes (3:1, 0.25) to be used for PME, so 
there's obviously something wrong in my assumption.

Should I be looking somewhere in the output to find out why? Would it be better 
to try to get some dedicated PME node(s) (even in a single machine)?


Generally mdrun does pretty well, given the constraints you've set for 
it. Here, you've implicitly let it choose (with mdrun -npme -1), and 
below a minimum number of nodes (10, in 4.5.5) it doesn't bother with 
separate PME nodes, since the book-keeping would cost more than it saves. 
Otherwise, you can investigate the reasons for the choices mdrun made in 
the .log file output.


You can try mdrun -npme 2 or 3 if you like, but it will likely not be 
faster and might even refuse to run. See also manual section 3.17.
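
As a concrete illustration (a sketch only; the file name is a placeholder),
forcing two dedicated PME nodes out of the eight would be

  mdrun_mpi -np 8 -npme 2 -deffnm md

i.e. 6 PP + 2 PME nodes, a 3:1 split. Comparing the ns/day reported at the
end of the .log file with the -npme 0 run shows whether the split pays off.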


Mark


Re: [gmx-users] PME nodes

2009-06-08 Thread Carsten Kutzner

On Jun 6, 2009, at 1:20 PM, XAvier Periole wrote:



On Jun 6, 2009, at 1:08 PM, Justin A. Lemkul wrote:




XAvier Periole wrote:

Dear all,

I am having trouble finding the best balance between the PME CPUs and the 
rest. I played with the -rdd, -rcon and -npme options, but nothing really 
stands out as clearly best.

I'd appreciate it if some of you could post your experience in this matter, 
i.e. the number of PME nodes compared to the total number of CPUs used.

I think this info has been discussed recently on the list, but the archive 
is not accessible.

It may matter that I have a system containing about 7 atoms, a protein in 
a bilayer.


Some advice that I got from Berk long ago has worked beautifully  
for me.  You want a 3:1 PP:PME balance for a regular triclinic cell  
(grompp will report the relative PME load as 25% if your parameters  
create such a balance), 2:1 for an octahedron.  My scaling has been  
great using this information, without having to alter -rdd, -rcon,  
etc.


Thanks for the info. I got more or less to that ratio, although a 2:1 
PP:PME ratio is sometimes better.

However, my problem now is getting 256 CPUs to run more efficiently (ns/day) 
than 128 CPUs. Communication becomes a limiting factor ... I can't get it to 
go faster! The system might be too small, but I'm not sure.


I'll take a look at the CVS tool.

There is also a version for gromacs 4.0.x available for download at
www.mpibpc.mpg.de/home/grubmueller/projects/MethodAdvancements/Gromacs/

Regards,
   Carsten



--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne






[gmx-users] PME nodes

2009-06-08 Thread Andrei Neamtu
Hello,

How can I compile the g_tune_pme program available at:
http://www.mpibpc.mpg.de/home/grubmueller/projects/MethodAdvancements/Gromacs/

Many thanks,
Andrei


Re: [gmx-users] PME nodes

2009-06-08 Thread Carsten Kutzner

Hi,

it's written at the beginning of the .c file:

 * You can compile this tool using the Gromacs Makefile from the
 * share/gromacs/template directory, just replace 'template' by 'g_tune_pme'
 * where needed. To enable shell completions for g_tune_pme, just
 * copy the provided completion.* files to your Gromacs bin directory.
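
In practice (a sketch only; the installation prefix below is an assumption,
adjust it to wherever your Gromacs 4.0.x is installed) that amounts to:

  cp g_tune_pme.c /usr/local/gromacs/share/gromacs/template/
  cd /usr/local/gromacs/share/gromacs/template
  # edit the Makefile, replacing 'template' with 'g_tune_pme' where it appears
  make

so the tool is built against the installed Gromacs libraries, just like the
template program.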

Carsten


On Jun 8, 2009, at 12:48 PM, Andrei Neamtu wrote:


Hello,

How can I compile the g_tune_pme program available at:
http://www.mpibpc.mpg.de/home/grubmueller/projects/MethodAdvancements/Gromacs/

Many thanks,
Andrei





[gmx-users] PME nodes

2009-06-06 Thread XAvier Periole


Dear all,

I am having trouble finding the best balance between the PME CPUs and the 
rest. I played with the -rdd, -rcon and -npme options, but nothing really 
stands out as clearly best.

I'd appreciate it if some of you could post your experience in this matter, 
i.e. the number of PME nodes compared to the total number of CPUs used.

I think this info has been discussed recently on the list, but the archive 
is not accessible.

It may matter that I have a system containing about 7 atoms, a protein in 
a bilayer.


Best,
XAvier.


Re: [gmx-users] PME nodes

2009-06-06 Thread David van der Spoel

XAvier Periole wrote:


Dear all,

I am having trouble finding the best balance between the PME CPUs and the 
rest. I played with the -rdd, -rcon and -npme options, but nothing really 
stands out as clearly best.

I'd appreciate it if some of you could post your experience in this matter, 
i.e. the number of PME nodes compared to the total number of CPUs used.

I think this info has been discussed recently on the list, but the archive 
is not accessible.

It may matter that I have a system containing about 7 atoms, a protein in 
a bilayer.


g_tune_pme in the CVS version is your friend here. Carsten Kutzner is the 
author of this program.




Best,
XAvier.



--
David.

David van der Spoel, PhD, Professor of Biology
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596,  75124 Uppsala, Sweden
phone:  46 18 471 4205  fax: 46 18 511 755
sp...@xray.bmc.uu.se    sp...@gromacs.org    http://folding.bmc.uu.se



Re: [gmx-users] PME nodes

2009-06-06 Thread Justin A. Lemkul



XAvier Periole wrote:


Dear all,

I am having trouble finding the best balance between the PME CPUs and the 
rest. I played with the -rdd, -rcon and -npme options, but nothing really 
stands out as clearly best.

I'd appreciate it if some of you could post your experience in this matter, 
i.e. the number of PME nodes compared to the total number of CPUs used.

I think this info has been discussed recently on the list, but the archive 
is not accessible.

It may matter that I have a system containing about 7 atoms, a protein in 
a bilayer.




Some advice that I got from Berk long ago has worked beautifully for me.  You 
want a 3:1 PP:PME balance for a regular triclinic cell (grompp will report the 
relative PME load as 25% if your parameters create such a balance), 2:1 for an 
octahedron.  My scaling has been great using this information, without having to 
alter -rdd, -rcon, etc.
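
To put numbers on it (a hypothetical example; the file name is a placeholder):
on 16 cores a 3:1 PP:PME split means 12 PP + 4 PME nodes, e.g.

  mdrun_mpi -np 16 -npme 4 -deffnm md

while 2:1 has no exact integer split on 16 cores, so you would benchmark
-npme 5 (11:5) against -npme 6 (10:6) and keep whichever gives more ns/day.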


-Justin


Best,
XAvier.



--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] PME nodes

2009-06-06 Thread XAvier Periole


On Jun 6, 2009, at 1:08 PM, Justin A. Lemkul wrote:




XAvier Periole wrote:

Dear all,

I am having trouble finding the best balance between the PME CPUs and the 
rest. I played with the -rdd, -rcon and -npme options, but nothing really 
stands out as clearly best.

I'd appreciate it if some of you could post your experience in this matter, 
i.e. the number of PME nodes compared to the total number of CPUs used.

I think this info has been discussed recently on the list, but the archive 
is not accessible.

It may matter that I have a system containing about 7 atoms, a protein in 
a bilayer.


Some advice that I got from Berk long ago has worked beautifully for  
me.  You want a 3:1 PP:PME balance for a regular triclinic cell  
(grompp will report the relative PME load as 25% if your parameters  
create such a balance), 2:1 for an octahedron.  My scaling has been  
great using this information, without having to alter -rdd, -rcon,  
etc.


Thanks for the info. I got more or less to that ratio, although a 2:1 
PP:PME ratio is sometimes better.

However, my problem now is getting 256 CPUs to run more efficiently (ns/day) 
than 128 CPUs. Communication becomes a limiting factor ... I can't get it to 
go faster! The system might be too small, but I'm not sure.


I'll take a look at the CVS tool.

XAvier.


-Justin


Best,
XAvier.


--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin






Re: [gmx-users] PME nodes

2009-06-06 Thread David van der Spoel

XAvier Periole wrote:


On Jun 6, 2009, at 1:08 PM, Justin A. Lemkul wrote:




XAvier Periole wrote:

Dear all,

I am having trouble finding the best balance between the PME CPUs and the 
rest. I played with the -rdd, -rcon and -npme options, but nothing really 
stands out as clearly best.

I'd appreciate it if some of you could post your experience in this matter, 
i.e. the number of PME nodes compared to the total number of CPUs used.

I think this info has been discussed recently on the list, but the archive 
is not accessible.

It may matter that I have a system containing about 7 atoms, a protein in 
a bilayer.


Some advice that I got from Berk long ago has worked beautifully for 
me.  You want a 3:1 PP:PME balance for a regular triclinic cell 
(grompp will report the relative PME load as 25% if your parameters 
create such a balance), 2:1 for an octahedron.  My scaling has been 
great using this information, without having to alter -rdd, -rcon, etc.


Thanks for the info. I got more or less to that ratio, although a 2:1 
PP:PME ratio is sometimes better.

However, my problem now is getting 256 CPUs to run more efficiently (ns/day) 
than 128 CPUs. Communication becomes a limiting factor ... I can't get it to 
go faster! The system might be too small, but I'm not sure.


That depends a lot on the interconnect and the ratio of atoms per node. You 
may also want to play with the cut-offs; the tool will help you with this.
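
In practice, "playing with the cut-offs" means scaling the real-space cut-off
and the PME grid spacing together, which keeps the Ewald accuracy roughly
constant while shifting work from the PME nodes to the PP nodes. A sketch of
the corresponding .mdp changes (the numbers are purely illustrative):

  rcoulomb        = 1.2    ; increased from 1.0 (illustrative values only)
  fourierspacing  = 0.144  ; coarsened from 0.12 by the same factor of 1.2

This is exactly the kind of trade-off the tuning tool explores automatically.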



I'll take a look at the CVS tool.

XAvier.


-Justin


Best,
XAvier.


--


Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin







--
David.

David van der Spoel, PhD, Professor of Biology
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596,  75124 Uppsala, Sweden
phone:  46 18 471 4205  fax: 46 18 511 755
sp...@xray.bmc.uu.se    sp...@gromacs.org    http://folding.bmc.uu.se
