Re: [gmx-users] Low cpu utilization

2016-10-17 Thread Mark Abraham
Hi,

As I said yesterday, you can't tell that about any program unless you look
at its output and how it performs with different amounts of resources.
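
For mdrun that output is mainly the md.log file; a couple of quick checks
(file name assumed to be the default) would be something like:

grep -i "cutoff-scheme" md.log     # which scheme the run actually uses
grep -B3 "Performance:" md.log     # ns/day summary, written when the run ends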

Mark



Re: [gmx-users] Low cpu utilization

2016-10-17 Thread Mahmood Naderan
The problem is that I cannot tell whether GROMACS (or MPI) is using the
resources correctly. Is there any way to see whether there is a bottleneck
causing such low utilization?

Regards,
Mahmood





Re: [gmx-users] Low cpu utilization

2016-10-17 Thread Mahmood Naderan
It is interesting that I specified Verlet, but the log still warns about the
group scheme.

mahmood@cluster:LPN$ grep -r cut-off .
./mdout.mdp:; cut-off scheme (group: using charge groups, Verlet: particle
based cut-offs)
./mdout.mdp:; nblist cut-off
./mdout.mdp:; long-range cut-off for switched potentials
./mdout.mdp:; cut-off lengths
./mdout.mdp:; Extension of the potential lookup tables beyond the cut-off
mahmood@cluster:LPN$ grep -r Verlet .
./mdout.mdp:; cut-off scheme (group: using charge groups, Verlet: particle
based cut-offs)
./mdout.mdp:cutoff-scheme= Verlet
./mdout.mdp:; Allowed energy drift due to the Verlet buffer in kJ/mol/ps
per atom,
./mdout.mdp:coulomb-modifier = Potential-shift-Verlet
./mdout.mdp:vdw-modifier = Potential-shift-Verlet
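
mdout.mdp only records what grompp last processed; the running job takes its
settings from the .tpr it was started with. A quick check of the .tpr (file
name assumed) would be something like:

gmx dump -s topol.tpr 2>/dev/null | grep -i "cutoff-scheme"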




Regards,
Mahmood


Re: [gmx-users] Low cpu utilization

2016-10-17 Thread Mahmood Naderan
Here is what I did...
I changed the cutoff-scheme to Verlet as suggested by
http://www.gromacs.org/Documentation/Cut-off_schemes#How_to_use_the_Verlet_scheme


Then I followed two scenarios:

1) On the frontend, where gromacs and openmpi have been installed, I ran:

mahmood@cluster:LPN$ date
Mon Oct 17 11:06:40 2016
mahmood@cluster:LPN$ /share/apps/computer/openmpi-2.0.1/bin/mpirun -np 2
/share/apps/chemistry/gromacs-5.1/bin/mdrun_mpi -v
...
...
starting mdrun 'Protein in water'
5000 steps,  5.0 ps.
step 0
[cluster.scu.ac.ir:28044] 1 more process has sent help message
help-mpi-btl-base.txt / btl:no-nics
[cluster.scu.ac.ir:28044] Set MCA parameter "orte_base_help_aggregate" to 0
to see all help / error messages
imb F  0% step 100, will finish Tue Dec  6 11:41:44 2016
imb F  0% step 200, will finish Sun Dec  4 23:06:02 2016
^Cmahmood@cluster:LPN$ date
Mon Oct 17 11:07:01 2016


So, roughly 21 seconds for about 200 steps. When I checked with the 'top'
command, two CPU cores were at 100%. The full log is available at
http://pastebin.com/CzViEmRb



2) I specified two nodes instead of the frontend. Each node has at least one
free core, so running one process on each of them should be comparable to the
previous scenario.

mahmood@cluster:LPN$ cat hosts.txt
compute-0-2
compute-0-1
mahmood@cluster:LPN$ date
Mon Oct 17 11:12:34 2016
mahmood@cluster:LPN$ /share/apps/computer/openmpi-2.0.1/bin/mpirun -np 2
--hostfile hosts.txt /share/apps/chemistry/gromacs-5.1/bin/mdrun_mpi -v
...
...
starting mdrun 'Protein in water'
5000 steps,  5.0 ps.
step 0
^CKilled by signal 2.
Killed by signal 2.
mahmood@cluster:LPN$ date
Mon Oct 17 11:15:47 2016

So, roughly 3 minutes without any progress! When I ssh'ed into compute-0-2,
the 'top' command showed

23153 mahmood   39  19  190m  15m 6080 R  1.3  0.0   0:00.39 mdrun_mpi
23154 mahmood   39  19  190m  16m 5700 R  1.3  0.0   0:00.39 mdrun_mpi

That is very, very low CPU utilization. Please see the log at
http://pastebin.com/MZbjK4vD
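
The "btl:no-nics" help message in the first run suggests Open MPI did not find
the interconnect it was looking for and is picking a transport on its own. If
the cross-node hang is a transport/interface problem, one thing I can try
(eth0 is only a guess for the interface name here) is forcing plain TCP:

/share/apps/computer/openmpi-2.0.1/bin/mpirun -np 2 --hostfile hosts.txt \
    --mca btl tcp,self,vader --mca btl_tcp_if_include eth0 \
    /share/apps/chemistry/gromacs-5.1/bin/mdrun_mpi -v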



Any idea is welcomed.




Regards,
Mahmood

Re: [gmx-users] Low cpu utilization

2016-10-16 Thread Mark Abraham
Hi,

None, but with one of them you asked cmake to build only mdrun_mpi, so you
don't get the other gmx tools. Since none of those tools are MPI-aware, it is
perfectly normal to build gmx without MPI and mdrun_mpi with MPI.
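
For example, the two builds typically come from two cmake configurations
roughly like these (a sketch using the option names from the 5.1 install
guide; the install prefix is your site's):

# the gmx tools, built without MPI
cmake .. -DGMX_BUILD_OWN_FFTW=ON \
      -DCMAKE_INSTALL_PREFIX=/share/apps/chemistry/gromacs-5.1
make install

# mdrun only, built with MPI and installed as mdrun_mpi
cmake .. -DGMX_MPI=ON -DGMX_BUILD_MDRUN_ONLY=ON -DGMX_BINARY_SUFFIX=_mpi \
      -DCMAKE_INSTALL_PREFIX=/share/apps/chemistry/gromacs-5.1
make install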

Mark



Re: [gmx-users] Low cpu utilization

2016-10-16 Thread Mahmood Naderan
Hi Mark,
I have a question here: what is the difference between

mpirun gmx_mpi mdrun
And
mpirun mdrun_mpi

?


Re: [gmx-users] Low cpu utilization

2016-10-16 Thread Mark Abraham
Hi,

GROMACS is compute bound when it is not network bound, but the output of ps
is barely informative. Looking inside the md.log file for the helpful
diagnostics mdrun prints is a fine start. Also, check out
http://manual.gromacs.org/documentation/2016/user-guide/mdrun-performance.html
for the basics. To know how much hardware makes sense to use for a given
simulation, you need to look at performance on a single core and on all the
cores of a node before worrying about adding a second node.
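
For example, something along these lines on one node (the .tpr name and the 10
cores per node are assumptions taken from this thread):

mpirun -np 1  mdrun_mpi -ntomp 1 -s topol.tpr -deffnm bench_np01
mpirun -np 10 mdrun_mpi -ntomp 1 -s topol.tpr -deffnm bench_np10
grep -B3 "Performance:" bench_np*.log

The ratio of the ns/day numbers tells you how well the run scales within one
node before any network is involved.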

Mark


Re: [gmx-users] Low cpu utilization

2016-10-16 Thread Parvez Mh
Not sure; I do not use Open MPI. You could try compiling and running a simple
MPI program to see whether you get a proper node allocation.
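
Something minimal like this would do (a sketch; hosts.txt is a placeholder for
whatever host file or scheduler allocation you launch with, and mpicc/mpirun
may need the full Open MPI path):

cat > mpi_hello.c << 'EOF'
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank, size;
    char host[256];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    gethostname(host, sizeof(host));
    printf("rank %d of %d running on %s\n", rank, size, host);
    MPI_Finalize();
    return 0;
}
EOF
mpicc mpi_hello.c -o mpi_hello
mpirun -np 2 --hostfile hosts.txt ./mpi_hello

If the two ranks report two different compute nodes, the allocation and the
process launch are fine at the MPI level.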

--Masrul


Re: [gmx-users] Low cpu utilization

2016-10-16 Thread Mahmood Naderan
> Where is -np option in mpirun?

Please see this

https://mail-archive.com/users@lists.open-mpi.org/msg30043.html
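
For completeness, an explicit equivalent inside the PBS job would look roughly
like this (Open MPI picks the allocation up by itself when it is built with
Torque/tm support, so this is normally not needed):

NP=$(wc -l < $PBS_NODEFILE)
mpirun -np $NP --hostfile $PBS_NODEFILE gromacs-5.1/bin/mdrun_mpi -v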



Regards,
Mahmood

Re: [gmx-users] Low cpu utilization

2016-10-16 Thread Mahmood Naderan
Well that is provided by nodes=2:ppn=10 in the PBS script.

Regards,
Mahmood





Re: [gmx-users] Low cpu utilization

2016-10-16 Thread Parvez Mh
Hi,

Where is the -np option in mpirun?

--Masrul

On Sun, Oct 16, 2016 at 12:45 PM, Mahmood Naderan wrote:

> Hi,
> A PBS script for a gromacs job has been submitted with the following
> content:
>
> #!/bin/bash
> #PBS -V
> #PBS -q default
> #PBS -j oe
> #PBS -l nodes=2:ppn=10
> #PBS -N LPN
> #PBS -o /home/dayer/LPN/mdout.out
> cd $PBS_O_WORKDIR
> mpirun gromacs-5.1/bin/mdrun_mpi -v
>
>
> As I ssh'ed into the nodes and looked at the mdrun_mpi processes, I noticed
> that the CPU utilization is not good!
>
>
> [root@compute-0-1 ~]# ps aux | grep mdrun_mpi
> dayer 7552 64.1  0.0 199224 21300 ?RNl  Oct15 1213:39
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 7553 56.8  0.0 201524 23044 ?RNl  Oct15 1074:47
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 7554 64.1  0.0 201112 22364 ?RNl  Oct15 1213:25
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 7555 56.5  0.0 198336 20408 ?RNl  Oct15 1070:17
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 7556 64.3  0.0 225796 48436 ?RNl  Oct15 1217:35
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 7557 56.1  0.0 198444 20404 ?RNl  Oct15 1062:26
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 7558 63.4  0.0 198996 20848 ?RNl  Oct15 1199:05
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 7562 56.2  0.0 197912 19736 ?RNl  Oct15 1062:57
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 7565 63.1  0.0 197008 19208 ?RNl  Oct15 1194:51
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 7569 56.7  0.0 227904 50584 ?RNl  Oct15 1072:33
> gromacs-5.1/bin/mdrun_mpi -v
>
>
>
> [root@compute-0-3 ~]# ps aux | grep mdrun_mpi
> dayer 1735  0.0  0.0 299192  4692 ?Sl   Oct15   0:03 mpirun
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 1740  9.5  0.0 209692 29224 ?RNl  Oct15 180:09
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 1741  9.6  0.0 200948 22784 ?RNl  Oct15 183:21
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 1742  9.3  0.0 200256 21980 ?RNl  Oct15 177:28
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 1743  9.5  0.0 197672 19100 ?RNl  Oct15 180:01
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 1744  9.6  0.0 228208 50920 ?RNl  Oct15 183:07
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 1746  9.3  0.0 199144 20588 ?RNl  Oct15 176:24
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 1749  9.5  0.0 201496 23156 ?RNl  Oct15 180:25
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 1751  9.1  0.0 200916 22884 ?RNl  Oct15 173:13
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 1755  9.3  0.0 198744 20616 ?RNl  Oct15 176:49
> gromacs-5.1/bin/mdrun_mpi -v
> dayer 1758  9.2  0.0 226792 49460 ?RNl  Oct15 174:12
> gromacs-5.1/bin/mdrun_mpi -v
>
>
>
> Please note that the third column is the CPU utilization.
> GROMACS is a compute-intensive application, so there should be little I/O or
> anything else holding it back.
>
>
> Please also note that on compute-0-3 the first process is "mpirun
> gromacs-5.1" while the others are only "gromacs-5.1".
>
>
> Any idea is welcomed.
>
> Regards,
> Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.