Re: [gmx-users] Water tutorial

2016-01-28 Thread Carmen Di Giovanni

Hi Arpita

http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/
http://www.gromacs.org/Documentation/Tutorials

Cheers
Carmen

- Original Message - 
From: "Arpita Srivastava" 

To: ; 
Sent: Thursday, January 28, 2016 3:57 PM
Subject: [gmx-users] Water tutorial



Dear Sir,

I am a new user of the GROMACS software. I need a tutorial to understand how
the software works. Please send me a tutorial on putting water molecules in a
box and simulating the system.

Thank you.


Re: [gmx-users] GPU low performance

2015-02-27 Thread Carmen Di Giovanni

I report the changes made to improve the performance of a molecular dynamics
run on a protein of 1925 atoms running on an NVIDIA Tesla K20 GPU:



- To limit the number of cores used in the calculation and pin the threads
  for better performance (option -pin on):

      gmx_mpi mdrun ... -ntomp 16 -pin on

  where -ntomp is the number of OpenMP threads.

- The clock frequency was increased with the NVIDIA management tool from the
  default 705 MHz to 758 MHz.

- To reduce the runtime spent calculating energies every step, in the .mdp
  file:

      nstcalcenergy = -1

The actual performance is about 7 ns/day against 2 ns/day without these
changes.
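
For completeness, the clock change corresponds to the nvidia-smi commands
Szilárd suggested earlier in the thread (the 2600,758 values are specific to
this Tesla K20c; check nvidia-smi -q for the clocks your own card supports):

    nvidia-smi -pm 1          # enable persistence mode so the setting survives driver unloads
    nvidia-smi -ac 2600,758   # application clocks: memory 2600 MHz, graphics 758 MHz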

Carmen





- Original Message - 
From: Szilárd Páll pall.szil...@gmail.com

To: Carmen Di Giovanni cdigi...@unina.it
Cc: Discussion list for GROMACS users gmx-us...@gromacs.org
Sent: Friday, February 20, 2015 1:25 AM
Subject: Re: [gmx-users] GPU low performance


Please consult the manual and wiki.


--
Szilárd


On Thu, Feb 19, 2015 at 6:44 PM, Carmen Di Giovanni cdigi...@unina.it 
wrote:


Szilard,
about:

Fatal error
1) Setting the number of thread-MPI threads is only supported with
thread-MPI
and Gromacs was compiled without thread-MPI
For more information and tips for troubleshooting, please check the 
GROMACS

website at http://www.gromacs.org/Documentation/Errors
---
The error quite clearly explains that you're trying to use mdrun's
built-in thread-MPI parallelization, but you have a binary that does
not support it. Use the MPI launching syntax instead.

Can you help me with the MPI launching syntax? What is the suitable
command?


A previous poster has already pointed you to the Acceleration and
parallelization page which, I believe, describes the matter in detail.
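
For reference, with an MPI-enabled binary such as this gmx_mpi build, the run
is launched through the MPI starter instead of with -ntmpi; a minimal sketch
(assuming Open MPI and, say, 2 ranks - adjust the rank and thread counts to
your hardware) would be:

    mpirun -np 2 gmx_mpi mdrun -deffnm nvt -ntomp 8 -pin on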




2) Have you looked at the performance table at the end of the log?
You are wasting a large amount of runtime calculating energies every
step, and this overhead comes in multiple places in the code - one of
them being the non-timed code parts, which typically take 3%.


How can I reduce the runtime spent calculating the energies every step?
Do I need to modify something in the .mdp file?


This is discussed thoroughly in the manual; you should be looking for
the nstcalcenergy option.
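
For reference, the relevant .mdp fragment would look something like this (the
nstenergy value below is only an example):

    nstcalcenergy = -1     ; compute energies only on steps where they are actually needed
    nstenergy     = 1000   ; how often energies are written to the energy file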



Thank you in advance

Carmen
--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Szilárd Páll pall.szil...@gmail.com:


On Thu, Feb 19, 2015 at 11:32 AM, Carmen Di Giovanni cdigi...@unina.it
wrote:


Dear Szilárd,

1) the output of command nvidia-smi -ac 2600,758 is

[root@localhost test_gpu]# nvidia-smi -ac 2600,758
Applications clocks set to (MEM 2600, SM 758) for GPU :03:00.0

Warning: persistence mode is disabled on this device. This settings will
go
back to default as soon as driver unloads (e.g. last application like
nvidia-smi or cuda application terminates). Run with [--help | -h] 
switch

to
get more information on how to enable persistence mode.



run nvidia-smi -pm 1 if you want to avoid that.


Setting applications clocks is not supported for GPU :82:00.0.
Treating as warning and moving on.
All done.


2) I decreased nstlist to 20
However, when I run the command:
 gmx_mpi mdrun -deffnm nvt -ntmpi 8 -gpu_id 
it gives me a fatal error:

GROMACS:  gmx mdrun, VERSION 5.0
Executable:   /opt/SW/gromacs-5.0/build/mpi-cuda/bin/gmx_mpi
Library dir:  /opt/SW/gromacs-5.0/share/top
Command line:
  gmx_mpi mdrun -deffnm nvt -ntmpi 8 -gpu_id 


Back Off! I just backed up nvt.log to ./#nvt.log.8#
Reading file nvt.tpr, VERSION 5.0 (single precision)
Changing nstlist from 10 to 40, rlist from 1 to 1.097


---
Program gmx_mpi, VERSION 5.0
Source code file: /opt/SW/gromacs-5.0/src/programs/mdrun/runner.c, line:
876

Fatal error:
Setting the number of thread-MPI threads is only supported with
thread-MPI
and Gromacs was compiled without thread-MPI
For more information and tips for troubleshooting, please check the
GROMACS
website at http://www.gromacs.org/Documentation/Errors
---



The error quite clearly explains that you're trying to use mdrun's
built-in thread-MPI parallelization, but you have a binary that does
not support it. Use the MPI launching syntax instead.


Halting program gmx_mpi

gcq#223: Jesus Not Only Saves, He Also Frequently Makes Backups. 
(Myron

Bradshaw)


--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them

Re: [gmx-users] GTX980 performance

2015-02-27 Thread Carmen Di Giovanni
Szilárd, thank you for the useful advice about the configuration of the new
server machine.


I report the changes made to improve the performance of a molecular dynamics
run on a protein of 1925 atoms running on an NVIDIA Tesla K20 GPU:

- To limit the number of cores used in the calculation and pin the threads
  for better performance (option -pin on):

      gmx_mpi mdrun ... -ntomp 16 -pin on

  where -ntomp is the number of OpenMP threads.

- The clock frequency was increased with the NVIDIA management tool from the
  default 705 MHz to 758 MHz.

- To reduce the runtime spent calculating energies every step, in the .mdp
  file:

      nstcalcenergy = -1

The actual performance is about 7 ns/day against 2 ns/day without these
changes.


Carmen




- Original Message - 
From: Szilárd Páll pall.szil...@gmail.com

To: Carmen Di Giovanni cdigi...@unina.it
Cc: Discussion list for GROMACS users gmx-us...@gromacs.org
Sent: Thursday, February 26, 2015 2:37 PM
Subject: Re: GTX980 performance


On Wed, Feb 25, 2015 at 1:21 PM, Carmen Di Giovanni cdigi...@unina.it 
wrote:

A special thank you to Szilárd Páll for the good advice in the "GPU low
performance" discussion.
The performance of the calculation is much improved after his suggestions.



I'm glad it helped. Could you post the changes you made to your
mdp/command line and the results these gave? It would allow others to
learn from it.


Dear GROMACS users and developers,

we are thinking of buying a new Tyan server machine with these features:

SERVER SYSTEM TYAN FT48 - Tower/Rack, Dual Xeon, 8x SATA

N. 2 CPU Intel Xeon E5-2620 2.0 GHz - 6 core, 15M cache, LGA2011

N. 4 Nvidia GTX980 graphics cards, 4 GB GDDR5, PCIe 3.0, 2x DVI, HD

N. 4 DDR3 8 GB 1600 MHz
HARD DISK 1 TB SATA 3 WD

I know that the GTX980 offers good performance for GROMACS 5.0.
What are your views on this?


That CPU-GPU combination will give heavily CPU-bound GROMACS runs;
those GTX 980s are 1.5-2x faster than what you can make use of with those
CPUs. Conversely, if you get 6-core 3 GHz CPUs, you'll see a huge,
nearly 50% improvement in performance.

This will change in the future, but at least with GROMACS v5.1 and
earlier, the performance on this machine won't be much higher than
with a single fast CPU and one GTX 980.

For better performance with GROMACS, consider getting better CPUs in
this machine or for the same (or less) money get two workstations with
i7 4930K or 4960X CPUs.


--
Szilárd



Thank you in advance
Carmen




Carmen Di Giovanni, PhD
Postdoctoral Researcher
Dept. of Pharmacy
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it





[gmx-users] GTX980 performance

2015-02-25 Thread Carmen Di Giovanni
A special thank you to Szilárd Páll for the good advice in the "GPU low
performance" discussion.
The performance of the calculation is much improved after his suggestions.


Dear GROMACS users and developers, 

we are thinking of buying a new Tyan server machine with these features:
  - SERVER SYSTEM TYAN FT48 - Tower/Rack, Dual Xeon, 8x SATA
  - N. 2 CPU Intel Xeon E5-2620 2.0 GHz - 6 core, 15M cache, LGA2011
  - N. 4 Nvidia GTX980 graphics cards, 4 GB GDDR5, PCIe 3.0, 2x DVI, HD
  - N. 4 DDR3 8 GB 1600 MHz
  - HARD DISK 1 TB SATA 3 WD
I know that the GTX980 offers good performance for GROMACS 5.0.
What are your views on this?


Thank you in advance  
Carmen 




Carmen Di Giovanni, PhD
Postdoctoral Researcher
Dept. of Pharmacy
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it


Re: [gmx-users] GPU low performance

2015-02-19 Thread Carmen Di Giovanni


--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it

Hi Szilard,
I also post the output of the command:
 gmx_mpi mdrun -deffnm nvt -nb gpu -ntomp 16 -pin on

.
Back Off! I just backed up nvt.log to ./#nvt.log.10#
Reading file nvt.tpr, VERSION 5.0 (single precision)
Changing nstlist from 10 to 40, rlist from 1 to 1.097

Using 1 MPI process
Using 16 OpenMP threads

2 GPUs detected on host localhost.localdomain:
  #0: NVIDIA Tesla K20c, compute cap.: 3.5, ECC: yes, stat: compatible
  #1: NVIDIA GeForce GTX 650, compute cap.: 3.0, ECC:  no, stat: compatible

1 GPU auto-selected for this run.
Mapping of GPU to the 1 PP rank in this node: #0


NOTE: potentially sub-optimal launch configuration, gmx_mpi started with less
  PP MPI process per node than GPUs available.
  Each PP MPI process can use only one GPU, 1 GPU per node will be used
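
For what it's worth, with this MPI build both detected GPUs could be used by
starting one PP rank per GPU and mapping them explicitly, for example:

    mpirun -np 2 gmx_mpi mdrun -deffnm nvt -ntomp 8 -pin on -gpu_id 01

Here, though, the second card is a much slower GTX 650, so letting mdrun pick
only the K20c, as it did above, is likely the faster choice anyway.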





Quoting Carmen Di Giovanni cdigi...@unina.it:


Dear all, the full log file is too big.
However, in the middle part of it there is only information about
the energies at each step. The first part is already posted.

So I post the final part of it:
-
   Step   Time Lambda
   10002.00.0

Writing checkpoint, step 1000 at Mon Dec 29 13:16:22 2014


   Energies (kJ/mol)
   G96AngleProper Dih.  Improper Dih.  LJ-14 Coulomb-14
9.34206e+034.14342e+032.79172e+03   -1.75465e+027.99811e+04
LJ (SR)   Coulomb (SR)   Coul. recip.  PotentialKinetic En.
1.01135e+06   -7.13064e+062.01349e+04   -6.00306e+061.08201e+06
   Total Energy  Conserved En.Temperature Pressure (bar)   Constr. rmsd
   -4.92106e+06   -5.86747e+062.99426e+021.29480e+022.16280e-05

==  ###  ==
  A V E R A G E S  
==  ###  ==

Statistics over 1001 steps using 1001 frames

   Energies (kJ/mol)
   G96AngleProper Dih.  Improper Dih.  LJ-14 Coulomb-14
9.45818e+034.30665e+032.92407e+03   -1.75556e+028.02473e+04
LJ (SR)   Coulomb (SR)   Coul. recip.  PotentialKinetic En.
1.01284e+06   -7.13138e+062.01510e+04   -6.00163e+061.08407e+06
   Total Energy  Conserved En.Temperature Pressure (bar)   Constr. rmsd
   -4.91756e+06   -5.38519e+062.8e+021.37549e+020.0e+00

   Total Virial (kJ/mol)
3.42887e+051.63625e+011.23658e+02
1.67406e+013.42916e+05   -4.27834e+01
1.23997e+02   -4.29636e+013.42881e+05

   Pressure (bar)
1.37573e+027.50214e-02   -1.03916e-01
7.22048e-021.37623e+02   -1.66417e-02
   -1.06444e-01   -1.52990e-021.37453e+02


M E G A - F L O P S   A C C O U N T I N G

 NB=Group-cutoff nonbonded kernelsNxN=N-by-N cluster Verlet kernels
 RF=Reaction-Field  VdW=Van der Waals  QSTab=quadratic-spline table
 W3=SPC/TIP3p  W4=TIP4p (single or pairs)
 VF=Potential and force  V=Potential only  F=Force only

 Computing:                        M-Number          M-Flops   % Flops
-----------------------------------------------------------------------------
 Pair Search distance check      16343508.605344    147091577.448     0.0
 NxN Ewald Elec. + LJ [VF]     5072118956.506304 542716728346.174    98.1
 1,4 nonbonded interactions         95860.009586      8627400.863     0.0
 Calc Weights                    13039741.303974    469430686.943     0.1
 Spread Q Bspline               278181147.818112    556362295.636     0.1
 Gather F Bspline               278181147.818112   1669086886.909     0.3
 3D-FFT                         880787450.909824   7046299607.279     1.3
 Solve PME                         163837.909504     10485626.208     0.0
 Shift-X                           108664.934658       651989.608     0.0
 Angles                             86090.008609     14463121.446     0.0
 Propers                            31380.003138      7186020.719     0.0
 Impropers                          28790.002879      5988320.599     0.0
 Virial                           4347030.434703     78246547.825     0.0
 Stop-CM                          4346580.869316     43465808.693     0.0
 Calc-Ekin                        4346580.869316    117357683.472     0.0
 Lincs                              59130.017739      3547801.064     0.0
 Lincs-Mat                        1033080.309924      4132321.240     0.0
 Constraint-V                     4406580.881316     35252647.051     0.0
 Constraint-Vir                   4347450.434745    104338810.434     0.0
 Settle

Re: [gmx-users] GPU low performance

2015-02-19 Thread Carmen Di Giovanni

Dear Szilárd,

1) the output of command nvidia-smi -ac 2600,758 is

[root@localhost test_gpu]# nvidia-smi -ac 2600,758
Applications clocks set to (MEM 2600, SM 758) for GPU :03:00.0

Warning: persistence mode is disabled on this device. This settings
will go back to default as soon as driver unloads (e.g. last
application like nvidia-smi or cuda application terminates). Run with
[--help | -h] switch to get more information on how to enable
persistence mode.

Setting applications clocks is not supported for GPU :82:00.0.
Treating as warning and moving on.
All done.

2) I decreased nstlist to 20
However, when I run the command:
gmx_mpi mdrun -deffnm nvt -ntmpi 8 -gpu_id 
it gives me a fatal error:

GROMACS: gmx mdrun, VERSION 5.0
Executable: /opt/SW/gromacs-5.0/build/mpi-cuda/bin/gmx_mpi
Library dir: /opt/SW/gromacs-5.0/share/top
Command line:
gmx_mpi mdrun -deffnm nvt -ntmpi 8 -gpu_id 


Back Off! I just backed up nvt.log to ./#nvt.log.8#
Reading file nvt.tpr, VERSION 5.0 (single precision)
Changing nstlist from 10 to 40, rlist from 1 to 1.097


---
Program gmx_mpi, VERSION 5.0
Source code file: /opt/SW/gromacs-5.0/src/programs/mdrun/runner.c, line: 876

Fatal error:
Setting the number of thread-MPI threads is only supported with
thread-MPI and Gromacs was compiled without thread-MPI
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

Halting program gmx_mpi

gcq#223: Jesus Not Only Saves, He Also Frequently Makes Backups.
(Myron Bradshaw)

--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
-


3) I don't understand how I can reduce the Rest time.

Carmen



--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it

- Original Message - 
From: Szilárd Páll pall.szil...@gmail.com
To: Discussion list for GROMACS users gmx-us...@gromacs.org; Carmen Di 
Giovanni cdigi...@unina.it

Sent: Wednesday, February 18, 2015 6:38 PM
Subject: Re: [gmx-users] GPU low performance


Please keep the mails on the list.

On Wed, Feb 18, 2015 at 6:32 PM, Carmen Di Giovanni cdigi...@unina.it 
wrote:

nvidia-smi -q -g 0

==NVSMI LOG==

Timestamp   : Wed Feb 18 18:30:01 2015
Driver Version  : 340.24

Attached GPUs   : 2
GPU :03:00.0
Product Name: Tesla K20c

[...

Clocks
Graphics: 705 MHz
SM  : 705 MHz
Memory  : 2600 MHz
Applications Clocks
Graphics: 705 MHz
Memory  : 2600 MHz
Default Applications Clocks
Graphics: 705 MHz
Memory  : 2600 MHz
Max Clocks
Graphics: 758 MHz
SM  : 758 MHz
Memory  : 2600 MHz


This is the relevant part I was looking for. The Tesla K20c supports
setting a so-called application clock, which essentially means that
you can bump its clock frequency using the NVIDIA management tool
nvidia-smi from the default 705 MHz to 758 MHz.

Use the command:
nvidia-smi -ac 2600,758

This should give you another 7% or so (I didn't remember the correct
max clock before, that's why I was guessing 5%).

Cheers,
Szilard


Clock Policy
Auto Boost  : N/A
Auto Boost Default  : N/A
Compute Processes
Process ID  : 19441
Name: gmx_mpi
Used GPU Memory : 110 MiB

[carmendigi@localhost test_gpu]$







--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Szilárd Páll pall.szil...@gmail.com:


As I suggested above please use pastebin.com or similar!
--
Szilárd


On Wed, Feb 18, 2015 at 6:09 PM, Carmen Di Giovanni cdigi...@unina.it
wrote:


Dear Szilárd, it's not possible to attach the full log file in the forum
mail because it is too big.
I will send it to your private mail address.
Thank you in advance
Carmen


--
Carmen Di

Re: [gmx-users] GPU low performance

2015-02-19 Thread Carmen Di Giovanni

Dear Szilárd,

1) the output of command nvidia-smi -ac 2600,758 is

[root@localhost test_gpu]# nvidia-smi -ac 2600,758
Applications clocks set to (MEM 2600, SM 758) for GPU :03:00.0

Warning: persistence mode is disabled on this device. This settings  
will go back to default as soon as driver unloads (e.g. last  
application like nvidia-smi or cuda application terminates). Run with  
[--help | -h] switch to get more information on how to enable  
persistence mode.


Setting applications clocks is not supported for GPU :82:00.0.
Treating as warning and moving on.
All done.

2) I decreased nstlist to 20
However, when I run the command:
 gmx_mpi mdrun -deffnm nvt -ntmpi 8 -gpu_id 
it gives me a fatal error:

GROMACS:  gmx mdrun, VERSION 5.0
Executable:   /opt/SW/gromacs-5.0/build/mpi-cuda/bin/gmx_mpi
Library dir:  /opt/SW/gromacs-5.0/share/top
Command line:
  gmx_mpi mdrun -deffnm nvt -ntmpi 8 -gpu_id 


Back Off! I just backed up nvt.log to ./#nvt.log.8#
Reading file nvt.tpr, VERSION 5.0 (single precision)
Changing nstlist from 10 to 40, rlist from 1 to 1.097


---
Program gmx_mpi, VERSION 5.0
Source code file: /opt/SW/gromacs-5.0/src/programs/mdrun/runner.c, line: 876

Fatal error:
Setting the number of thread-MPI threads is only supported with  
thread-MPI and Gromacs was compiled without thread-MPI

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

Halting program gmx_mpi

gcq#223: Jesus Not Only Saves, He Also Frequently Makes Backups.  
(Myron Bradshaw)


--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
-


4) I don't understand how I can reduce the Rest time.

Carmen



--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Szilárd Páll pall.szil...@gmail.com:


Please keep the mails on the list.

On Wed, Feb 18, 2015 at 6:32 PM, Carmen Di Giovanni  
cdigi...@unina.it wrote:

nvidia-smi -q -g 0

==NVSMI LOG==

Timestamp   : Wed Feb 18 18:30:01 2015
Driver Version  : 340.24

Attached GPUs   : 2
GPU :03:00.0
Product Name: Tesla K20c

[...

Clocks
Graphics: 705 MHz
SM  : 705 MHz
Memory  : 2600 MHz
Applications Clocks
Graphics: 705 MHz
Memory  : 2600 MHz
Default Applications Clocks
Graphics: 705 MHz
Memory  : 2600 MHz
Max Clocks
Graphics: 758 MHz
SM  : 758 MHz
Memory  : 2600 MHz


This is the relevant part I was looking for. The Tesla K20c supports
setting a so-called application clock, which essentially means that
you can bump its clock frequency using the NVIDIA management tool
nvidia-smi from the default 705 MHz to 758 MHz.

Use the command:
nvidia-smi -ac 2600,758

This should give you another 7% or so (I didn't remember the correct
max clock before, that's why I was guessing 5%).

Cheers,
Szilard


Clock Policy
Auto Boost  : N/A
Auto Boost Default  : N/A
Compute Processes
Process ID  : 19441
Name: gmx_mpi
Used GPU Memory : 110 MiB

[carmendigi@localhost test_gpu]$







--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Szilárd Páll pall.szil...@gmail.com:


As I suggested above please use pastebin.com or similar!
--
Szilárd


On Wed, Feb 18, 2015 at 6:09 PM, Carmen Di Giovanni cdigi...@unina.it
wrote:


Dear Szilárd, it's not possible to attach the full log file in the forum
mail because it is too big.
I will send it to your private mail address.
Thank you in advance
Carmen


--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081

Re: [gmx-users] GPU low performance

2015-02-19 Thread Carmen Di Giovanni


Szilard,
about:

Fatal error
1) Setting the number of thread-MPI threads is only supported with thread-MPI
and Gromacs was compiled without thread-MPI
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
The error quite clearly explains that you're trying to use mdrun's
built-in thread-MPI parallelization, but you have a binary that does
not support it. Use the MPI launching syntax instead.

Can you help me with the MPI launching syntax? What is the suitable
command?




2) Have you looked at the performance table at the end of the log?
You are wasting a large amount of runtime calculating energies every
step, and this overhead comes in multiple places in the code - one of
them being the non-timed code parts, which typically take 3%.


How can I reduce the runtime spent calculating the energies every step?
Do I need to modify something in the .mdp file?


Thank you in advance
Carmen
--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Szilárd Páll pall.szil...@gmail.com:

On Thu, Feb 19, 2015 at 11:32 AM, Carmen Di Giovanni  
cdigi...@unina.it wrote:

Dear Szilárd,

1) the output of command nvidia-smi -ac 2600,758 is

[root@localhost test_gpu]# nvidia-smi -ac 2600,758
Applications clocks set to (MEM 2600, SM 758) for GPU :03:00.0

Warning: persistence mode is disabled on this device. This settings will go
back to default as soon as driver unloads (e.g. last application like
nvidia-smi or cuda application terminates). Run with [--help | -h] switch to
get more information on how to enable persistence mode.


run nvidia-smi -pm 1 if you want to avoid that.


Setting applications clocks is not supported for GPU :82:00.0.
Treating as warning and moving on.
All done.

2) I decreased nstlist to 20
However, when I run the command:
 gmx_mpi mdrun -deffnm nvt -ntmpi 8 -gpu_id 
it gives me a fatal error:

GROMACS:  gmx mdrun, VERSION 5.0
Executable:   /opt/SW/gromacs-5.0/build/mpi-cuda/bin/gmx_mpi
Library dir:  /opt/SW/gromacs-5.0/share/top
Command line:
  gmx_mpi mdrun -deffnm nvt -ntmpi 8 -gpu_id 


Back Off! I just backed up nvt.log to ./#nvt.log.8#
Reading file nvt.tpr, VERSION 5.0 (single precision)
Changing nstlist from 10 to 40, rlist from 1 to 1.097


---
Program gmx_mpi, VERSION 5.0
Source code file: /opt/SW/gromacs-5.0/src/programs/mdrun/runner.c, line: 876

Fatal error:
Setting the number of thread-MPI threads is only supported with thread-MPI
and Gromacs was compiled without thread-MPI
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---


The error quite clearly explains that you're trying to use mdrun's
built-in thread-MPI parallelization, but you have a binary that does
not support it. Use the MPI launching syntax instead.


Halting program gmx_mpi

gcq#223: Jesus Not Only Saves, He Also Frequently Makes Backups. (Myron
Bradshaw)

--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
-


4) I don't understand how I can reduce the Rest time.


Have you looked at the performance table at the end of the log?
You are wasting a large amount of runtime calculating energies every
step, and this overhead comes in multiple places in the code - one of
them being the non-timed code parts, which typically take 3%.

Cheers,
--
Szilard




Carmen



--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Szilárd Páll pall.szil...@gmail.com:


Please keep the mails on the list.

On Wed, Feb 18, 2015 at 6:32 PM, Carmen Di Giovanni cdigi...@unina.it
wrote:


nvidia-smi -q -g 0

==NVSMI LOG==

Timestamp   : Wed Feb 18 18:30:01 2015
Driver Version  : 340.24

Attached GPUs   : 2
GPU :03:00.0
Product Name: Tesla K20c


[...


Clocks
Graphics: 705 MHz
SM  : 705 MHz
Memory  : 2600 MHz
Applications

Re: [gmx-users] GPU low performance

2015-02-18 Thread Carmen Di Giovanni
 Constraints                1   32       1001   34210.142    2846293.908     5.8
 Rest                                            92338.781    7682613.897    15.6
--------------------------------------------------------------------------------
 Total                                          593302.894   49362976.023   100.0
--------------------------------------------------------------------------------
 Breakdown of PME mesh computation
--------------------------------------------------------------------------------
 PME spread/gather          1   32       2002  144767.207   12044674.424    24.4
 PME 3D-FFT                 1   32       2002   39499.157    3286341.501     6.7
 PME solve Elec             1   32       1001    9947.340     827621.589     1.7
--------------------------------------------------------------------------------

 GPU timings
--------------------------------------------------------------------------------
 Computing:                       Count  Wall t (s)      ms/step         %
--------------------------------------------------------------------------------
 Pair list H2D                   250001     935.751        3.743       0.2
 X / q H2D                         1001   11509.209        1.151       2.8
 Nonbonded F+ene k.                 975  377111.949       38.678      92.0
 Nonbonded F+ene+prune k.        250001   12049.010       48.196       2.9
 F D2H                             1001    8129.292        0.813       2.0
--------------------------------------------------------------------------------
 Total                                   409735.211       40.974     100.0
--------------------------------------------------------------------------------

Force evaluation time GPU/CPU: 40.974 ms/24.437 ms = 1.677
For optimal performance this ratio should be close to 1!


NOTE: The GPU has 20% more load than the CPU. This imbalance causes
  performance loss, consider using a shorter cut-off and a finer PME grid.

               Core t (s)   Wall t (s)        (%)
       Time: 18713831.228   593302.894     3154.2
                                 6d20h48:22
                 (ns/day)    (hour/ns)
Performance:        2.913        8.240
Finished mdrun on rank 0 Mon Dec 29 13:16:24 2014


---
thank you in advance
Carmen



--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Szilárd Páll pall.szil...@gmail.com:


We need a *full* log file, not parts of it!

You can try running with -ntomp 16 -pin on - it may be a bit faster
to not use HyperThreading.
--
Szilárd


On Wed, Feb 18, 2015 at 5:20 PM, Carmen Di Giovanni  
cdigi...@unina.it wrote:

Justin,
the problem is evident for all calculations.
This is the log file  of a recent run:



Log file opened on Mon Dec 22 16:28:00 2014
Host: localhost.localdomain  pid: 8378  rank ID: 0  number of ranks:  1
GROMACS:gmx mdrun, VERSION 5.0

GROMACS is written by:
Emile Apol Rossen Apostolov   Herman J.C. Berendsen Par Bjelkmar
Aldert van Buuren  Rudi van DrunenAnton Feenstra Sebastian Fritsch
Gerrit GroenhofChristoph Junghans Peter Kasson   Carsten Kutzner
Per LarssonJustin A. Lemkul   Magnus LundborgPieter Meulenhoff
Erik Marklund  Teemu Murtola  Szilard Pall   Sander Pronk
Roland Schulz  Alexey ShvetsovMichael Shirts Alfons Sijbers
Peter Tieleman Christian Wennberg Maarten Wolf
and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2014, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:  gmx mdrun, VERSION 5.0
Executable:   /opt/SW/gromacs-5.0/build/mpi-cuda/bin/gmx_mpi
Library dir:  /opt/SW/gromacs-5.0/share/top
Command line:
  gmx_mpi mdrun -deffnm prod_20ns

Gromacs version:VERSION 5.0
Precision:  single
Memory model:   64 bit
MPI library:MPI
OpenMP support: enabled
GPU support:enabled
invsqrt routine:gmx_software_invsqrt(x)
SIMD instructions:  AVX_256
FFT library:fftw-3.3.3-sse2
RDTSCP usage:   enabled
C++11 compilation:  disabled
TNG support:enabled
Tracing support:disabled
Built on:   Thu Jul 31 18:30:37 CEST 2014
Built by:   root@localhost.localdomain [CMAKE]
Build OS/arch:  Linux 2.6.32-431.el6.x86_64 x86_64
Build CPU vendor:   GenuineIntel
Build CPU

Re: [gmx-users] GPU low performance

2015-02-18 Thread Carmen Di Giovanni
2.97359e+03   -1.93107e+028.05534e+04
LJ (SR)   Coulomb (SR)   Coul. recip.  PotentialKinetic En.
1.01340e+06   -7.13271e+062.01361e+04   -6.00175e+061.09887e+06
   Total Energy  Conserved En.Temperature Pressure (bar)   Constr. rmsd
   -4.90288e+06   -4.90288e+063.04092e+021.70897e+022.16683e-05

step   80: timed with pme grid 128 128 128, coulomb cutoff 1.200:  
6279.0 M-cycles
step  160: timed with pme grid 112 112 112, coulomb cutoff 1.306:  
6962.2 M-cycles
step  240: timed with pme grid 100 100 100, coulomb cutoff 1.463:  
8406.5 M-cycles
step  320: timed with pme grid 128 128 128, coulomb cutoff 1.200:  
6424.0 M-cycles
step  400: timed with pme grid 120 120 120, coulomb cutoff 1.219:  
6369.1 M-cycles
step  480: timed with pme grid 112 112 112, coulomb cutoff 1.306:  
7309.0 M-cycles
step  560: timed with pme grid 108 108 108, coulomb cutoff 1.355:  
7521.2 M-cycles
step  640: timed with pme grid 104 104 104, coulomb cutoff 1.407:  
8369.8 M-cycles

  optimal pme grid 128 128 128, coulomb cutoff 1.200
   Step   Time Lambda
   25005.00.0

   Energies (kJ/mol)
   G96AngleProper Dih.  Improper Dih.  LJ-14 Coulomb-14
9.72545e+034.33046e+032.98087e+03   -1.95794e+028.05967e+04
LJ (SR)   Coulomb (SR)   Coul. recip.  PotentialKinetic En.
1.01293e+06   -7.13110e+062.01689e+04   -6.00057e+061.08489e+06
   Total Energy  Conserved En.Temperature Pressure (bar)   Constr. rmsd
   -4.91567e+06   -4.90300e+063.00225e+021.36173e+022.25998e-05

   Step   Time Lambda
   5000   10.00.0



---

Thank you in advance

--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Justin Lemkul jalem...@vt.edu:




On 2/18/15 11:09 AM, Barnett, James W wrote:

What's your exact command?



A full .log file would be even better; it would tell us everything  
we need to know :)


-Justin

Have you reviewed this page:  
http://www.gromacs.org/Documentation/Acceleration_and_parallelization


James Wes Barnett
Ph.D. Candidate
Chemical and Biomolecular Engineering

Tulane University
Boggs Center for Energy and Biotechnology, Room 341-B


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se  
gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of  
Carmen Di Giovanni cdigi...@unina.it

Sent: Wednesday, February 18, 2015 10:06 AM
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] GPU low performance

I post the message of an MD run:


Force evaluation time GPU/CPU: 40.974 ms/24.437 ms = 1.677
For optimal performance this ratio should be close to 1!


NOTE: The GPU has 20% more load than the CPU. This imbalance causes
   performance loss, consider using a shorter cut-off and a  
finer PME grid.


How can I solve this problem?
Thank you in advance


--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Justin Lemkul jalem...@vt.edu:




On 2/18/15 10:30 AM, Carmen Di Giovanni wrote:

Dear all,
I'm working on a machine with an NVIDIA Tesla K20.
After a minimization on a protein of 1925 atoms, this is the message:

Force evaluation time GPU/CPU: 2.923 ms/116.774 ms = 0.025
For optimal performance this ratio should be close to 1!



Minimization is a poor indicator of performance.  Do a real MD run.



NOTE: The GPU has 25% less load than the CPU. This imbalance causes
performance loss.

Core t (s) Wall t (s) (%)
Time: 3289.010 205.891 1597.4
(steps/hour)
Performance: 8480.2
Finished mdrun on rank 0 Wed Feb 18 15:50:06 2015


Can I improve the performance?
At the moment I didn't find full information in the forum to solve
this problem.

The .log file is attached.



The list does not accept attachments.  If you wish to share a file,
upload it to a file-sharing service and provide a URL.  The full
.log is quite important for understanding your hardware,
optimizations, and seeing full details of the performance breakdown.
 But again, base your assessment on MD, not EM.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

Re: [gmx-users] GPU low performance

2015-02-18 Thread Carmen Di Giovanni

I post the message of an MD run:


Force evaluation time GPU/CPU: 40.974 ms/24.437 ms = 1.677
For optimal performance this ratio should be close to 1!


NOTE: The GPU has 20% more load than the CPU. This imbalance causes
  performance loss, consider using a shorter cut-off and a finer PME grid.

How can I solve this problem?
Thank you in advance
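
For what it's worth, acting on that NOTE means shifting work from the GPU
(short-range non-bonded) to the CPU (PME). An illustrative .mdp change in that
direction - the numbers are examples only, and cut-offs must still respect the
force field's requirements - would be:

    rcoulomb       = 1.0    ; shorter real-space cut-off -> less GPU work
    rvdw           = 1.0
    fourierspacing = 0.12   ; finer PME grid -> more PME (CPU) work

Note that with the Verlet scheme mdrun's automatic PME tuning (the "timed with
pme grid ..." lines seen in the log) already rebalances this at run time.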


--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Justin Lemkul jalem...@vt.edu:




On 2/18/15 10:30 AM, Carmen Di Giovanni wrote:

Dear all,
I'm working on a machine with an NVIDIA Tesla K20.
After a minimization on a protein of 1925 atoms, this is the message:

Force evaluation time GPU/CPU: 2.923 ms/116.774 ms = 0.025
For optimal performance this ratio should be close to 1!



Minimization is a poor indicator of performance.  Do a real MD run.



NOTE: The GPU has 25% less load than the CPU. This imbalance causes
performance loss.

Core t (s) Wall t (s) (%)
Time: 3289.010 205.891 1597.4
(steps/hour)
Performance: 8480.2
Finished mdrun on rank 0 Wed Feb 18 15:50:06 2015


Can I improve the performance?
At the moment I didn't find full information in the forum to solve this problem.
The .log file is attached.



The list does not accept attachments.  If you wish to share a file,  
upload it to a file-sharing service and provide a URL.  The full  
.log is quite important for understanding your hardware,  
optimizations, and seeing full details of the performance breakdown.  
 But again, base your assessment on MD, not EM.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] GPU low performance

2015-02-18 Thread Carmen Di Giovanni


Dear James, this is the command:
gmx_mpi mdrun -s prod_30ns.tpr  -deffnm prod_30ns -gpu_id 0
where gpu_id 0 is the NVIDIA Tesla K20.


--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Barnett, James W jbarn...@tulane.edu:


What's your exact command?

Have you reviewed this page:  
http://www.gromacs.org/Documentation/Acceleration_and_parallelization


James Wes Barnett
Ph.D. Candidate
Chemical and Biomolecular Engineering

Tulane University
Boggs Center for Energy and Biotechnology, Room 341-B


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se  
gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of  
Carmen Di Giovanni cdigi...@unina.it

Sent: Wednesday, February 18, 2015 10:06 AM
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] GPU low performance

I post the message of an MD run:


Force evaluation time GPU/CPU: 40.974 ms/24.437 ms = 1.677
For optimal performance this ratio should be close to 1!


NOTE: The GPU has 20% more load than the CPU. This imbalance causes
   performance loss, consider using a shorter cut-off and a  
finer PME grid.


How can I solve this problem?
Thank you in advance


--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Justin Lemkul jalem...@vt.edu:




On 2/18/15 10:30 AM, Carmen Di Giovanni wrote:

Dear all,
I'm working on a machine with an NVIDIA Tesla K20.
After a minimization on a protein of 1925 atoms, this is the message:

Force evaluation time GPU/CPU: 2.923 ms/116.774 ms = 0.025
For optimal performance this ratio should be close to 1!



Minimization is a poor indicator of performance.  Do a real MD run.



NOTE: The GPU has 25% less load than the CPU. This imbalance causes
performance loss.

Core t (s) Wall t (s) (%)
Time: 3289.010 205.891 1597.4
(steps/hour)
Performance: 8480.2
Finished mdrun on rank 0 Wed Feb 18 15:50:06 2015


Can I improve the performance?
At the moment I didn't find full information in the forum to solve
this problem.

The .log file is attached.



The list does not accept attachments.  If you wish to share a file,
upload it to a file-sharing service and provide a URL.  The full
.log is quite important for understanding your hardware,
optimizations, and seeing full details of the performance breakdown.
 But again, base your assessment on MD, not EM.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==









[gmx-users] GPU low performance

2015-02-18 Thread Carmen Di Giovanni
Dear all,
I'm working on a machine with an NVIDIA Tesla K20.
After a minimization on a protein of 1925 atoms, this is the message:

Force evaluation time GPU/CPU: 2.923 ms/116.774 ms = 0.025
For optimal performance this ratio should be close to 1!


NOTE: The GPU has 25% less load than the CPU. This imbalance causes
performance loss.

Core t (s) Wall t (s) (%)
Time: 3289.010 205.891 1597.4
(steps/hour)
Performance: 8480.2
Finished mdrun on rank 0 Wed Feb 18 15:50:06 2015


Can I improve the performance?
At the moment I didn't find full information in the forum to solve this problem.
The .log file is attached.

thank you in advance
Carmen Di Giovanni


-- 
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it


Carmen Di Giovanni, PhD
Postdoctoral Researcher
Dept. of Pharmacy
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it


Re: [gmx-users] GPU Installation

2015-02-16 Thread Carmen Di Giovanni

Many thanks Mark






- Original Message - 
From: Mark Abraham mark.j.abra...@gmail.com

To: Discussion list for GROMACS users gmx-us...@gromacs.org
Sent: Monday, February 16, 2015 1:15 PM
Subject: Re: [gmx-users] GPU Installation



Hi,

Technically yes, but usefully, no. I suspect all GPU MD implementations need
several tens of thousands of particles per GPU to work efficiently, and the
more GPUs, the more issues there are to manage (for software and hardware).
The GROMACS implementation needs a comparably tasty CPU, which you probably
can't get for 8 GTX Titans. So for GROMACS, I would think more in terms of
4 Titans per dual-socket Haswell node.

Mark

On Mon, Feb 16, 2015 at 11:30 AM, Carmen Di Giovanni cdigi...@unina.it
wrote:


Dear all,
I would like to know if GROMACS can be installed on 8 GTX Titan GPUs in
parallel.
Thank you in advance
Carmen



Carmen Di Giovanni, PhD
Postdoctoral Researcher
Dept. of Pharmacy
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it


[gmx-users] GPU Installation

2015-02-16 Thread Carmen Di Giovanni
Dear all,
I would like to know if GROMACS can be installed on 8 GTX Titan GPUs in parallel.
Thank you in advance
Carmen 



Carmen Di Giovanni, PhD
Postdoctoral Researcher
Dept. of Pharmacy
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it