Re: [gmx-users] mdrun -nsteps

2015-02-18 Thread Sabine Reisser

Alright, thanks!

Am 17.02.2015 13:48, schrieb Mark Abraham:

Hi,

The second call to mdrun will run 10000 steps if it can. Whether that is a
continuation or not depends on what checkpoint mdrun finds, etc.

gmx mdrun -h says

-nsteps Run this number of steps, overrides .mdp option

This is true, from the point of view of the next MD simulation part, which
is what mdrun does. That this part is just one piece of a bigger picture
isn't something mdrun needs to help you manage, I think.

Using the word "append" would be ambiguous, because gmx mdrun -append
refers to running some number of steps and concatenating the output files
(or not, with -noappend). "Continue" doesn't fully work either, because it
implies the existence of a previous simulation part.

The main alternative would be defining -nsteps as "do this total number of
steps, starting from the checkpoint (if any)". This isn't very usable
unless you know for sure what is in your checkpoint. The current meaning
of -nsteps makes it easy to write a job submission script, e.g. for
2 hours doing 50,000 more steps - which is awkward otherwise.
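
For illustration, a minimal job script in that spirit might look as
follows (file names hypothetical; -cpi picks up the checkpoint, -maxh
stops the run cleanly before the time limit):

# extend the current simulation part by 50,000 steps within a 2-hour slot
gmx mdrun -deffnm prod -cpi prod.cpt -nsteps 50000 -maxh 1.95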

Cheers,

Mark


On Tue, Feb 17, 2015 at 12:10 PM, Sabine Reisser sabine.reis...@kit.edu
wrote:


Hi,

just to be sure: is it intended that the -nsteps option in mdrun (which
is a great feature in general!) actually appends the given number of steps
rather than overriding the mdp option?
This is what I do:
- grompp with an mdp file with 5000 steps
- mdrun
- mdrun -nsteps 10000

The second mdrun runs until 15000 steps, not 10000 steps. Is this
intended? If yes, the option description is somewhat misleading.

Cheers
Sabine





--
Dipl. Phys. Sabine Reißer
Karlsruhe Institute of Technology (KIT)
Institute of Physical Chemistry

Phone +49 (0) 721 / 608-45070



Re: [gmx-users] GPU low performance

2015-02-18 Thread Carmen Di Giovanni

Dear all, the full log file is too big.
However, the middle part of it contains only information about
the energies at each step. The first part is already posted.

So I post the final part of it:
-
   Step   Time Lambda
   10002.00.0

Writing checkpoint, step 1000 at Mon Dec 29 13:16:22 2014


   Energies (kJ/mol)
   G96AngleProper Dih.  Improper Dih.  LJ-14 Coulomb-14
9.34206e+034.14342e+032.79172e+03   -1.75465e+027.99811e+04
LJ (SR)   Coulomb (SR)   Coul. recip.  PotentialKinetic En.
1.01135e+06   -7.13064e+062.01349e+04   -6.00306e+061.08201e+06
   Total Energy  Conserved En.Temperature Pressure (bar)   Constr. rmsd
   -4.92106e+06   -5.86747e+062.99426e+021.29480e+022.16280e-05

==  ###  ==
  A V E R A G E S  
==  ###  ==

Statistics over 1001 steps using 1001 frames

   Energies (kJ/mol)
   G96AngleProper Dih.  Improper Dih.  LJ-14 Coulomb-14
9.45818e+034.30665e+032.92407e+03   -1.75556e+028.02473e+04
LJ (SR)   Coulomb (SR)   Coul. recip.  PotentialKinetic En.
1.01284e+06   -7.13138e+062.01510e+04   -6.00163e+061.08407e+06
   Total Energy  Conserved En.Temperature Pressure (bar)   Constr. rmsd
   -4.91756e+06   -5.38519e+062.8e+021.37549e+020.0e+00

   Total Virial (kJ/mol)
3.42887e+051.63625e+011.23658e+02
1.67406e+013.42916e+05   -4.27834e+01
1.23997e+02   -4.29636e+013.42881e+05

   Pressure (bar)
1.37573e+027.50214e-02   -1.03916e-01
7.22048e-021.37623e+02   -1.66417e-02
   -1.06444e-01   -1.52990e-021.37453e+02


M E G A - F L O P S   A C C O U N T I N G

 NB=Group-cutoff nonbonded kernelsNxN=N-by-N cluster Verlet kernels
 RF=Reaction-Field  VdW=Van der Waals  QSTab=quadratic-spline table
 W3=SPC/TIP3p  W4=TIP4p (single or pairs)
 VF=Potential and force  V=Potential only  F=Force only

 Computing:   M-Number M-Flops  % Flops
-
 Pair Search distance check16343508.605344   147091577.448 0.0
 NxN Ewald Elec. + LJ [VF]   5072118956.506304 542716728346.17498.1
 1,4 nonbonded interactions   95860.009586 8627400.863 0.0
 Calc Weights  13039741.303974   469430686.943 0.1
 Spread Q Bspline 278181147.818112   556362295.636 0.1
 Gather F Bspline 278181147.818112  1669086886.909 0.3
 3D-FFT   880787450.909824  7046299607.279 1.3
 Solve PME   163837.90950410485626.208 0.0
 Shift-X 108664.934658  651989.608 0.0
 Angles   86090.00860914463121.446 0.0
 Propers  31380.003138 7186020.719 0.0
 Impropers28790.002879 5988320.599 0.0
 Virial 4347030.43470378246547.825 0.0
 Stop-CM4346580.86931643465808.693 0.0
 Calc-Ekin  4346580.869316   117357683.472 0.0
 Lincs59130.017739 3547801.064 0.0
 Lincs-Mat  1033080.309924 4132321.240 0.0
 Constraint-V   4406580.88131635252647.051 0.0
 Constraint-Vir 4347450.434745   104338810.434 0.0
 Settle 1429440.428832   461709258.513 0.1
-
 Total 553500452758.122   100.0
-


 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

On 1 MPI rank, each using 32 OpenMP threads

 Computing:  Num   Num  CallWall time Giga-Cycles
 Ranks Threads  Count  (s) total sum%
-
 Neighbor search1   32 2500016231.657 518475.694   1.1
 Launch GPU ops.1   32   10011825.689 151897.833   0.3
 Force  1   32   1001   49568.9594124152.027   8.4
 PME mesh   1   32   1001  194798.850   16207321.863  32.8
 Wait GPU local 1   32   1001  170272.438   14166717.115  28.7
 NB X/F buffer ops. 1   32   19750001   29175.6322427421.177   4.9
 Write traj.1   32  206351567.928 130452.056   0.3
 Update 1   32   1001   13312.8191107630.452   2.2
 

Re: [gmx-users] GPU low performance

2015-02-18 Thread Szilárd Páll
On Wed, Feb 18, 2015 at 5:57 PM, Carmen Di Giovanni cdigi...@unina.it wrote:
 Dear all, the full log file is too big.

Use pastebin or similar services.

 However, the middle part of it contains only information about the
 energies at each step. The first part is already posted.

OK, so first of all, this looks nothing like the alarmingly low
CPU-GPU overlap you posted about initially. Here, the GPU you are
using simply can't keep up with 2x8 Haswell-E cores. You can observe
this in the fraction of runtime the CPU spends waiting for the GPU,
shown in the performance table's "Wait GPU local" row,
which shows 28.7% idling.

At the moment, the non-bonded computation, which is fully done on the
GPU, can't be split between CPU and GPU, so your options are limited
and most of them will have only a minor effect:
i) indirectly shift work back to the CPU and/or improve the overlap efficiency
  a) try decreasing nstlist to 10-20-25
  b) run on fewer threads (as suggested before), which will likely
improve performance in some non-overlap code parts
  c) run with DD, e.g. -ntmpi 4 -ntomp 4/8 -gpu_id 0011 or -ntmpi 8
-gpu_id  (see the sketch after this list)

ii) Reduce the "Rest" time. Not sure what's causing it, but your
simulation spends a substantial amount (15.6%) of the runtime in
unaccounted-for, likely serial, calculation; i-b and i-c will likely
reduce this somewhat too;

iii) get more/faster GPUs
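
As a concrete sketch of option i-c for a single GPU (this assumes a
thread-MPI build of mdrun; with a real-MPI gmx_mpi like the one in the
posted log you would start the ranks with mpirun instead of -ntmpi),
the digit string given to -gpu_id assigns one GPU id per rank:

# 4 thread-MPI ranks x 8 OpenMP threads, all ranks sharing GPU 0
gmx mdrun -deffnm prod_20ns -ntmpi 4 -ntomp 8 -gpu_id 0000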

 So I post the final part of it:
snip

Re: [gmx-users] GPU low performance

2015-02-18 Thread Szilárd Páll
I've just noticed something serious. Why are you calculating energies
every step? Doing that makes the non-bonded force calculation on
average 25-30% slower than, e.g., calculating energies every 100th
step.
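
In mdp terms, a minimal sketch of the fix (these are standard mdp
options; the values are illustrative):

; evaluate and write energies only every 100 steps instead of every step
nstcalcenergy = 100
nstenergy     = 100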

You may be able to get another 5% or so from your GPU; could you post
the output of nvidia-smi -q -g 0?
--
Szilárd


On Wed, Feb 18, 2015 at 6:14 PM, Szilárd Páll pall.szil...@gmail.com wrote:
snip

[gmx-users] Need of mdp files

2015-02-18 Thread Antara mazumdar
Hi,

I am trying to simulate a peripheral membrane protein in a heterogeneous
lipid bilayer of DOPC and DOPG using the CHARMM36 force field in GROMACS. I
need mdp files for NPT, NVT, steep and MD to check whether the conditions I
am using to simulate my system are appropriate or not.

-- 

Regards,

Antara

--
J.R.F.(Project)
Systems Biology Group
CSIR - Institute of Genomics & Integrative Biology
South Campus, New Delhi-110020
M:+91-9717970040
--


Re: [gmx-users] simulation of heme along with h2o2

2015-02-18 Thread Sanchaita Rajkhowa
Dear Justin, thank you for the reply. However, I would like to know if
there is any server which can generate parameters for heme to be used with
OPLS-AA? We have already tried SwissParam, without success. Please
help.

On 18 February 2015 at 21:15, Justin Lemkul jalem...@vt.edu wrote:



 On 2/18/15 9:18 AM, Sanchaita Rajkhowa wrote:

 Dear all, I am trying to simulate a heme-containing protein in a high
 concentration of hydrogen peroxide. However, I do not know which
 force field to use. GROMOS96 has parameters for heme, but what about
 H2O2? Will generating .itp files for H2O2 from SwissParam work? Please
 help.


 SwissParam creates CHARMM-compatible parameters, so no, you can't mix
 those with GROMOS.  CHARMM supports heme, but you have to clean up a lot of
 auto-generated angles and dihedrals that don't belong.

 -Justin



Re: [gmx-users] GPU low performance

2015-02-18 Thread Carmen Di Giovanni

Justin,
the problem is evident for all calculations.
This is the log file of a recent run:



Log file opened on Mon Dec 22 16:28:00 2014
Host: localhost.localdomain  pid: 8378  rank ID: 0  number of ranks:  1
GROMACS:gmx mdrun, VERSION 5.0

GROMACS is written by:
Emile Apol Rossen Apostolov   Herman J.C. Berendsen Par Bjelkmar
Aldert van Buuren  Rudi van DrunenAnton Feenstra Sebastian Fritsch
Gerrit GroenhofChristoph Junghans Peter Kasson   Carsten Kutzner
Per LarssonJustin A. Lemkul   Magnus LundborgPieter Meulenhoff
Erik Marklund  Teemu Murtola  Szilard Pall   Sander Pronk
Roland Schulz  Alexey ShvetsovMichael Shirts Alfons Sijbers
Peter Tieleman Christian Wennberg Maarten Wolf
and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2014, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:  gmx mdrun, VERSION 5.0
Executable:   /opt/SW/gromacs-5.0/build/mpi-cuda/bin/gmx_mpi
Library dir:  /opt/SW/gromacs-5.0/share/top
Command line:
  gmx_mpi mdrun -deffnm prod_20ns

Gromacs version:VERSION 5.0
Precision:  single
Memory model:   64 bit
MPI library:MPI
OpenMP support: enabled
GPU support:enabled
invsqrt routine:gmx_software_invsqrt(x)
SIMD instructions:  AVX_256
FFT library:fftw-3.3.3-sse2
RDTSCP usage:   enabled
C++11 compilation:  disabled
TNG support:enabled
Tracing support:disabled
Built on:   Thu Jul 31 18:30:37 CEST 2014
Built by:   root@localhost.localdomain [CMAKE]
Build OS/arch:  Linux 2.6.32-431.el6.x86_64 x86_64
Build CPU vendor:   GenuineIntel
Build CPU brand:Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
Build CPU family:   6   Model: 62   Stepping: 4
Build CPU features: aes apic avx clfsh cmov cx8 cx16 f16c htt lahf_lm  
mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp  
sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic

C compiler: /usr/bin/cc GNU 4.4.7
C compiler flags:-mavx   -Wno-maybe-uninitialized -Wextra  
-Wno-missing-field-initializers -Wno-sign-compare -Wpointer-arith  
-Wall -Wno-unused -Wunused-value -Wunused-parameter
-fomit-frame-pointer -funroll-all-loops  -Wno-array-bounds  -O3 -DNDEBUG

C++ compiler:   /usr/bin/c++ GNU 4.4.7
C++ compiler flags:  -mavx   -Wextra -Wno-missing-field-initializers  
-Wpointer-arith -Wall -Wno-unused-function   -fomit-frame-pointer  
-funroll-all-loops  -Wno-array-bounds  -O3 -DNDEBUG

Boost version:  1.55.0 (internal)
CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda  
compiler driver;Copyright (c) 2005-2013 NVIDIA Corporation;Built on  
Thu_Mar_13_11:58:58_PDT_2014;Cuda compilation tools, release 6.0, V6.0.1
CUDA compiler  
flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_20,code=sm_21;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_35,code=compute_35;-use_fast_math;-Xcompiler;-fPIC ;  
;-mavx;-Wextra;-Wno-missing-field-initializers;-Wpointer-arith;-Wall;-Wno-unused-function;-fomit-frame-pointer;-funroll-all-loops;-Wno-array-bounds;-O3;-DNDEBUG

CUDA driver:6.50
CUDA runtime:   6.0



 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
  --- Thank You ---  


For optimal performance with a GPU nstlist (now 10) should be larger.
The optimum depends on your CPU and GPU resources.
You 

Re: [gmx-users] GPU low performance

2015-02-18 Thread Justin Lemkul



On 2/18/15 11:20 AM, Carmen Di Giovanni wrote:

Justin,
the problem is evident for all calculations.
This is the log file  of a recent run:



Again, this is *part* of a .log file, but at least we're starting to get 
somewhere.  Please realize that I'm not insisting on this for no reason; there 
is critical information in the last part of the .log file!  We know what we're 
looking for - please give it to us so we can help!


A few things:


GROMACS:gmx mdrun, VERSION 5.0


Upgrade to 5.0.4 to avoid some GPU-specific bugs.  See the release notes.

snip


Using 1 MPI process
Using 32 OpenMP threads



As was pointed out earlier, there is a lot of fine-tuning that can try to 
improve performance.  mdrun tries to guess what will be best for your hardware, 
but it's not always right.  See the linked page that was posted before, as well 
as the numerous discussions about improving GPU performance.


snip


System total charge: -0.012


Check your topology; something might be wrong here.

snip


There are: 434658 Atoms



Pretty big system, so you *might* be getting as good performance as you can 
expect from the hardware.


snip





There's important stuff you're leaving out here :)

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] simulation of heme along with h2o2

2015-02-18 Thread Sanchaita Rajkhowa
Dear all, I am trying to simulate a heme-containing protein in a high
concentration of hydrogen peroxide. However, I do not know which
force field to use. GROMOS96 has parameters for heme, but what about
H2O2? Will generating .itp files for H2O2 from SwissParam work? Please
help.

Thanks in advance.
Sanchaita.


Re: [gmx-users] free energy calculation: combination of position restraint +refcoord_scaling cause crash during NPT

2015-02-18 Thread Justin Lemkul



On 2/17/15 12:31 PM, Jin Zhang wrote:

Dear all,

We're doing free energy calculations and found that some of my lambda jobs
crash due to the combination of position restraints + refcoord_scaling COM. All
crashes happened at the NPT step.

By checking each energy term, we found the Position Rest. term is
surprisingly high, as high as 10^6 kJ/mol; the others look fine. Turning off
either the position restraints or refcoord_scaling avoids the crash. We
tried to see if there's a mismatch of either the COM or the coordinates,
and found no direct answer.
Any help would be appreciated!

In the tpr file
   refcoord-scaling = COM
   posres-com (3):
   posres-com[0]= 3.15034e-01
   posres-com[1]= 4.48110e-01
   posres-com[2]= 5.17730e-01

COM of protein-ligand calculated by g_traj -com -ox
3.024224.301694.84964



If the reference COM is defined as (0.315,0.448,0.518) and your actual 
coordinates are what you've shown above, there will inherently be a huge 
restraint potential as mdrun tries to bias the coordinates towards the defined 
reference.  I suspect something is wrong in your definition of your reference.
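
For a rough sense of scale (a sketch using the standard harmonic restraint
form V = 1/2 k |r - r0|^2 and an assumed force constant k = 1000 kJ mol^-1
nm^-2, the common choice in posre.itp files): the offset between the two COMs
above is roughly (2.7, 3.9, 4.3) nm, i.e. |r - r0|^2 of about 41 nm^2, so each
restrained atom displaced that far would contribute on the order of 2x10^4
kJ/mol, and a few hundred restrained atoms easily reach the 10^6 kJ/mol you
observe.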


-Justin


coordinate of 1st atom in tpr: posres_xA[0]={ 1.39783e-01,
-9.56922e-02,  1.22536e+00}
coordinate of 1st atom in gro: 1MOL C11   3.164   4.206
6.075

The same thing was also found in normal non-free-energy simulations with
the combination of position restraints and refcoord-scaling.
Later on, we found some other people also have the same problem when using
that combination to do free energy calculations.
http://comments.gmane.org/gmane.science.biology.gromacs.user/66177
It makes more sense to me to turn on refcoord_scaling while using position
restraints.
Again, any help to understand this would be appreciated!

Regards,
Jin





Re: [gmx-users] Double counting of h-bonds g_hbond:issue

2015-02-18 Thread Justin Lemkul



On 2/17/15 5:14 PM, Udaya Dahal wrote:

Thanks Justin for the prompt reply. Actually I calculated the H-bonds with
the same criteria (angle 30 and D-A distance 0.35 nm) in VMD, and the
number is a lot lower for both polymer-water and water-water.  I have 193
water molecules; with the same criteria VMD gives me 232
H-bonds while GROMACS gives 340.  The GROMACS value is better
in the sense that it matches the experimental value more closely, but I'm
somewhat unsure, since the visualization (in VMD) also showed very few bonds.
It shows only about 5 bonds from the VMD calculation (and visualization) for
the previously mentioned polymer-water H-bonds.



Probably VMD's plugin and g_hbond are doing something different.  Note the VMD 
documentation seems to indicate the angle criterion is defined differently (D-H-A 
angle whereas g_hbond uses H-D-A, yet they use a similar value as a cutoff). 
Define a very small test system, something that you can literally look at and 
know the answer for sure, and see if the two programs agree.
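
For such a test, a minimal g_hbond invocation making the cutoffs explicit
might look like this (file names hypothetical; -a and -r are the angle and
distance cutoffs, -hbn writes an index of the bonds found, for inspection):

# H-bonds with a 30 degree angle cutoff and 0.35 nm distance cutoff
g_hbond -f test.xtc -s test.tpr -a 30 -r 0.35 -num hbnum.xvg -hbn hbond.ndx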


-Justin


On Tue, Feb 17, 2015 at 7:58 AM, Justin Lemkul jalem...@vt.edu wrote:




On 2/16/15 7:39 PM, Udaya Dahal wrote:


Hi All,

When I used g_hbond, the hydrogen bonding I was getting was quite good,
but when I checked the index file I found double counting of the bonds. So
far, it seems to me that g_hbond is giving more than the real number of
hydrogen bonds present in the system. For example, in the following
OW-HW-oxygen list, we see there are two bonds between 109(OW) and 110(HW)
for two different polymer oxygens.

Can anyone explain this issue?

   76 77 12
  100101 47
  109110 40---1 (same hydrogen with two different oxygens)
  109110 54---2(same hydrogen with two different oxygens)



So does a visual inspection of this particular frame confirm that there
shouldn't actually be hydrogen bonds with these atoms?  This isn't
double-counting, it's just a bit unusual, but is possible within the
context of whatever criteria you have set for defining a hydrogen bond.

-Justin

   211212 54

  211212 68
  337338 68
  343344  5
  343344 19
  403404 61
  487488  5
  511512 26
  511512 40
  538539 33
  592593 12
  592593 26

hbnum.xvg shows (hydrogen bonds / pairs within 0.35 nm):
16   8


Regards,
Udaya





Re: [gmx-users] GPU low performance

2015-02-18 Thread Carmen Di Giovanni

I post the message of an MD run:


Force evaluation time GPU/CPU: 40.974 ms/24.437 ms = 1.677
For optimal performance this ratio should be close to 1!


NOTE: The GPU has 20% more load than the CPU. This imbalance causes
  performance loss, consider using a shorter cut-off and a finer PME grid.

How can I solve this problem?
Thank you in advance


--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Justin Lemkul jalem...@vt.edu:




snip









Re: [gmx-users] GPU low performance

2015-02-18 Thread Barnett, James W
What's your exact command?

Have you reviewed this page: 
http://www.gromacs.org/Documentation/Acceleration_and_parallelization

James Wes Barnett
Ph.D. Candidate
Chemical and Biomolecular Engineering

Tulane University
Boggs Center for Energy and Biotechnology, Room 341-B


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of Carmen Di 
Giovanni cdigi...@unina.it
Sent: Wednesday, February 18, 2015 10:06 AM
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] GPU low performance

snip


Re: [gmx-users] GPU low performance

2015-02-18 Thread Justin Lemkul



On 2/18/15 11:09 AM, Barnett, James W wrote:

What's your exact command?



A full .log file would be even better; it would tell us everything we need to 
know :)


-Justin


snip





Re: [gmx-users] GPU low performance

2015-02-18 Thread Carmen Di Giovanni


Dear James, this is the command:
gmx_mpi mdrun -s prod_30ns.tpr  -deffnm prod_30ns -gpu_id 0
where gpu_id 0 is an NVIDIA Tesla K20





Quoting Barnett, James W jbarn...@tulane.edu:


What's your exact command?

Have you reviewed this page:  
http://www.gromacs.org/Documentation/Acceleration_and_parallelization


James Wes Barnett
Ph.D. Candidate
Chemical and Biomolecular Engineering

Tulane University
Boggs Center for Energy and Biotechnology, Room 341-B


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se  
gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of  
Carmen Di Giovanni cdigi...@unina.it

Sent: Wednesday, February 18, 2015 10:06 AM
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] GPU low performance

I post the message of a md run :


Force evaluation time GPU/CPU: 40.974 ms/24.437 ms = 1.677
For optimal performance this ratio should be close to 1!


NOTE: The GPU has 20% more load than the CPU. This imbalance causes
   performance loss, consider using a shorter cut-off and a  
finer PME grid.


As can I solved this problem ?
Thank you in advance


--
Carmen Di Giovanni, PhD
Dept. of Pharmaceutical and Toxicological Chemistry
Drug Discovery Lab
University of Naples Federico II
Via D. Montesano, 49
80131 Naples
Tel.: ++39 081 678623
Fax: ++39 081 678100
Email: cdigi...@unina.it



Quoting Justin Lemkul jalem...@vt.edu:




On 2/18/15 10:30 AM, Carmen Di Giovanni wrote:

Daear all,
I'm working on a machine with an INVIDIA Teska K20.
After a minimization on a protein of 1925 atoms this is the mesage:

Force evaluation time GPU/CPU: 2.923 ms/116.774 ms = 0.025
For optimal performance this ratio should be close to 1!



Minimization is a poor indicator of performance.  Do a real MD run.



NOTE: The GPU has 25% less load than the CPU. This imbalance causes
performance loss.

Core t (s) Wall t (s) (%)
Time: 3289.010 205.891 1597.4
(steps/hour)
Performance: 8480.2
Finished mdrun on rank 0 Wed Feb 18 15:50:06 2015


Cai I improve the performance?
At the moment in the forum I didn't full informations to solve  
this problem.

In attachment there is the log. file



The list does not accept attachments.  If you wish to share a file,
upload it to a file-sharing service and provide a URL.  The full
.log is quite important for understanding your hardware,
optimizations, and seeing full details of the performance breakdown.
 But again, base your assessment on MD, not EM.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
or send a mail to gmx-users-requ...@gromacs.org.






--
Gromacs Users mailing list

* Please search the archive at  
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before  
posting!


* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users  
or send a mail to gmx-users-requ...@gromacs.org.

--
Gromacs Users mailing list

* Please search the archive at  
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before  
posting!


* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users  
or send a mail to gmx-users-requ...@gromacs.org.









[gmx-users] GPU low performance

2015-02-18 Thread Carmen Di Giovanni
Dear all,
I'm working on a machine with an NVIDIA Tesla K20.
After a minimization on a protein of 1925 atoms this is the message:

Force evaluation time GPU/CPU: 2.923 ms/116.774 ms = 0.025
For optimal performance this ratio should be close to 1!


NOTE: The GPU has 25% less load than the CPU. This imbalance causes
performance loss.

               Core t (s)   Wall t (s)        (%)
       Time:     3289.010      205.891     1597.4
                 (steps/hour)
Performance:       8480.2
Finished mdrun on rank 0 Wed Feb 18 15:50:06 2015


Can I improve the performance?
So far I haven't found enough information on the forum to solve this problem.
The log file is attached.

thank you in advance
Carmen Di Giovanni




Re: [gmx-users] simulation of heme along with h2o2

2015-02-18 Thread Justin Lemkul



On 2/18/15 9:18 AM, Sanchaita Rajkhowa wrote:

Dear all, I am trying to simulate a heme-containing protein in a high
concentration of hydrogen peroxide. However, I do not know which
force field to use. GROMOS96 has parameters for heme, but what about
H2O2? Will generating .itp files for H2O2 from SwissParam work? Please
help.



SwissParam creates CHARMM-compatible parameters, so no, you can't mix those with 
GROMOS.  CHARMM supports heme, but you have to clean up a lot of auto-generated 
angles and dihedrals that don't belong.


-Justin



Re: [gmx-users] GPU low performance

2015-02-18 Thread Justin Lemkul



On 2/18/15 10:30 AM, Carmen Di Giovanni wrote:

Dear all,
I'm working on a machine with an NVIDIA Tesla K20.
After a minimization on a protein of 1925 atoms this is the message:

Force evaluation time GPU/CPU: 2.923 ms/116.774 ms = 0.025
For optimal performance this ratio should be close to 1!



Minimization is a poor indicator of performance.  Do a real MD run.



NOTE: The GPU has 25% less load than the CPU. This imbalance causes
performance loss.

               Core t (s)   Wall t (s)        (%)
       Time:     3289.010      205.891     1597.4
                 (steps/hour)
Performance:       8480.2
Finished mdrun on rank 0 Wed Feb 18 15:50:06 2015


Can I improve the performance?
So far I haven't found enough information on the forum to solve this problem.
The log file is attached.



The list does not accept attachments.  If you wish to share a file, upload it to 
a file-sharing service and provide a URL.  The full .log is quite important for 
understanding your hardware, optimizations, and seeing full details of the 
performance breakdown.  But again, base your assessment on MD, not EM.


-Justin



Re: [gmx-users] GPU low performance

2015-02-18 Thread Szilárd Páll
We need a *full* log file, not parts of it!

You can try running with -ntomp 16 -pin on - it may be a bit faster
to not use HyperThreading.
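
(For reference, applied to the run above that would be, e.g.:

# one rank, 16 OpenMP threads pinned to physical cores
gmx_mpi mdrun -deffnm prod_20ns -ntomp 16 -pin on

-ntomp and -pin are standard mdrun options; the file name is taken from
the log posted earlier.)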
--
Szilárd


On Wed, Feb 18, 2015 at 5:20 PM, Carmen Di Giovanni cdigi...@unina.it wrote:
snip

Re: [gmx-users] free energy calculation: combination of position restraint +refcoord_scaling cause crash during NPT (Justin Lemkul)

2015-02-18 Thread Jin Zhang
Dear Justin,

We thought about this, and checked the tpr files of the non-crashed lambdas
as well as the pre-equilibrated NPT step; all showed a mismatch between
posre-com[0], posre-com[1], posre-com[2] and the real COM. So I suspect this
posre-com is not the real center of mass for the NPT simulation.
Since the reference coordinates are read from the -c .gro file when
performing grompp, they should not be wrong, unless they are read from elsewhere.
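
(For anyone checking the same thing: a quick way to inspect the reference
COM stored in a tpr is something like

gmx dump -s topol.tpr | grep -A 3 posres-com

where gmx dump is the GROMACS 5.x name of the old gmxdump tool and the
file name is hypothetical.)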

Best,
Jin


 Date: Wed, 18 Feb 2015 07:42:30 -0500
 From: Justin Lemkul jalem...@vt.edu
 To: gmx-us...@gromacs.org
 Subject: Re: [gmx-users] free energy calculation: combination of
 position restraint +refcoord_scaling cause crash during NPT
snip