[gmx-users] Timesteps don't match

2014-08-29 Thread Oliver Schillinger

Hi,
when I minimize a system for 100 steps with nstxout set to 1, I expect to
get a trajectory with 100 frames.
However, gmxcheck reports only 85 frames and prints several weird messages
saying that timesteps don't match.

Why is that?
Tested with gromacs 4.6.5 and 5.0, serial run as well as a parallel run 
on 12 cores and 2 GPUs, always the same result.
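
For reference, a minimal sketch of the relevant .mdp settings (nsteps and nstxout are as described above; the integrator and emtol values are illustrative assumptions, not taken from the actual run):

integrator  = steep
emtol       = 10.0
nsteps      = 100
nstxout     = 1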

Cheers,
Oliver

--
Oliver Schillinger
PhD student

ICS-6 - Structural Biochemistry
Building 5.8v, Room 3010
Phone:  +49 2461-61-9532
Mobile: +49 172 53 27 914

Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt


Re: [gmx-users] Martini force field for inhibitors

2014-08-29 Thread XAvier Periole

There are a few review articles where the strategy and guidelines are described.
You can find this material on the cgmartini.nl website.

 On Aug 29, 2014, at 2:03, Sridhar Kumar Kannam srisri...@gmail.com wrote:
 
 Dear Gromacs users,
 
 I have very recently started working with Gromacs and the Martini force field.
 I am able to generate the coarse-grained model for HIV Protease (1hvr.pdb).
 I want to simulate the protein along with its inhibitor. Are there any
 guidelines for building (coarse-graining) its inhibitor?
 
 Sorry for the naive question ...
 
 Thank you.
 
 
 
 
 
 
 -- 
 Cheers !!!
 Sridhar  Kumar Kannam :)


Re: [gmx-users] Timesteps don't match

2014-08-29 Thread Mark Abraham
On Fri, Aug 29, 2014 at 10:05 AM, Oliver Schillinger 
o.schillin...@fz-juelich.de wrote:

 Hi,
 when I minimize a system for 100 steps with nstxout set to 1 I expect to
 get a trajectory with 100 frames.


nsteps is the maximum number of steps. If mdrun decides it can't make
further progress, it says so in the log file.
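
For example, a quick way to see what the minimizer reported (a sketch, assuming a steepest-descent run whose log file is named em.log):

grep -A 3 "Steepest Descents" em.log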


 However, gmxcheck reports 85 frames only and prints several weird messages
 that several timesteps don't match.
 Why is that?


Dunno, but since there's no time in a minimization, it's not important.

Mark

Tested with gromacs 4.6.5 and 5.0, serial run as well as a parallel run on
 12 cores and 2 GPUs, always the same result.
 Cheers,
 Oliver



Re: [gmx-users] GPU Acceleration in case of Implicit Solvent Simulations

2014-08-29 Thread Mark Abraham
On Thu, Aug 28, 2014 at 11:59 PM, Siva Dasetty sdas...@g.clemson.edu
wrote:

 Dear all,

 Can we use periodic boundary conditions in case of implicit solvent
 simulations?


Yes (but I only recommend doing so in GROMACS 4.5.x)


 If so, why?


In general, why not? It's a boundary condition. What's special about
infinite boundary conditions?


 Also, can the implicit solvent model in any version of Gromacs (up to 5.0)
 be run on more than 2 processors, or can it at least use the GPU
 acceleration provided by Gromacs?


No - the implicit solvent code has been unmaintained in practice, starting
with 4.6. Last I looked, only infinite boundary conditions and a single
thread worked. You'll probably get better mileage on GPUs with AMBER, which
actively targets this type of simulation.
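
For anyone who still wants to try it in 4.5.x, a minimal sketch of the relevant .mdp settings (the values here are illustrative assumptions, not a validated setup):

integrator       = sd
pbc              = no
implicit_solvent = GBSA
gb_algorithm     = OBC
rlist            = 1.0
rgbradii         = 1.0   ; keep equal to rlist
rcoulomb         = 1.0
rvdw             = 1.0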

I have tried using pbc = no and the group cut-off scheme in GPU-based Gromacs
 5.0, but there is a warning saying that the GPU is disabled because it
 works only with the Verlet cut-off scheme, and the Verlet cut-off scheme
 requires pbc = xyz or xy.


Correct. In 4.6 and 5.0, the required machinery for using GPUs and doing
implicit solvation are almost completely mismatched.

Mark


 I also tried Gromacs version 4.5.5 with OpenMM, following the installation
 instructions at this link: (

 http://www.gromacs.org/Documentation/Installation_Instructions_4.5/GROMACS-OpenMM
 )
 but here again there is no luck, as we don't have compatible hardware.

 Thanks,
 Siva


Re: [gmx-users] Timesteps don't match

2014-08-29 Thread Oliver Schillinger

Ok yes, I got that.
I should have made it clearer.
The minimization actually ran for 100 steps and did not converge.
But that is a different issue.
My problem is that it ran for 100 steps according to the log file, but 
there are only 85 frames in the trajectory file even though nstxout was 
set to 1.

What happened to the rest?

On 08/29/2014 11:10 AM, Mark Abraham wrote:

On Fri, Aug 29, 2014 at 10:05 AM, Oliver Schillinger 
o.schillin...@fz-juelich.de wrote:


Hi,
when I minimize a system for 100 steps with nstxout set to 1 I expect to
get a trajectory with 100 frames.



nsteps is the maximum number of steps. If mdrun decides it can't make
further progress, it says so in the log file.



However, gmxcheck reports 85 frames only and prints several weird messages
that several timesteps don't match.
Why is that?



Dunno, but since there's no time in a minimization, it's not important.

Mark

Tested with gromacs 4.6.5 and 5.0, serial run as well as a parallel run on

12 cores and 2 GPUs, always the same result.
Cheers,
Oliver




--
Oliver Schillinger
PhD student

ICS-6 - Structural Biochemistry
Building 5.8v, Room 3010
Phone:  +49 2461-61-9532
Mobile: +49 172 53 27 914

Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt


[gmx-users] X-ray Diffraction (XRD)

2014-08-29 Thread Cyrus Djahedi
Hi everyone. Does anyone know of any function in GROMACS or another MD program 
for generating X-ray diffraction patterns?
/ Cyrus


[gmx-users] GPU and MPI

2014-08-29 Thread Da-Wei Li
Dear users,

I recently tried to run Gromacs on two nodes, each of which has 12 cores and 2
GPUs. The nodes are connected with InfiniBand and scaling is pretty good
when no GPU is involved.

My command is like this:

mpiexec -npernode 2 -np 4 mdrun_mpi -ntomp 6


However, it looks like Gromacs only detected 2 GPUs on node 0, then skipped
node 1. Part of the output looks like:




Using 4 MPI processes

Using 6 OpenMP threads per MPI process

2 GPUs detected on host n0316.ten:

  #0: NVIDIA Tesla M2070, compute cap.: 2.0, ECC: yes, stat: compatible

  #1: NVIDIA Tesla M2070, compute cap.: 2.0, ECC: yes, stat: compatible

2 GPUs user-selected for this run.

Mapping of GPUs to the 2 PP ranks in this node: #0, #1




The performance is only about 40% of the run where I use only 1 node (12
cores + 2 GPUs).


Did I miss something?


thanks.


dawei


Re: [gmx-users] GPU and MPI

2014-08-29 Thread Carsten Kutzner
Hi Dawei,

the mapping of GPUs to PP ranks is printed for the master node only,
but if this node reports two GPUs, then all other PP ranks will also
use two GPUs (or an error is reported).

The scaling will also depend on your system size; if it is too small,
you might be better off using a single node.
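
If you want to be explicit about the mapping, you can also pass the device ids yourself (a sketch based on the command in your post; -gpu_id 01 assigns devices 0 and 1 to the two PP ranks on each node):

mpiexec -npernode 2 -np 4 mdrun_mpi -ntomp 6 -gpu_id 01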

Carsten


On 29 Aug 2014, at 16:24, Da-Wei Li lida...@gmail.com wrote:

 Dear users,
 
 I recently try to run Gromacs on two nodes, each of them has 12 cores and 2
 GPUs. The nodes are connected with infiniband and scaling is pretty good
 when no GPU is evolved.
 
 My command is like this:
 
 mpiexec  -npernode 2 -np 4 mdrun_mpi -ntomp 6
 
 
 However, it looks like Gromacs only detected 2 GPUs on node 0, then skip
 node 1. Part of the output looks like:
 
 
 
 
 Using 4 MPI processes
 
 Using 6 OpenMP threads per MPI process
 
 2 GPUs detected on host n0316.ten:
 
  #0: NVIDIA Tesla M2070, compute cap.: 2.0, ECC: yes, stat: compatible
 
  #1: NVIDIA Tesla M2070, compute cap.: 2.0, ECC: yes, stat: compatible
 
 2 GPUs user-selected for this run.
 
 Mapping of GPUs to the 2 PP ranks in this node: #0, #1
 
 
 
 
 The performance is about only 40% of the run, where I use only 1 node (12
 cores+2GPUs).
 
 
 Does I miss something?
 
 
 thanks.
 
 
 dawei


[gmx-users] Pull_geometry=cylinder

2014-08-29 Thread Alexandra Antipina
Hello,
I want to calculate the PMF of DNA and a lipid bilayer. The DNA is 4.5 nm from the center of the bilayer at t=0. I use the pull code to bring the DNA closer to the bilayer. This is my mdp file:

pull             = umbrella
pull_geometry    = cylinder
pull_dim         = N N Y
pull-r1          = 1.2
pull-r0          = 1.7
pull_start       = yes
pull_nstxout     = 500
pull_nstfout     = 500
pull_ngroups     = 1
pull_group0      = POPC
pull_group1      = DNA
pull_pbcatom1    = 178
pull_vec1        = 0.0 0.0 1.0    ; (z-coordinate of DNA - z-coordinate of POPC bilayer)
pull_rate1       = 0.005
pull-k1          = 500

I get the following output:

Pull group  natoms  pbc atom  distance at start  reference at t=0
         0   17152      9334
         1     758       178              1.249             1.249

I don't understand why the distance at start = 1.249 and the reference = 1.249. When I use pull_geometry = distance, the distance at start is 4.5, which is correct. I want to use geometry = cylinder, but I think it works incorrectly. Also, the pbc atom of group0 remains the same in both cases. I don't understand this. Can anyone help me?
Thanks!
Alex


Re: [gmx-users] GPU and MPI

2014-08-29 Thread Carsten Kutzner
Hi Dawei,

On 29 Aug 2014, at 16:52, Da-Wei Li lida...@gmail.com wrote:

 Dear Carsten
 
 Thanks for the clarification. Here are my benchmarks for a small protein
 system (18k atoms).
 
 (1) 1 node (12 cores/node, no GPU):   50 ns/day
 (2) 2 nodes (12 cores/node, no GPU): 80 ns/day
 (3) 1 node (12 cores/node, 2 K40 GPUs/node): 100 ns/day
 (4) 2 nodes (12 cores/node, 2 K40 GPUs/node): 40 ns/day
 
 
 I sent out this question because benchmark 4 above is very suspicious.
Indeed, if you get 80 ns/day without GPUs, then it should not be less
with GPUs. For how many time steps do you run each of the
benchmarks? Do you use the -resethway command line switch to mdrun
to disregard the first half of the run (where initialization and
balancing is done, you don’t want to count that in a benchmark)?
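
For example, a benchmark sketch along the lines of your original command (the counter reset and the -nsteps override are the only additions; the step count is an arbitrary assumption):

mpiexec -npernode 2 -np 4 mdrun_mpi -ntomp 6 -resethway -nsteps 10000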

Carsten

 But I agree the size of my system may play a role.
 
 best,
 
 dawei
 
 
 On Fri, Aug 29, 2014 at 10:36 AM, Carsten Kutzner ckut...@gwdg.de wrote:
 
 Hi Dawei,
 
 the mapping of GPUs to PP ranks is printed for the Master node only,
 but if this node reports two GPUs, then all other PP ranks will also
 use two GPUs (or an error is reported).
 
 The scaling will depend also on your system size, if this is too small,
 then you might be better off by using a single node.
 
 Carsten
 
 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



[gmx-users] Gromacs 4.6.7 released

2014-08-29 Thread Mark Abraham
Hi GROMACS users,


 GROMACS 4.6.7 is officially released. It contains a few bug fixes,
particularly with regard to correctness of hybrid MPI/OpenMP PME
simulations at moderate-to-high parallelism. We encourage all users to
upgrade their installations from earlier 4.6.x releases, particularly from
4.6.6 wherein some of the problems now fixed were introduced. All the
applicable content updated in 4.6.7 will also be found in the 5.0.1
release, out shortly.


 This is (again) the last planned release for the 4.6 series. Our resources
only permit us to support one stable branch, which is now that of 5.0. We
might revise that and make a new 4.6.8 release if new evidence of serious
problems is found.


 You can find the code, manual, release notes, installation instructions
and test suite at the links below. Note that the tests and manual have not
changed since 4.6.5.


 ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.7.tar.gz

ftp://ftp.gromacs.org/pub/manual/manual-4.6.7.pdf

http://www.gromacs.org/About_Gromacs/Release_Notes/Versions_4.6.x#Release_notes_for_4.6.7

http://www.gromacs.org/Documentation/Installation_Instructions_4.6

http://gerrit.gromacs.org/download/regressiontests-4.6.7.tar.gz
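
 For convenience, the usual quick build sketch from the installation instructions (check the linked page for the options that matter on your machine; the FFTW flag is optional):

tar xfz gromacs-4.6.7.tar.gz
cd gromacs-4.6.7
mkdir build
cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON
make
sudo make install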


 Happy simulating!


 Mark Abraham


 GROMACS development manager


[gmx-users] PMF curve in umbrella sampling

2014-08-29 Thread Mana Ib
Dear Users,

I am doing umbrella sampling for a protein-ligand complex. I first did an
SMD run for 500 ps and generated 500 configurations; the COM distances for
these configurations start at 1.25 nm for conf0 and go up to 6.4 nm for
conf500. Hence I used a 0.05 nm spacing to select configurations for the
umbrella sampling windows.
I have currently completed running 2 windows; each window was subjected to
a 5 ns mdrun.

This is my mdp file parameters

title   = Umbrella pulling simulation
define  = -DPOSRES
; Run parameters
integrator  = md
dt  = 0.002
tinit   = 0
nsteps  = 250   ; 5 ns
nstcomm = 10
; Output parameters
nstxout = 5 ; every 100 ps
nstvout = 5
nstfout = 5000
nstxtcout   = 5000  ; every 10 ps
nstenergy   = 5000
; Bond parameters
constraint_algorithm= lincs
constraints = all-bonds
continuation= yes
; Single-range cutoff scheme
nstlist = 5
ns_type = grid
rlist   = 1.4
rcoulomb= 1.4
rvdw= 1.4
; PME electrostatics parameters
coulombtype = PME
fourierspacing  = 0.12
fourier_nx  = 0
fourier_ny  = 0
fourier_nz  = 0
pme_order   = 4
ewald_rtol  = 1e-5
optimize_fft= yes
; Berendsen temperature coupling is on in two groups
Tcoupl  = Nose-Hoover
tc_grps = Protein   Non-Protein
tau_t   = 0.5   0.5
ref_t   = 310   310
; Pressure coupling is on
Pcoupl  = Parrinello-Rahman
pcoupltype  = isotropic
tau_p   = 1.0
compressibility = 4.5e-5
ref_p   = 1.0
refcoord_scaling = com
; Generate velocities is off
gen_vel = no
; Periodic boundary conditions are on in all directions
pbc = xyz
; Long-range dispersion correction
DispCorr= EnerPres
; Pull code
pull= umbrella
pull_geometry   = distance
pull_dim= Y N N
pull_start  = yes
pull_ngroups= 1
pull_group0 = Chain_A
pull_group1 = NL
pull_init1  = 0
pull_rate1  = 0.0
pull_k1 = 500  ; kJ mol^-1 nm^-2
pull_nstxout= 1000  ; every 2 ps
pull_nstfout= 1000  ; every 2 ps


When I use g_wham to plot the histogram and PMF curve for these 2 windows,
my plots don't seem to follow the general trend shown in the
tutorials (figure links below). Is this discrepancy because I have used
only files from 2 windows to plot these, or is it due to some other
error in the protocol?

http://i46.photobucket.com/albums/f121/fullmeasure29/histo_2win_zpsff19fe60.png

http://i46.photobucket.com/albums/f121/fullmeasure29/pmf_2win_zps04adb8f7.png


Re: [gmx-users] PMF curve in umbrella sampling

2014-08-29 Thread Justin Lemkul



On 8/29/14, 2:08 PM, Mana Ib wrote:

Dear Users,

I am doing an umbrella sampling for a protein-ligand complex, wherein I
first did an SMD run for 500 ps and generated 500 configurations, the COM
distances for these configurations start at 1.25nm for conf0 and so on till
6.4nm for conf500. Hence I used a 0.05nm spacing to select configurations
for the umbrella sampling windows.
I have currently completed running 2 windows, each window was subjected to
a 5ns  mdrun.

When I use g_wham to plot the histogram and PMF curve for these 2 windows,
my plots don't seem to follow the general trend as given in
tutorials..(figure links below). Is this discrepancy because I have used
only files from 2 windows to plot these? Or are these due to some other
errors in the protocol?

http://i46.photobucket.com/albums/f121/fullmeasure29/histo_2win_zpsff19fe60.png

http://i46.photobucket.com/albums/f121/fullmeasure29/pmf_2win_zps04adb8f7.png



The two windows yield effectively overlapping distributions and a meaningless 
PMF. Space the windows further apart (0.05 nm is very tight) and run more 
windows for a real PMF as a function of a meaningful reaction coordinate.
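
Once you have a full set of windows, the g_wham invocation is the usual one (a sketch; the list-file names are assumptions, with each .dat file naming one window per line):

g_wham -it tpr_files.dat -if pullf_files.dat -o profile.xvg -hist histo.xvg -unit kJ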


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] g_select syntax

2014-08-29 Thread Teemu Murtola
Hi,

On Fri, Aug 29, 2014 at 4:19 AM, Bin Liu fdusuperstr...@gmail.com wrote:

 I am recently puzzled by the syntax and behaviour of g_select. I want to
 obtain the residue index list of LIPID whose center of mass is within 1.0
 nm of the surface of protein. In my case, each LIPID molecule consists of
 only one residue. I wrote the selection.dat as follows, and set -selrpos to
 atom and -seltype to res_com. Here I think Protein is the reference
 group, so -selrpos should be atom because I care about the distance to its
 surface. LIPID is the analysis group and I care about their individual
 center of mass. So -seltype should be res_com.

 selection.dat:
 resname LIPID and within 1.0 of group Protein;

 g_select -sf selection.dat -f traj.trr -s traj.tpr -n system.ndx  -oi
 index.dat -seltype res_com -selrpos atom


Yes, this should give you what you expect. It selects all LIPID atoms that
are within 1.0 nm from the protein, and then groups them by residue for the
-oi output.

However I tried another selection. This time instead of retrieving the
 residue index, I tried to retrieve the index of a key atom of the LIPID
 molecule.
 selection.dat:
 rdist = res_com within 1.0 of group Protein;
 group_C15 = (resname LIPID) and (rdist) and (name C15);
 group_C15;

 g_select -sf selection.dat -f traj.trr -s traj.tpr -n system.ndx  -oi
 index.dat -seltype atom -selrpos atom


res_com within has a different meaning from using -seltype res_com: your
second selection selects all C15 atoms that are in LIPID residues, and
where the center-of-mass of the whole residue is within 1 nm from the
protein (the last part is the res_com within expression).

-seltype res_com in the first example is equivalent to writing this, where
the res_com is in a very different location:
res_com of (resname LIPID and within 1.0 of group Protein)

Hopefully this helps understanding where the difference between the
selections comes from.

I thought these two selections should give the same number of indices per
 frame, as the second selection merely retrieves the atom indices of the
 corresponding key atoms in the LIPID molecules selected by the first
 selection. However, the first selection gives significantly more indices
 than the second selection does. I guess my understanding of the g_select syntax
 might be flawed. Please point out my misunderstanding. Thank you very much.


If you want to select the key atoms that match those from your first
selection, you need to write something more complex:

name C15 and same residue as (resname LIPID and within 1.0 of group
Protein)

The last selection should be self-explanatory.
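
Putting the pieces together, an end-to-end sketch that reuses the file names from your post (the selection goes on one line in selection.dat, and the command is the same one you used, with -seltype atom):

selection.dat:
name C15 and same residue as (resname LIPID and within 1.0 of group Protein);

g_select -sf selection.dat -f traj.trr -s traj.tpr -n system.ndx -oi index.dat -seltype atom -selrpos atom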

Hope this helps,
Teemu