Re: [gmx-users] LJ-SR and LJ-14

2016-08-15 Thread Justin Lemkul



On 8/15/16 6:56 PM, Stella Nickerson wrote:

If you want the "total" Lennard-Jones potential between two groups, would
you simply add LJ-SR + LJ-14? If so, what is the utility of calculating
both potentials separately?



LJ-SR are normal nonbonded interactions within the short-range cutoff.  LJ-14 
are, by definition, intramolecular interactions occurring between 1-4 pairs. 
Whether or not such a decomposition has any physical meaning depends on whether 
or not the force field parametrization assigns them any meaning.
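
If you do want both contributions between two groups, a minimal sketch of the usual route (the group names, file names, and the need to do this as a CPU rerun are illustrative assumptions; check the behaviour of your GROMACS version):

; in the .mdp used for a rerun
energygrps = Protein Ligand

$ gmx grompp -f rerun.mdp -c conf.gro -p topol.top -n index.ndx -o rerun.tpr
$ gmx mdrun -s rerun.tpr -rerun traj.xtc -deffnm rerun
$ gmx energy -f rerun.edr -o lj.xvg

Then select LJ-SR:Protein-Ligand and LJ-14:Protein-Ligand and sum them; the 1-4 term is only nonzero when the two groups belong to the same molecule.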


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] LJ-SR and LJ-14

2016-08-15 Thread Stella Nickerson
If you want the "total" Lennard-Jones potential between two groups, would
you simply add LJ-SR + LJ-14? If so, what is the utility of calculating
both potentials separately?


Re: [gmx-users] Scc order parameter

2016-08-15 Thread Nikhil Maroli
There are some Python scripts that can do that; Google for them.


Re: [gmx-users] Losing partly the available CPU time

2016-08-15 Thread Szilárd Páll
Hi,

Although I don't know exactly what system you are simulating,
one thing is clear: you're pushing the parallelization limit with
- 200 atoms/core
- likely "concentrated" free energy interactions.
The former alone will make the run very sensitive to load
imbalance, and the latter makes the imbalance even worse, as the very
expensive free energy interactions likely all fall in a few domains
(unless your 8 perturbed atoms are scattered).

There is not much you can do except what I previously suggested
(try more OpenMP threads, e.g. 2-4, or simply use fewer cores). If you
have the option, hardware with fewer but faster cores
(and perhaps a GPU) will also be much more suitable than this 128-core
AMD node.
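
For example, something along these lines (a sketch only; the exact launcher and rank/thread split depend on your MPI setup and node topology, and the file names are illustrative):

$ mpirun -np 32 gmx_mpi mdrun -ntomp 4 -deffnm md    # 32 ranks x 4 OpenMP threads instead of 128 x 1
$ gmx mdrun -ntmpi 32 -ntomp 4 -deffnm md            # same idea with the built-in thread-MPI on one node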

Cheers,
--
Szilárd


On Mon, Aug 15, 2016 at 4:01 PM, Alexander Alexander
 wrote:
> Hi Szilárd,
>
> Thanks for your response; please find below a link to the requested full
> .log files.
>
> https://drive.google.com/file/d/0B_CbyhnbKqQDc2FaeWxITWxqdDg/view?usp=sharing
>
> Thanks,
> Cheers,
> Alex
>
> On Mon, Aug 15, 2016 at 2:52 PM, Szilárd Páll 
> wrote:
>
>> Hi,
>>
>> Please post full logs; what you cut out of the file will often omit
>> information needed to diagnose your issues.
>>
>> At first sight it seems that you simply have an imbalanced system. I am not
>> sure about the source of the imbalance, and without knowing more about
>> your system/setup and how it is decomposed, what I can suggest is to
>> try other decomposition schemes or simply less decomposition (use more
>> threads or fewer cores).
>>
>> Additionally, you also have a pretty bad PP-PME load balance, but
>> that is likely to improve once your PP performance improves.
>>
>> Cheers,
>> --
>> Szilárd
>>
>>
>> On Sun, Aug 14, 2016 at 3:23 PM, Alexander Alexander
>>  wrote:
>> > Dear GROMACS users,
>> >
>> > My free energy calculation works well; however, I am losing around 56.5 %
>> > of the available CPU time, as stated in my log file, which is really
>> > considerable. The problem is due to the load imbalance and domain
>> > decomposition, but I have no idea how to improve it. Below is the very end of
>> > my log file, and I would appreciate it if you could help me avoid this.
>> >
>> >
>> >D O M A I N   D E C O M P O S I T I O N   S T A T I S T I C S
>> >
>> >  av. #atoms communicated per step for force:  2 x 115357.4
>> >  av. #atoms communicated per step for LINCS:  2 x 2389.1
>> >
>> >  Average load imbalance: 285.9 %
>> >  Part of the total run time spent waiting due to load imbalance: 56.5 %
>> >  Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 2 % Y 2 % Z 2 %
>> >  Average PME mesh/force load: 0.384
>> >  Part of the total run time spent waiting due to PP/PME imbalance: 14.5 %
>> >
>> > NOTE: 56.5 % of the available CPU time was lost due to load imbalance
>> >   in the domain decomposition.
>> >
>> > NOTE: 14.5 % performance was lost because the PME ranks
>> >   had less work to do than the PP ranks.
>> >   You might want to decrease the number of PME ranks
>> >   or decrease the cut-off and the grid spacing.
>> >
>> >
>> >  R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G
>> >
>> > On 96 MPI ranks doing PP, and
>> > on 32 MPI ranks doing PME
>> >
>> >  Computing:  Num   Num  CallWall time Giga-Cycles
>> >  Ranks Threads  Count  (s) total sum%
>> > -
>> >  Domain decomp.961 175000 242.339  53508.472   0.5
>> >  DD comm. load 961 174903   9.076   2003.907   0.0
>> >  DD comm. bounds   961 174901  27.054   5973.491   0.1
>> >  Send X to PME 961701  44.342   9790.652   0.1
>> >  Neighbor search   961 175001 251.994  55640.264   0.6
>> >  Comm. coord.  96168250001521.009 335838.747   3.4
>> >  Force 9617017001.9901546039.264  15.5
>> >  Wait + Comm. F961701   10761.2962376093.759  23.8
>> >  PME mesh *321701   11796.344 868210.788   8.7
>> >  PME wait for PP *  22135.7521629191.096  16.3
>> >  Wait + Recv. PME F961701 393.117  86800.265   0.9
>> >  NB X/F buffer ops.961   20650001 132.713  29302.991   0.3
>> >  COM pull force961701 165.613  36567.368   0.4
>> >  Write traj.   961   7037  55.020  12148.457   0.1
>> >  Update961   1402 140.972  31126.607   0.3
>> >  Constraints   961   1402   12871.2362841968.551  28.4
>> >  Comm. energies961 350001 261.976  57844.219   0.6

[gmx-users] Fwd: Restraints/constraints for keeping a helix as helical

2016-08-15 Thread Mohsen Ramezanpour
Dear All,

I was wondering if you have any comments or suggestions on the post below.
Any comment is appreciated in advance.

Best,
Mohsen


-- Forwarded message --
From: Mohsen Ramezanpour 
Date: Wed, Aug 10, 2016 at 7:17 PM
Subject: Restraints/constraints for keeping a helix as helical
To: Discussion list for GROMACS users 


Dear Gromacs users,

In my simulation, I am interested in keeping a specific part of an alpha-helix
(e.g. residues 10-15) helical throughout the whole production run. This has
been discussed a few times on the mailing list, but I could not find answers to
my questions. Reading through the GROMACS manual and the mailing list, I found
three possible ways, as follows:

After making an index file (which has a group made of the backbone atoms of
residues 10-15), I will use genrestr to create:

1) a distance restraint file, using the option -disre and replacing the type'
column from 1 to 2 manually.

OR

2) a constraint file, using the option -constr, with type 2,

leaving the other genrestr flags at their default values.
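
In practice the two variants would be generated with something like the following (a sketch; file names are illustrative, and in 4.6 the command is genrestr rather than gmx genrestr):

$ gmx genrestr -f conf.gro -n index.ndx -disre  -o disre_helix.itp
$ gmx genrestr -f conf.gro -n index.ndx -constr -o constr_helix.itp

The resulting .itp has to be #included inside the corresponding [ moleculetype ], and for the distance restraints disre = simple (plus a sensible disre-fc) is needed in the .mdp.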

For the distance restraints, I have replaced type' = 1 with type' = 2 based on
the following quote from page 88 of the 4.6.7 manual:

"...The type’ column will usually be 1, but can be set to 2 to obtain a
distance restraint that will never be time-and ensemble-averaged; this can
be useful for restraining hydrogen bonds."

I did this because an alpha-helix can only stay helical if all of its hydrogen
bonds are conserved during the simulation.


3) The other, and seemingly most appropriate, way would be to apply "dihedral
restraints" of type 1 to the backbone atoms of residues 10-15.

As I understand it, this can be done by adding a [ dihedral_restraints ]
section to the topology file and providing the correct angles and force
constants. Unfortunately, however, I could not find any tool to generate
this .itp file automatically, as is possible for "distance
restraints" and "constraints".


Are you aware of any tool or script that generates the dihedral restraints
for an alpha-helix and can accept an index file?

Which approach do you recommend for keeping an alpha-helix helical?


Please let me know your opinion.

Thanks in advance
Cheers
Mohsen

--
Rewards work better than punishment ...

Re: [gmx-users] Losing partly the available CPU time

2016-08-15 Thread Alexander Alexander
Hi Szilárd,

Thanks for your response; please find below a link to the requested full
.log files.

https://drive.google.com/file/d/0B_CbyhnbKqQDc2FaeWxITWxqdDg/view?usp=sharing

Thanks,
Cheers,
Alex

On Mon, Aug 15, 2016 at 2:52 PM, Szilárd Páll 
wrote:

> Hi,
>
> Please post full logs; what you cut out of the file will often omit
> information needed to diagnose your issues.
>
> At first sight it seems that you simply have an imbalanced system. I am not
> sure about the source of the imbalance, and without knowing more about
> your system/setup and how it is decomposed, what I can suggest is to
> try other decomposition schemes or simply less decomposition (use more
> threads or fewer cores).
>
> Additionally, you also have a pretty bad PP-PME load balance, but
> that is likely to improve once your PP performance improves.
>
> Cheers,
> --
> Szilárd
>
>
> On Sun, Aug 14, 2016 at 3:23 PM, Alexander Alexander
>  wrote:
> > Dear GROMACS users,
> >
> > My free energy calculation works well; however, I am losing around 56.5 %
> > of the available CPU time, as stated in my log file, which is really
> > considerable. The problem is due to the load imbalance and domain
> > decomposition, but I have no idea how to improve it. Below is the very end of
> > my log file, and I would appreciate it if you could help me avoid this.
> >
> >
> >D O M A I N   D E C O M P O S I T I O N   S T A T I S T I C S
> >
> >  av. #atoms communicated per step for force:  2 x 115357.4
> >  av. #atoms communicated per step for LINCS:  2 x 2389.1
> >
> >  Average load imbalance: 285.9 %
> >  Part of the total run time spent waiting due to load imbalance: 56.5 %
> >  Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 2 % Y 2 % Z 2 %
> >  Average PME mesh/force load: 0.384
> >  Part of the total run time spent waiting due to PP/PME imbalance: 14.5 %
> >
> > NOTE: 56.5 % of the available CPU time was lost due to load imbalance
> >   in the domain decomposition.
> >
> > NOTE: 14.5 % performance was lost because the PME ranks
> >   had less work to do than the PP ranks.
> >   You might want to decrease the number of PME ranks
> >   or decrease the cut-off and the grid spacing.
> >
> >
> >  R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G
> >
> > On 96 MPI ranks doing PP, and
> > on 32 MPI ranks doing PME
> >
> >  Computing:  Num   Num  CallWall time Giga-Cycles
> >  Ranks Threads  Count  (s) total sum%
> > -
> >  Domain decomp.961 175000 242.339  53508.472   0.5
> >  DD comm. load 961 174903   9.076   2003.907   0.0
> >  DD comm. bounds   961 174901  27.054   5973.491   0.1
> >  Send X to PME 961701  44.342   9790.652   0.1
> >  Neighbor search   961 175001 251.994  55640.264   0.6
> >  Comm. coord.  96168250001521.009 335838.747   3.4
> >  Force 9617017001.9901546039.264  15.5
> >  Wait + Comm. F961701   10761.2962376093.759  23.8
> >  PME mesh *321701   11796.344 868210.788   8.7
> >  PME wait for PP *  22135.7521629191.096  16.3
> >  Wait + Recv. PME F961701 393.117  86800.265   0.9
> >  NB X/F buffer ops.961   20650001 132.713  29302.991   0.3
> >  COM pull force961701 165.613  36567.368   0.4
> >  Write traj.   961   7037  55.020  12148.457   0.1
> >  Update961   1402 140.972  31126.607   0.3
> >  Constraints   961   1402   12871.2362841968.551  28.4
> >  Comm. energies961 350001 261.976  57844.219   0.6
> >  Rest  52.349  11558.715   0.1
> > -
> >  Total  33932.0969989607.639 100.0
> > -
> > (*) Note that with separate PME ranks, the walltime column actually sums to
> > twice the total reported, but the cycle count total and % are correct.
> > -
> >  Breakdown of PME mesh computation
> > -
> >  PME redist. X/F   321   21032334.608 171827.143   1.7
> >  PME spread/gather 321   28043640.870 267967.972   2.7
> >  PME 3D-FFT321   28041587.105 116810.882   1.2
> >  PME 3D-FFT Comm.

Re: [gmx-users] PDB file

2016-08-15 Thread Justin Lemkul



On 8/15/16 9:30 AM, f.namazi...@sci.ui.ac.ir wrote:

Hi Dear all;
What are these columns "1.00" and "0.00" in a PDB file?


Google knows all about PDB format.
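
For the record, in the fixed-column PDB format those two numbers are the occupancy and the B-factor (temperature factor), and the trailing letter is the element symbol; the field order in an ATOM record is essentially

ATOM  serial  name  resName  chainID  resSeq  x  y  z  occupancy  tempFactor  element

so in the lines below the occupancy is 1.00 and the B-factor is 0.00 for every atom.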

-Justin


for example:

ATOM 10  N   LEU A   2   4.595   6.365   3.756  1.00  0.00   N
ATOM 11  CA  LEU A   2   4.471   5.443   2.633  1.00  0.00   C
ATOM 12  C   LEU A   2   5.841   5.176   2.015  1.00  0.00   C
ATOM 13  O   LEU A   2   6.205   4.029   1.755  1.00  0.00   O
ATOM 14  CB  LEU A   2   3.526   6.037   1.578  1.00  0.00   C
ATOM 15  CG  LEU A   2   2.790   4.919   0.823  1.00  0.00   C
ATOM 16  CD1 LEU A   2   3.803   3.916   0.262  1.00  0.00   C
ATOM 17  CD2 LEU A   2   1.817   4.196   1.769  1.00  0.00   C
ATOM 18  H   LEU A   2   4.169   7.246   3.704  1.00  0.00   H
ATOM 19  HA  LEU A   2   4.063   4.514   2.992  1.00  0.00   H
ATOM 20  HB2 LEU A   2   2.804   6.675   2.065  1.00  0.00   H
ATOM 21  HB3 LEU A   2   4.099   6.623   0.873  1.00  0.00   H
ATOM 22  HG  LEU A   2   2.234   5.353   0.004  1.00  0.00   H
ATOM 23 HD11 LEU A   2   4.648   4.447  -0.148  1.00  0.00   H
ATOM 24 HD12 LEU A   2   3.334   3.331  -0.516  1.00  0.00   H
ATOM 25 HD13 LEU A   2   4.137   3.260   1.052  1.00  0.00   H
ATOM 26 HD21 LEU A   2   0.941   3.892   1.216  1.00  0.00   H
ATOM 27 HD22 LEU A   2   1.522   4.860   2.568  1.00  0.00   H
ATOM 28 HD23 LEU A   2   2.296   3.323   2.188  1.00  0.00   H
The Best. Farzaneh.




[gmx-users] (no subject)

2016-08-15 Thread J Hu
Hello,

I have recently tried to simulate a shear flow in a molecular dynamics
model by implementing the Lees-Edwards boundary conditions (periodic shear
flow) in GROMACS. However, the results I got are strange.

I added the effect of the shear, delta_U(y) = gamma*dy, where gamma = dU/dy,
by modifying the coordinate equation dx/dt = ... + gamma*dy in the
x-direction, so that the coordinate change along the x direction becomes
x(t+dt) = x(t) + dt*[v(t) + gamma*dy]. The calculation of the molecular
dynamics velocity v(t) from the pair potential was kept the same as usual,
from Newton's second law, without any change.

To compare with the analytical shear flow solution U(y), I then wrote a
routine that averages all atom velocities in time as well as along the
homogeneous x direction, and so obtained the velocity profile along the
y-direction (the velocity information was taken directly from the .gro
files). However, strangely, the resulting profile of the mean atom
velocity, which I expected to be flat around zero, turned out to have a
negative slope, -gamma, as if the model tried to compensate for the
positive shear flow I was trying to impose. This effect is very consistent
for a range of different shears, from moderate to very small gamma. So I
thought this is due to the thermostat or some other intrinsic GROMACS
feature which checks the atom velocities/deviations and holds the
velocities at the uniform state, hence creating an "anti-shear" effect.

Did someone encounter a similar problem in GROMACS and know how to disable
these stabilising features of GROMACS? Or does someone know a simpler
method to generate a shear flow in GROMACS?
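
In case it is useful as a comparison, GROMACS also has built-in ways to drive a flow without modifying the integrator; whether they match the Lees-Edwards setup you want is for you to judge, and the option names and values below are only a sketch to be checked against the manual of your version:

; continuous box deformation (simple shear via the x-component of the second box vector), nm/ps
deform           = 0 0 0 0.01 0 0

; or a cosine-shaped acceleration profile, often used for viscosity calculations, nm/ps^2
cos-acceleration = 0.05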


Thanks a lot.

Best wishes

Jin


[gmx-users] PDB file

2016-08-15 Thread f . namazifar

Hi Dear all;
What are these columns "1.00" and "0.00" in a PDB file?
for example:

ATOM 10  N   LEU A   2   4.595   6.365   3.756  1.00  0.00   N
ATOM 11  CA  LEU A   2   4.471   5.443   2.633  1.00  0.00   C
ATOM 12  C   LEU A   2   5.841   5.176   2.015  1.00  0.00   C
ATOM 13  O   LEU A   2   6.205   4.029   1.755  1.00  0.00   O
ATOM 14  CB  LEU A   2   3.526   6.037   1.578  1.00  0.00   C
ATOM 15  CG  LEU A   2   2.790   4.919   0.823  1.00  0.00   C
ATOM 16  CD1 LEU A   2   3.803   3.916   0.262  1.00  0.00   C
ATOM 17  CD2 LEU A   2   1.817   4.196   1.769  1.00  0.00   C
ATOM 18  H   LEU A   2   4.169   7.246   3.704  1.00  0.00   H
ATOM 19  HA  LEU A   2   4.063   4.514   2.992  1.00  0.00   H
ATOM 20  HB2 LEU A   2   2.804   6.675   2.065  1.00  0.00   H
ATOM 21  HB3 LEU A   2   4.099   6.623   0.873  1.00  0.00   H
ATOM 22  HG  LEU A   2   2.234   5.353   0.004  1.00  0.00   H
ATOM 23 HD11 LEU A   2   4.648   4.447  -0.148  1.00  0.00   H
ATOM 24 HD12 LEU A   2   3.334   3.331  -0.516  1.00  0.00   H
ATOM 25 HD13 LEU A   2   4.137   3.260   1.052  1.00  0.00   H
ATOM 26 HD21 LEU A   2   0.941   3.892   1.216  1.00  0.00   H
ATOM 27 HD22 LEU A   2   1.522   4.860   2.568  1.00  0.00   H
ATOM 28 HD23 LEU A   2   2.296   3.323   2.188  1.00  0.00   H

The Best. Farzaneh.



Re: [gmx-users] Losing partly the available CPU time

2016-08-15 Thread Szilárd Páll
Hi,

Please post full logs; what you cut out of the file will often omit
information needed to diagnose your issues.

At first sight it seems that you simply have an imbalanced system. I am not
sure about the source of the imbalance, and without knowing more about
your system/setup and how it is decomposed, what I can suggest is to
try other decomposition schemes or simply less decomposition (use more
threads or fewer cores).

Additionally, you also have a pretty bad PP-PME load balance, but
that is likely to improve once your PP performance improves.
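
Concretely, that could look something like the following (numbers purely illustrative; gmx tune_pme can help scan the -npme setting):

$ mpirun -np 128 gmx_mpi mdrun -npme 16 -dd 4 4 7 -deffnm md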

Cheers,
--
Szilárd


On Sun, Aug 14, 2016 at 3:23 PM, Alexander Alexander
 wrote:
> Dear GROMACS users,
>
> My free energy calculation works well; however, I am losing around 56.5 %
> of the available CPU time, as stated in my log file, which is really
> considerable. The problem is due to the load imbalance and domain
> decomposition, but I have no idea how to improve it. Below is the very end of
> my log file, and I would appreciate it if you could help me avoid this.
>
>
>D O M A I N   D E C O M P O S I T I O N   S T A T I S T I C S
>
>  av. #atoms communicated per step for force:  2 x 115357.4
>  av. #atoms communicated per step for LINCS:  2 x 2389.1
>
>  Average load imbalance: 285.9 %
>  Part of the total run time spent waiting due to load imbalance: 56.5 %
>  Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 2
> % Y 2 % Z 2 %
>  Average PME mesh/force load: 0.384
>  Part of the total run time spent waiting due to PP/PME imbalance: 14.5 %
>
> NOTE: 56.5 % of the available CPU time was lost due to load imbalance
>   in the domain decomposition.
>
> NOTE: 14.5 % performance was lost because the PME ranks
>   had less work to do than the PP ranks.
>   You might want to decrease the number of PME ranks
>   or decrease the cut-off and the grid spacing.
>
>
>  R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G
>
> On 96 MPI ranks doing PP, and
> on 32 MPI ranks doing PME
>
>  Computing:  Num   Num  CallWall time Giga-Cycles
>  Ranks Threads  Count  (s) total sum%
> -
>  Domain decomp.961 175000 242.339  53508.472   0.5
>  DD comm. load 961 174903   9.076   2003.907   0.0
>  DD comm. bounds   961 174901  27.054   5973.491   0.1
>  Send X to PME 961701  44.342   9790.652   0.1
>  Neighbor search   961 175001 251.994  55640.264   0.6
>  Comm. coord.  96168250001521.009 335838.747   3.4
>  Force 9617017001.9901546039.264  15.5
>  Wait + Comm. F961701   10761.2962376093.759  23.8
>  PME mesh *321701   11796.344 868210.788   8.7
>  PME wait for PP *  22135.7521629191.096  16.3
>  Wait + Recv. PME F961701 393.117  86800.265   0.9
>  NB X/F buffer ops.961   20650001 132.713  29302.991   0.3
>  COM pull force961701 165.613  36567.368   0.4
>  Write traj.   961   7037  55.020  12148.457   0.1
>  Update961   1402 140.972  31126.607   0.3
>  Constraints   961   1402   12871.2362841968.551  28.4
>  Comm. energies961 350001 261.976  57844.219   0.6
>  Rest  52.349  11558.715   0.1
> -
>  Total  33932.0969989607.639 100.0
> -
> (*) Note that with separate PME ranks, the walltime column actually sums to
> twice the total reported, but the cycle count total and % are correct.
> -
>  Breakdown of PME mesh computation
> -
>  PME redist. X/F   321   21032334.608 171827.143   1.7
>  PME spread/gather 321   28043640.870 267967.972   2.7
>  PME 3D-FFT321   28041587.105 116810.882   1.2
>  PME 3D-FFT Comm.  321   56084066.097 299264.666   3.0
>  PME solve Elec321   1402 148.284  10913.728   0.1
> -
>
>Core t (s)   Wall t (s)(%)
>Time:  4341204.79033932.09612793.8
>  9h25:32
>  (ns/day)(hour/ns)
> Performance:   35.6480.673
> Finished mdrun on rank 0 Sat Aug 13 23:45:45 2016
>
> Thanks,
> 

Re: [gmx-users] genion

2016-08-15 Thread Justin Lemkul



On 8/15/16 4:11 AM, f.namazi...@sci.ui.ac.ir wrote:

Hi everybody,
Why should we neutralise the net charge of the simulation system?


To properly answer your question, we must approach it with a bit more nuance.

One typically neutralizes the net charge of a condensed-phase system for two 
reasons, one physical and one algorithmic.  First, in e.g. aqueous solution, 
positives don't exist without negatives.  That's just fundamental physics.  If a 
species ionizes in solution, its corresponding counterion is generated.  Second, 
the de facto standard for computing electrostatic interactions in an MD 
simulation using PBC is PME, which requires a net-neutral simulation system. 
One does *not* have to add counterions to achieve net neutrality, as the PME 
algorithm itself contributes a uniform background plasma to neutralize the
charge.  The problem is that, in heterogeneous systems, this uniform background 
charge leads to artifactual behavior.  Almost all biomolecular simulations are
heterogeneous, so it is unwise to rely on PME itself to account for this effect. 
 So the very simple solution is to add counterions.  Generally, one should add 
some additional ions to mimic in vitro or in vivo conditions (e.g. for 
biomolecules) but the exact approach depends on what is being studied.


There are other systems, for instance an ionized protein in vacuo, for which you 
would not add counterions (there's nowhere to add them, really).  But in these 
cases, you wouldn't be using PME, anyway.


So the real answer depends on what you are doing.  It is commonly accepted that 
with PME, the easiest thing to do is add a couple of ions to balance out the charge.
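
For completeness, the usual genion workflow looks roughly like this (file names, ion names and the 0.15 M salt concentration are illustrative and force-field dependent):

$ gmx grompp -f ions.mdp -c solvated.gro -p topol.top -o ions.tpr
$ gmx genion -s ions.tpr -o solvated_ions.gro -p topol.top -pname NA -nname CL -neutral -conc 0.15

The -neutral flag adds whatever extra counterions are needed to bring the net charge to zero on top of the requested salt concentration.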


-Justin



[gmx-users] TPI and chemical potential

2016-08-15 Thread Gmx QA
Dear list

I am trying to teach myself how the test particle insertion method for excess
chemical potential calculations works. To this end, I created a small system
of about 900 water molecules (TIP3P) and simulated it for 1 ns.

I then set up files for inserting an extra water molecule to calculate mu,
the chemical potential.

The end of my tpi_start.gro looks like this:

  884SOL OW 2650   2.466   0.788   2.747 -0.5599 -0.0387  0.0570
  884SOLHW1 2651   2.529   0.835   2.802  0.0024 -0.9901  0.2261
  884SOLHW2 2652   2.506   0.703   2.732 -1.4472 -0.3598 -0.5022
  885SOL OW 2653   0.000   0.000   0.000  0.  0.  0.
  885SOLHW1 2654   0.000   0.000   0.000  0.  0.  0.
  885SOLHW2 2655   0.000   0.000   0.000  0.  0.  0.
   3.01188   3.01188   3.01188

So I think this last water molecule is the one to be inserted. The topology
was also updated accordingly.
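
For reference, the essential .mdp settings for the TPI run itself are roughly the following (values illustrative; check the options against the manual of your version):

integrator = tpi
nsteps     = 10000    ; with tpi, this is the number of test insertions per frame
rtpi       = 0.05     ; test-particle insertion radius, nm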

I then made a rerun like this:

$ gmx mdrun -v -deffnm tpi -rerun md.xtc

And the output is a series of lines like this:

Reading frame 180 time  900.000   mu  8.924e+00   7.240e+00
mu  7.671e+00   7.242e+00
mu  6.987e+00   7.241e+00
mu  7.010e+00   7.239e+00
mu  7.691e+00   7.241e+00
mu  7.439e+00   7.242e+00
mu  6.757e+00   7.240e+00
mu  8.114e+00   7.243e+00
mu  7.173e+00   7.243e+00
mu  8.768e+00   7.249e+00
Reading frame 190 time  950.000   mu  7.799e+00   7.252e+00
mu  9.284e+00   7.259e+00
mu  8.103e+00   7.262e+00
mu  7.835e+00   7.265e+00
mu  7.321e+00   7.265e+00
mu  7.576e+00   7.267e+00
mu  9.348e+00   7.274e+00
mu  7.021e+00   7.272e+00
mu  7.162e+00   7.272e+00
mu  7.695e+00   7.274e+00
Reading frame 200 time 1000.000   mu  7.104e+00   7.273e+00

Now, is <mu> here the average computed chemical potential? What is the TPI
energy distribution being reported in the tpi.xvg file?
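
For reference, what TPI estimates is the Widom excess chemical potential; in an NVT ensemble

  mu_ex = -kB*T * ln < exp( -DeltaU_insertion / (kB*T) ) >

with the average taken over frames and insertion attempts, while in NPT the average is volume-weighted,

  mu_ex = -kB*T * ln ( < V exp( -DeltaU_insertion / (kB*T) ) > / < V > ).

The values printed by GROMACS should be in its usual energy units, kJ/mol.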

I tried to google for the chemical potential of water, and according to e.g.
[1] (page 13) it should be around -23.5 kJ/mol. I would really appreciate it
if someone could comment on this, as I need to understand it before moving
on to more relevant (for me) systems.

Thanks
/PK





[1] Chemical potential of liquids and mixtures via Adaptive Resolution ...


Re: [gmx-users] genion

2016-08-15 Thread Nikhil Maroli
It depends on the simulation, but in general most biological systems exist at
physiological pH, which is around 7.4. So, to mimic the actual biological
environment, we neutralise the simulated system.


Re: [gmx-users] genion

2016-08-15 Thread Alexander Alexander
Hi Farzaneh,

If I am not entirely wrong:
otherwise, even a tiny net charge in a single box would become huge in the
calculation because of PBC (periodic boundary conditions), and a Coulomb
explosion would happen.

Cheers,
Alex

On Mon, Aug 15, 2016 at 10:11 AM,  wrote:

> Hi everybody,
> Why should we neutralise the net charge of the simulation system?
> Regards.
> Farzaneh


[gmx-users] genion

2016-08-15 Thread f . namazifar

Hi everybody,
Why should we neutralise the net charge of the simulation system?
Regards.
Farzaneh


[gmx-users] _replay_Energy minimization, of peptide-ligand, tearing peptide apart

2016-08-15 Thread Nikhil Maroli
http://www.gromacs.org/Documentation/Terminology/Blowing_Up#Diagnosing_an_Unstable_System


Also, check the archive.

-- 
Regards,
Nikhil Maroli