Re: [gmx-users] Affinity setting for 1/16 threads failed. Version 5.0.2

2015-11-18 Thread Mark Abraham
Hi,

These are good issues to take up with the support staff of Gordon. mdrun
tries to be a good citizen and by default stays out of the way if some
other part of the software stack is already managing process affinity. As
you can see, getting affinity right is crucial for good performance. But
mdrun -pin on works on every system we know about.
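
For example, a full-node run on a 16-core node would look something like this
(a sketch; the exact MPI launcher syntax depends on the cluster, and -ntomp 1
assumes one MPI rank per core):

  mpirun -np 16 mdrun_mpi -ntomp 1 -pin on -deffnm md

Since the Gordon build uses MVAPICH2, note also that MVAPICH2 applies its own
process affinity by default (controlled by the MV2_ENABLE_AFFINITY environment
variable), which is the kind of external affinity management referred to above.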

Mark

On Wed, Nov 18, 2015 at 3:28 PM Siva Dasetty  wrote:

> Dear all,
>
> I am running simulations using version 5.0.2 (the default on Gordon) and I
> am seeing a performance loss from 180 ns/day to 8 ns/day compared to the
> same simulations that I previously ran on a different cluster.
>
> In both clusters I am using a single node and 16 CPUs (no GPUs), and the
> following is the command line I used:
>
> mdrun_mpi -s  -v -deffnm  -nb cpu -cpi 
> -append -pin on
>
>
> Following is reported in the log file:
>
>
> WARNING: Affinity setting for 1/16 threads failed.
>
>  This can cause performance degradation! If you think your setting
> are correct, contact the GROMACS developers.
>
>
> I even tried running a simulation without the flag -pin on and there is no
> change in the performance.
>
>
> Are there any other options that I can try to recover the performance?
>
>
>
> Additional Information:
>
>
> The other difference I could see is in the compilers:
>
>
> In Gordon (8 ns/day):
>
>
> C compiler:         /opt/mvapich2/intel/ib/bin/mpicc Intel 13.0.0.20121010
>
> C compiler flags:   -mavx -std=gnu99 -w3 -wd111 -wd177 -wd181 -wd193 -wd271
> -wd304 -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418 -wd1419
> -wd1572 -wd1599 -wd2259 -wd2415 -wd2547 -wd2557 -wd3280 -wd3346 -wd11074
> -wd11076 -O3 -DNDEBUG -ip -funroll-all-loops -alias-const -ansi-alias
>
> C++ compiler:       /opt/mvapich2/intel/ib/bin/mpicxx Intel 13.0.0.20121010
>
> C++ compiler flags: -mavx -w3 -wd111 -wd177 -wd181 -wd193 -wd271 -wd304
> -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418 -wd1419 -wd1572
> -wd1599 -wd2259 -wd2415 -wd2547 -wd2557 -wd3280 -wd3346 -wd11074 -wd11076
> -wd1782 -wd2282 -O3 -DNDEBUG -ip -funroll-all-loops -alias-const -ansi-alias
>
>
> In our cluster (180 ns/day):
>
>
> C compiler:         /software/openmpi/bin/mpicc GNU 4.8.1
>
> C compiler flags:   -msse4.1 -Wno-maybe-uninitialized -Wextra
> -Wno-missing-field-initializers -Wno-sign-compare -Wpointer-arith -Wall
> -Wno-unused -Wunused-value -Wunused-parameter -O3 -DNDEBUG
> -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast
> -Wno-array-bounds
>
> C++ compiler:       /software/openmpi/bin/mpicxx GNU 4.8.1
>
> C++ compiler flags: -msse4.1 -Wextra -Wno-missing-field-initializers
> -Wpointer-arith -Wall -Wno-unused-function -O3 -DNDEBUG -fomit-frame-pointer
> -funroll-all-loops -fexcess-precision=fast -Wno-array-bounds
>
>
>
>
> Thanks in advance for your help,
>
> --
> Siva Dasetty


Re: [gmx-users] Affinity setting for 1/16 threads failed. Version 5.0.2

2015-11-18 Thread Siva Dasetty
Gordon:

Command line:

  mdrun_mpi --version


Gromacs version:    VERSION 5.0.2
Precision:          single
Memory model:       64 bit
MPI library:        MPI
OpenMP support:     enabled
GPU support:        disabled
invsqrt routine:    gmx_software_invsqrt(x)
SIMD instructions:  AVX_256
FFT library:        fftw-3.3.3-sse2
RDTSCP usage:       enabled
C++11 compilation:  disabled
TNG support:        enabled
Tracing support:    disabled
Built on:           Sun Oct 19 14:11:10 PDT 2014
Built by:           r...@gcn-20-88.sdsc.edu [CMAKE]
Build OS/arch:      Linux 2.6.32-431.29.2.el6.x86_64 x86_64
Build CPU vendor:   GenuineIntel
Build CPU brand:    Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Build CPU family:   6   Model: 45   Stepping: 6
Build CPU features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr
                    nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp
                    sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
C compiler:         /opt/mvapich2/intel/ib/bin/mpicc Intel 13.0.0.20121010
C compiler flags:   -mavx -std=gnu99 -w3 -wd111 -wd177 -wd181 -wd193 -wd271
                    -wd304 -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981
                    -wd1418 -wd1419 -wd1572 -wd1599 -wd2259 -wd2415 -wd2547
                    -wd2557 -wd3280 -wd3346 -wd11074 -wd11076 -O3 -DNDEBUG
                    -ip -funroll-all-loops -alias-const -ansi-alias
C++ compiler:       /opt/mvapich2/intel/ib/bin/mpicxx Intel 13.0.0.20121010
C++ compiler flags: -mavx -w3 -wd111 -wd177 -wd181 -wd193 -wd271 -wd304
                    -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418
                    -wd1419 -wd1572 -wd1599 -wd2259 -wd2415 -wd2547 -wd2557
                    -wd3280 -wd3346 -wd11074 -wd11076 -wd1782 -wd2282 -O3
                    -DNDEBUG -ip -funroll-all-loops -alias-const -ansi-alias
Boost version:      1.55.0 (internal)



Our Cluster:

Command line:

  mdrun --version


Gromacs version:    VERSION 5.0.2
Precision:          single
Memory model:       64 bit
MPI library:        MPI
OpenMP support:     enabled
GPU support:        disabled
invsqrt routine:    gmx_software_invsqrt(x)
SIMD instructions:  SSE4.1
FFT library:        fftw-3.3.3-sse2
RDTSCP usage:       enabled
C++11 compilation:  enabled
TNG support:        enabled
Tracing support:    disabled
Built on:           Sun Mar  8 19:06:35 EDT 2015
Built by:           sdasett@user001 [CMAKE]
Build OS/arch:      Linux 2.6.32-358.23.2.el6.x86_64 x86_64
Build CPU vendor:   GenuineIntel
Build CPU brand:    Intel(R) Xeon(R) CPU X7542 @ 2.67GHz
Build CPU family:   6   Model: 46   Stepping: 6
Build CPU features: apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc
                    pdcm popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 x2apic
C compiler:         /software/openmpi/1.8.1_gcc/bin/mpicc GNU 4.8.1
C compiler flags:   -msse4.1 -Wno-maybe-uninitialized -Wextra
                    -Wno-missing-field-initializers -Wno-sign-compare
                    -Wpointer-arith -Wall -Wno-unused -Wunused-value
                    -Wunused-parameter -O3 -DNDEBUG -fomit-frame-pointer
                    -funroll-all-loops -fexcess-precision=fast -Wno-array-bounds
C++ compiler:       /software/openmpi/1.8.1_gcc/bin/mpicxx GNU 4.8.1
C++ compiler flags: -msse4.1 -std=c++0x -Wextra -Wno-missing-field-initializers
                    -Wpointer-arith -Wall -Wno-unused-function -O3 -DNDEBUG
                    -fomit-frame-pointer -funroll-all-loops
                    -fexcess-precision=fast -Wno-array-bounds
Boost version:      1.55.0 (internal)



Thanks,

On Wed, Nov 18, 2015 at 11:05 AM, Szilárd Páll 
wrote:

> I don't see much similarity - except the type of error - between your issue
> and the one reported on redmine 1184. In that case an Intel Harpertown CPU
> (I assume Gordon is much newer) with thread-MPI was used.
>
> --
> Szilárd

[gmx-users] Umbrella sampling tutorial - V-rescale or Berendsen thermostat

2015-11-18 Thread Martin Nors Pedersen

Hello everyone

I am following the Justin Lemkul's tutorial for Umbrella sampling: 
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/umbrella/index.html 



I am at the step where I want to perform the NPT equilibration. Using the
input file provided in the tutorial, I use grompp to prepare the .tpr
file. I get a note, however, which states that the thermostat selected
in the provided npt.mdp is not optimal:


" The Berendsen thermostat does not generate the correct kinetic energy
 distribution. You might want to consider using the V-rescale 
thermostat. "


I just want to ask if there is some specific reason for choosing the
Berendsen thermostat over the V-rescale one in the NPT equilibration?
According to the manual (
http://manual.gromacs.org/online/mdp_opt.html#tc ), they are similar
except for a stochastic term, which makes the V-rescale thermostat
better. But I do not know whether this has any (significant) influence on
the NPT equilibration.


Cheers!
Martin




Re: [gmx-users] Affinity setting for 1/16 threads failed. Version 5.0.2

2015-11-18 Thread Szilárd Páll
I don't see much similarity - except the type of error - between your issue
and the one reported on redmine 1184. In that case an Intel Harpertown CPU
(I assume Gordon is much newer) with thread-MPI was used.

--
Szilárd

On Wed, Nov 18, 2015 at 4:45 PM, Siva Dasetty  wrote:

> Thank you Mark. Yes, I have already taken this issue up with Gordon, and
> while I am awaiting their response I am just wondering if this issue has
> anything to do with bug #1184: http://redmine.gromacs.org/issues/1184.
>
>
> Thank you again for your quick response.
>

Re: [gmx-users] cannot open traj.xtc

2015-11-18 Thread Justin Lemkul



On 11/18/15 2:45 AM, surbhi mahajan wrote:

Dear users,
I am an M.Sc student and I have been doing my simulations on lipids. I am
stuck at the analysis part, in which I want to calculate the deuterium order
parameter. When I give the command:
g_order -s md_0_1.tpr -n sn1.ndx -d z -od deuter_sn1.xvg
I get the error "cannot open file traj.xtc". Please help me in solving this
error.


You didn't specify a trajectory file name (-f argument), so g_order looks for the 
default name (traj.xtc), which doesn't exist.  Specify a proper input file name.
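
For example, if the trajectory written by that run is named md_0_1.xtc (adjust
this to whatever your actual trajectory file is called), the command would be:

  g_order -s md_0_1.tpr -f md_0_1.xtc -n sn1.ndx -d z -od deuter_sn1.xvg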


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] Computing of reaction coordinate by g_wham

2015-11-18 Thread Justin Lemkul



On 11/18/15 4:08 AM, Damiano Buratto wrote:



  Dear Gromacs users,
please, is there someone who can help me understand how g_wham computes the
reaction coordinate of the pulled group?
I would like to compute the PMF for the passage of a molecule through a channel
(with a spatial restraint) using the umbrella sampling (US) technique. I built
the windows for the US by looking at the centre of mass (COM) of the pulled
molecule, so that each window is 1 Å away from the previous one. When the US was
finished, I ran g_wham (with the reference group on the channel) and I saw that
the histograms of the windows are not centred where I supposed they should be
(the centres are not 1 Å apart, and the values differ from the COM of the pulled
group minus the COM of the reference group). Moreover, if I run g_wham in
verbose mode I can see that the position associated with each US window is a bit
different from the value I expected (but in agreement with the centres of the
histograms).


The reaction coordinate and window positions are read from the .tpr files you 
provide to g_wham.  If these aren't what you expected, you likely set something 
up wrong.  grompp provides lots of useful information when you're building the 
.tpr files, so start by checking to make sure that information matches your 
expectations/needs.
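
One quick way to verify what actually ended up in each window's .tpr (a sketch; 
replace umbrella0.tpr with one of your own window files, and use gmxdump instead 
of gmx dump on pre-5.0 versions) is to dump it and look at the pull section:

  gmx dump -s umbrella0.tpr | grep -A 6 pull

The pull geometry, groups, and init values printed there are what g_wham will use.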


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] gmx mdrun std::bad_alloc whilst using PLUMED

2015-11-18 Thread Nash, Anthony
Thanks Mark,

I threw an email across to the plumed group this morning. I was surprised
to get a reply almost immediately. It *could* be the memory allocation
required to define the grid spacing in PLUMED.
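A rough estimate from the METAD settings quoted further down points the same
way: with GRID_SPACING=0.01 the two torsions span 2*pi (about 628 bins each)
and the three distances span 0-2.5 (250 bins each), so the five-CV bias grid
needs roughly 628 x 628 x 250 x 250 x 250, i.e. around 6e12 points, which is
many terabytes even at a few bytes per point, whereas the original three-CV
grid was only about 628 x 628 x 250, i.e. around 1e8 points.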

Thanks
Anthony

Dr Anthony Nash
Department of Chemistry
University College London





On 17/11/2015 22:24, "gromacs.org_gmx-users-boun...@maillist.sys.kth.se on
behalf of Mark Abraham"  wrote:

>Hi,
>
>GROMACS is apparently the first to notice that memory is a problem, but you
>should also be directing questions about memory use with different kinds of
>CVs to the PLUMED people. mdrun knows nothing at all about the PLUMED CVs.
>The most likely explanation is that they have some data structure that
>works OK on small scale problems, but which doesn't do well as the number
>of atoms, CVs, CV complexity, and/or ranks increases.
>
>Mark
>
>On Tue, Nov 17, 2015 at 11:05 PM Nash, Anthony  wrote:
>
>> Hi all,
>>
>> I am using PLUMED 2.2 and gromacs 5.0.4. For a while I had been testing
>> the viability of three collective variables for plumed, two dihedral
>> angles and one centre of mass distance. After observing my dimer rotate
>> about each other I decided it needed an intrahelical distance between two
>> of the dihedral atoms (A,B,C,D), per helix, to sample the CV space whilst
>> maintaining the 'regular' alpha-helical structure (the dihedral sampling
>> was coming from the protein uncoiling rather than rotating). Note: it is
>> likely that I will change these distances to the built-in alpha helical
>> CV.
>>
>> The moment I increased the number of CVs from three to five, gromacs
>> throws out a memory error. When I remove the 5th CV it still crashes. When
>> I remove the 4th it stops crashing.
>>
>> ---
>> CLUSTER OUTPUT FILE
>> ---
>>
>>
>> starting mdrun 'NEU_MUT in POPC in water'
>> 5000 steps, 10.0 ps.
>>
>> ---
>> Program: gmx mdrun, VERSION 5.0.4
>>
>> Memory allocation failed:
>> std::bad_alloc
>>
>> For more information and tips for troubleshooting, please check the
>>GROMACS
>> website at http://www.gromacs.org/Documentation/Errors
>> ---
>> Halting parallel program mdrun_mpi_d on CPU 0 out of 12
>>
>>
>>
>>
>> It halts all 12 processes and the job dies. I increased the memory so I am
>> using 43.2 GB of RAM distributed over 12 processes. The job still fails
>> (but then, allocation of memory is very different to not having any memory
>> at all).
>>
>> The contents of the gromacs.log file only report the initialisation of
>> gromacs followed by the initialisation of plumed. After this I would have
>> expected the regular MD stepping output. I've included the plumed
>> initialisation below. I would appreciate any help. I suspect the problem
>> lies with the 4th and 5th CV, although systematically removing them and
>> playing around with the parameters hasn't yielded anything yet. Please
>> ignore what parameter values I have set. Besides the atom numbers,
>> everything else is a result of me trying to work out which ranges
>> of values are causing PLUMED to exit and gromacs to crash. PLUMED input
>> file below:
>>
>>
>> ---
>> PLUMED INPUTFILE
>> ---
>>
>> phi: TORSION ATOMS=214,230,938,922
>> psi: TORSION ATOMS=785,801,367,351
>>
>> c1: COM ATOMS=1-571
>> c2: COM ATOMS=572-1142
>> COMdist: DISTANCE ATOMS=c1,c2
>>
>> d1: DISTANCE ATOMS=214,367
>> d2: DISTANCE ATOMS=938,785
>>
>> UPPER_WALLS ARG=COMdist AT=2.5 KAPPA=1000 EXP=2.0 EPS=1.0 OFFSET=0
>> LABEL=COMuwall
>> LOWER_WALLS ARG=COMdist AT=1.38 KAPPA=1000 EXP=2.0 EPS=1.0 OFFSET=0
>> LABEL=COMlwall
>>
>> UPPER_WALLS ARG=d1 AT=1.260 KAPPA=1000 EXP=2.0 EPS=1.0 OFFSET=0
>> LABEL=d1uwall
>> LOWER_WALLS ARG=d1 AT=1.228 KAPPA=1000 EXP=2.0 EPS=1.0 OFFSET=0
>> LABEL=d1lwall
>>
>> UPPER_WALLS ARG=d2 AT=1.228 KAPPA=1000 EXP=2.0 EPS=1.0 OFFSET=0
>> LABEL=d2uwall
>> LOWER_WALLS ARG=d2 AT=1.196 KAPPA=1000 EXP=2.0 EPS=1.0 OFFSET=0
>> LABEL=d2lwall
>>
>> METAD ...
>> LABEL=metad
>> ARG=phi,psi,COMdist,d1,d2
>> PACE=1
>> HEIGHT=0.2
>> SIGMA=0.06,0.06,0.06,0.06,0.06
>> FILE=HILLS_neu_mut_meta_A
>> BIASFACTOR=10.0
>> TEMP=310.0
>> GRID_MIN=-pi,-pi,0,0,0
>> GRID_MAX=pi,pi,2.5,2.5,2.5
>> GRID_SPACING=0.01,0.01,0.01,0.01,0.01
>> ... METAD
>>
>>
>> PRINT STRIDE=100 ARG=phi,psi,COMdist,COMlwall.bias,COMuwall.bias,d1,d1lwall.bias,d1uwall.bias,d2,d2lwall.bias,d2uwall.bias,metad.bias FILE=COLVAR_neu_mut_meta_A
>>
>>
>>
>> ---
>> GROMACS LOGFILE
>> ---
>>
>> Center of mass motion removal mode is Linear
>> We have the following groups for center of mass motion removal:
>>   0:  rest
>> There are: 53575 Atoms
>> Charge group distribution at step 0:  4474 4439 4268 4913 4471 

[gmx-users] Fwd: Select waters in a specific pore with gmx select

2015-11-18 Thread Mahboobe Sadr
Dear all

How can I select water molecules between a specific residue at the top and
another residue at the bottom of a specific pore in the AQP5 tetramer?

I used gmx select, but I couldn't find the proper expression to select them.
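
For reference, one form such a selection can take (a sketch only; the residue
numbers and the 0.8 nm cutoff are placeholders that have to come from your own
structure) is to keep whole waters whose oxygen is close to both bounding
residues of that pore:

  gmx select -s topol.tpr -f traj.xtc -select 'resname SOL and same residue as (name OW and within 0.8 of resid 76 and within 0.8 of resid 188)'

If the pore axis lies along z, a simple coordinate range such as
'name OW and z > 4.0 and z < 6.5' (values again placeholders) may also work.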


[gmx-users] surfactant.itp file

2015-11-18 Thread mahdiyeh poorsargol
Hello
I'm a beginner in Gromacs and I want to work with a surfactant (for
example the SDS surfactant). I converted the surfactant.pdb file to a
surfactant.itp file using TOPOLGEN, but for the S atom the atom type is not
defined!
Please help me. Thank you.

[ atoms ]
;  nr   type       resnr  residue  atom  cgnr  charge   mass      typeB  chargeB  massB
.
.
   30   opls_136   1      LIG      C     10    -0.120   12.01100
   31   opls_140   1      LIG      H     10     0.060    1.00800
   32   opls_140   1      LIG      H     10     0.060    1.00800
   33   opls_136   1      LIG      C     11    -0.120   12.01100
   34   opls_140   1      LIG      H     11     0.060    1.00800
   35   opls_140   1      LIG      H     11     0.060    1.00800
   36   opls_136   1      LIG      C     12    -0.120   12.01100
   37   opls_140   1      LIG      H     12     0.060    1.00800
   38   opls_140   1      LIG      H     12     0.060    1.00800
   39   1   0  S   12  32  0.000   0.0
   40   opls_236   1      LIG      O     12    -0.500   15.99940
   41   opls_154   1      LIG      O     12    -0.683   15.99940
   42   opls_155   1      LIG      H     12     0.418    1.00800




[gmx-users] Running multiple jobs in core?

2015-11-18 Thread RJ
Hi there,


I have 24 threads in my PC with one GTX 980Ti GPU. I would like to run two
simulation jobs by assigning 12 threads to each job. I have tried using "-ntomp
12 -ntmpi 1" to specify the threads and the GPU. However, I couldn't get a speed
similar to running them alone with 12 threads. I have even used "-pin on" but
haven't seen any significant change in the speed.


It would be much appreciated if you could provide some hints on this. Thanks a lot.
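
For reference, the usual way to keep two runs like this from fighting over the
same cores is to give each one its own half of the node explicitly (a sketch;
the offsets assume cores 0-11 for the first job and 12-23 for the second, and
-gpu_id 0 assumes both runs share the single GPU):

  mdrun -deffnm job1 -ntmpi 1 -ntomp 12 -pin on -pinoffset 0  -pinstride 1 -gpu_id 0 &
  mdrun -deffnm job2 -ntmpi 1 -ntomp 12 -pin on -pinoffset 12 -pinstride 1 -gpu_id 0 &

Without explicit offsets both jobs may pin themselves to the same cores even
with -pin on, which would show up as exactly this kind of slowdown.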


[gmx-users] surfactant.itp file

2015-11-18 Thread mahdiyeh poorsargol
I have an OPLS-AA SDS, but I don't know how to include it in TOPOLGEN!


[gmx-users] LINCS warning

2015-11-18 Thread ??????
Hi All:

I have some strange problems with LINCS warnings in my simulations.
mdrun gave a lot of LINCS warnings and the simulation exploded after 5 ns of
simulation; below is one of these warnings:
-
time 7397.74 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.008123, max 0.064785 (between atoms 1756 and 1755)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
   1761   1759   30.70.1269   0.1267  0.1250
-
I am a little confused, because this happened after several ns of simulation, so 
I would suppose this is not an energy minimization problem. I searched the 
mailing list and found that someone proposed that this may be because of the 
pressure coupling.

below is the temperature and pressure coupling I am using:
--
; Berendsen temperature coupling is on in two groups
Tcoupl          =  V-rescale ; Berendsen
tc-grps         =  Lipids Solvent
tau_t           =  0.1    0.1
ref_t           =  330.0  330.0

Pcoupl  =  Berendsen
Pcoupltype  =  semiisotropic   
tau_p   =  1.0 1.0 
compressibility =  4.6e-5 4.6e-5
ref_p   =  1.0 1.0
---

I am using gromacs 4.0.7 with reaction field for the long-range electrostatic 
interactions, for some reasons. I am wondering if anyone could give me some 
suggestions on how to solve this problem.
Any help will be highly appreciated. 

Cheers, 
RXG

