[gmx-users] tpo is removed in simulation

2016-03-15 Thread Mehreen Jan

Respected sir,
GROMACS version: 5.0.7
Force field: 43A1p


My TPO (phosphothreonine) residue is removed in the simulation, while SEP stays 
attached to its residue. Why does this happen? Kindly provide me any guidance.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Distance between two specific residues

2016-03-15 Thread Abid Channa
Dear Gromacs users,
I am trying to calculate the distance between two specific residues in the active site 
of my protein during MD. I have made a separate index file containing the two groups, 
and then I run this command: 
" gmx distance -f md_0_1.xtc -s md_0_1.tpr -oav distance.xvg -select -n 
index.ndx -tu ns ". When I select my residue numbers for the -select option in the 
above command, an error appears saying that pairs are not found in my selection of 
residues (the first residue has 11 atoms, the second 15 atoms). Kindly guide me on 
how to calculate the distance between two specific residues in MD.
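One selection that typically resolves this error (the group names below are placeholders for whatever index.ndx actually contains): gmx distance pairs up selected positions two at a time, so taking the centre of mass of each group yields exactly one pair, for example

    gmx distance -f md_0_1.xtc -s md_0_1.tpr -n index.ndx -oav distance.xvg -tu ns \
        -select 'com of group "residue_A" plus com of group "residue_B"'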
Thanks,
Regards, 
Abid Ali Channa,
Junior Research Fellow,
Lab No. P-133, Computational Chemistry Unit,
Dr. Panjwani Center for Molecular Medicine and Drug Research (PCMD),
International Center for Chemical and Biological Sciences (ICCBS),
University of Karachi-75270, Karachi, Pakistan. UAN # (92-21) 111-222-292 Ext. (309)
Cell # +923013553051.
http://www.iccs.edu/

[gmx-users] tpo is removed after 10ns... help help plz

2016-03-15 Thread Mehreen Jan

Respected sir!
GROMACS version: 5.0.7
Force field: 43A1p
Respected sir, kindly provide me some guidance about TPO.
My TPO (phosphothreonine) breaks down when I generate a PDB file of my protein, while 
SEP and PTR remain attached. I am surprised: what happens to TPO, and why does it 
break down?
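As a quick first check (the directory name below is only illustrative; adjust it to wherever the 43A1p files are installed), one can verify that the force field actually defines TPO as a building block:

    # look for a [ TPO ] residue entry in the force field's .rtp files
    grep -l '\[ TPO \]' /path/to/gromos43a1p.ff/*.rtp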

thank you!
mehreen jan
 


[gmx-users] How Gromacs H_bond module compute hydrogen bonding correlation function practically and in detail?

2016-03-15 Thread 자연과학부
Dear all GROMACS users and developers,


I wrote my own code based on the definition introduced in the paper by Luzar, Chandler 
and van der Spoel and computed the correlation function so that it has an initial value 
of 1 and approaches 0 as time goes on. I plotted my correlation function together with 
the one the h_bond module computes (2.jpg: blue - my correlation, green - gromacs 
uncorrected, red - gromacs corrected), and I found that they have almost the same shape 
but totally different y values, except at t = 0. So it seems I am close to the result, 
but I am still missing something important.


What I want to know is how, practically, the GROMACS h_bond module computes the 
hydrogen bond correlation function from a trajectory. I have read the related papers 
many times, and I know the procedure: define an existence function and compute its 
correlation function. But I still have difficulty getting exactly the same correlation 
function as the one the h_bond module computes.


So, if anybody knows, in detail and in practice, how the GROMACS h_bond module computes 
the hydrogen bond correlation function, please let me know.


I have been struggling with this problem for more than 3 months and I am exhausted. 
Again, what I want is not a general description but the detailed, practical procedure 
that the GROMACS h_bond module actually follows.


Thank you for reading my question; I look forward to your reply!




[gmx-users] The cut-off length is longer than half the shortest box vector or longer than the smallest box diagonal element

2016-03-15 Thread Poncho Arvayo Zatarain
Hello, I am working on a lipid bilayer using the CHARMM36 force field and trying to do 
an NVT equilibration, but when I run grompp for NVT I receive the following error: "The 
cut-off length is longer than half the shortest box vector or longer than the smallest 
box diagonal element. Increase the box size or decrease rlist." I read about this in 
the GROMACS manual and decreased rlist, but the error is still there. The other 
solution is to increase the box size, but is it safe to do this? Or what else can I do? 
I can attach my nvt.mdp file if you want. Thanks.
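For reference, assuming the pre-equilibration coordinate file is called system.gro, the box vectors can be inspected and, if genuinely too small for the chosen cut-offs, enlarged (the sizes below are purely illustrative, and a membrane system would then need re-solvation):

    # the box vectors are the last line of the .gro file
    tail -n 1 system.gro

    # enlarge the box; the values here are only an example
    gmx editconf -f system.gro -o system_bigbox.gro -box 13 13 10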

Re: [gmx-users] gpu-performance reduces during FEP calculation

2016-03-15 Thread jagannath mondal
Hi
  It is a small ligand being decoupled from its native protein, and there are
orientational restraints near the highly decoupled region.
I am using the gcc and gfortran compilers.
Jagannath

On Tue, Mar 15, 2016 at 6:54 PM, Szilárd Páll 
wrote:

> No, the free energy kernels run on the CPU. This large change must mean
> that you have a relatively large fraction of the system participating in
> perturbed interactions.
>
> What compiler are you using? The free energy kernel is a bit "sensitive"
> (to put it mildly) and I do remember seeing much better performance with
> some compilers than others.
>
> --
> Szilárd
>
> On Tue, Mar 15, 2016 at 2:18 PM, jagannath mondal 
> wrote:
>
> > Dear Gromacs users
> >   I am trying to perform Free energy peturbation calculation in presence
> of
> > distance, angle and dihedral-restraint for a protein-ligand system.
> > However, I am finding, on turning on the FEP calculation, the performance
> > of gromacs5.1.1. in a gpu-based workstation significantly gets reduced.
> the
> > gpu/cpu ratio reduces from 1 to 0.234 on turning on FEP module. I was
> > wondering whether FEP-module is still not using gpu-based optimization.
> > Jagannath
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Problem with the mdrun_openmpi on cluster

2016-03-15 Thread James Starlight
Assuming that the command below produces what I am looking for:

-bash-4.1$ tail -n 3 eq_npt.log
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:   1978.319    102.192     21.487      1.117
Finished mdrun on node 0 Tue Mar 15 16:23:03 2016



what combination of shell commands would be useful to extract the value 21.487 from
that output and put it into a specified log? Then I would like to extract several such
values in a loop from 10 independent runs and calculate the average.
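A minimal sketch of one way to do this, assuming the runs live in directories run_1 ... run_10 and each log is named eq_npt.log (ns/day is the fourth field of the Performance line in this log format):

    # collect ns/day from 10 independent runs, then average
    for i in $(seq 1 10); do
        grep '^Performance:' run_${i}/eq_npt.log | awk '{print $4}'
    done > nsday.dat
    awk '{sum += $1} END {printf "average ns/day: %.3f\n", sum/NR}' nsday.dat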

2016-03-15 14:45 GMT+01:00 James Starlight :
> Right, thanks so much!
>
> 2016-03-15 13:58 GMT+01:00 Mark Abraham :
>> Hi,
>>
>>
>> On Tue, Mar 15, 2016 at 11:57 AM James Starlight 
>> wrote:
>>
>>> just performed some benchmarks with full atomic system- short md of
>>> the water soluble protein still using mpiexec -np 46 mdrun_openmpi of
>>> the GMX 4.5 and there were no such errors with DD so it seems that the
>>> problem indeed in MARTINI atoms representation.
>>>
>>
>> Not really. The bonded interactions have a longer physical range in a CG
>> model, and that limits the current implementation of domain decomposition.
>>
>>
>>> BTW how I could quickly check some info about performance of the
>>> simulation7 what logs should I expect7 If somebody has already done it
>>>
>>
>> Depends on your simulation and hardware, so nobody has anything that is
>> obviously comparable.
>>
>>
>>> I will be very thankful for some usefull combination of shell commands
>>> which will extract performance information from sim log.
>>>
>>
>> Start with tail -n 50 md.log ;-)
>>
>> Mark
>>
>> Thanks in advance!!
>>>
>>> J.
>>>
>>> 2016-03-14 18:27 GMT+01:00 Justin Lemkul :
>>> >
>>> >
>>> > On 3/14/16 1:26 PM, James Starlight wrote:
>>> >>
>>> >> For that system I have not defined virtual sites.
>>> >
>>> >
>>> > That disagrees with the error message, which explicitly complains about
>>> > vsites.
>>> >
>>> >> BTW the same simulation on local desctop using 2 cores from core2 duo
>>> runs
>>> >> OK =)
>>> >>
>>> >
>>> > Because you're not invoking DD there.
>>> >
>>> >> so one of the solution probably is to try to use more recent gmx 5.0
>>> >> to see what will happenes
>>> >>
>>> >
>>> > Good idea.
>>> >
>>> > -Justin
>>> >
>>> >
>>> >> 2016-03-14 18:22 GMT+01:00 Justin Lemkul :
>>> >>>
>>> >>>
>>> >>>
>>> >>> On 3/14/16 1:19 PM, James Starlight wrote:
>>> 
>>> 
>>>  I tried to increase size on the system providding much bigger bilayer
>>>  in the system
>>> 
>>>  for this task I obtained another error also relevant to DD
>>> 
>>>  Program g_mdrun_openmpi, VERSION 4.5.7
>>>  Source code file:
>>>  /builddir/build/BUILD/gromacs-4.5.7/src/mdlib/domdec_con.c, line: 693
>>> 
>>>  Fatal error:
>>>  DD cell 0 2 1 could only obtain 0 of the 1 atoms that are connected
>>>  via vsites from the neighboring cells. This probably means your vsite
>>>  lengths are too long compared to the domain decomposition cell size.
>>>  Decrease the number of domain decomposition grid cells.
>>>  For more information and tips for troubleshooting, please check the
>>>  GROMACS
>>>  website at http://www.gromacs.org/Documentation/Errors
>>>  ---
>>> 
>>>  "It's So Fast It's Slow" (F. Black)
>>> 
>>>  Error on node 9, will try to stop all the nodes
>>>  Halting parallel program g_mdrun_openmpi on CPU 9 out of 64
>>> 
>>> 
>>>  BTW I checked the bottom of the syste,.gro file and found the next
>>>  sizes which are seems too small for my syste, consisted for several
>>>  hundreds of lipid, arent it7
>>> 
>>>  15.0  15.0  15.0   0.0   0.0   0.0
>>>  0.0
>>>  0.0   0.0
>>> 
>>> >>>
>>> >>> No, that seems fine.  But if your box is set up wrong, that's your
>>> fault
>>> >>> from the command below :)
>>> >>>
>>> 
>>>  for my case that gro file was produced automatically using MARTINI
>>>  method
>>> 
>>>  ./insane.py -f test.pdb -o system.gro -p system.top -pbc cubic -box
>>>  15,15,15 -l DPPC:4 -l DOPC:3 -l CHOL:3 -salt 0.15 -center -sol W
>>> 
>>> 
>>>  Will be very thankful for any help!!
>>> 
>>> >>>
>>> >>> So you've got a system that is a CG model, with virtual sites?  That's
>>> >>> going
>>> >>> to create all kinds of havoc.  Please do try Googling your error,
>>> because
>>> >>> this difficulty has come up before specifically in the case of CG
>>> >>> systems,
>>> >>> which have longer-than-normal bonded interactions and requires some
>>> mdrun
>>> >>> tuning.
>>> >>>
>>> >>> -Justin
>>> >>>
>>> >>> --
>>> >>> ==
>>> >>>
>>> >>> Justin A. Lemkul, Ph.D.
>>> >>> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>>> >>>
>>> >>> Department of 

Re: [gmx-users] Problem with the mdrun_openmpi on cluster

2016-03-15 Thread James Starlight
Right, thanks so much!

2016-03-15 13:58 GMT+01:00 Mark Abraham :
> Hi,
>
>
> On Tue, Mar 15, 2016 at 11:57 AM James Starlight 
> wrote:
>
>> just performed some benchmarks with full atomic system- short md of
>> the water soluble protein still using mpiexec -np 46 mdrun_openmpi of
>> the GMX 4.5 and there were no such errors with DD so it seems that the
>> problem indeed in MARTINI atoms representation.
>>
>
> Not really. The bonded interactions have a longer physical range in a CG
> model, and that limits the current implementation of domain decomposition.
>
>
>> BTW how I could quickly check some info about performance of the
>> simulation7 what logs should I expect7 If somebody has already done it
>>
>
> Depends on your simulation and hardware, so nobody has anything that is
> obviously comparable.
>
>
>> I will be very thankful for some usefull combination of shell commands
>> which will extract performance information from sim log.
>>
>
> Start with tail -n 50 md.log ;-)
>
> Mark
>
> Thanks in advance!!
>>
>> J.
>>
>> 2016-03-14 18:27 GMT+01:00 Justin Lemkul :
>> >
>> >
>> > On 3/14/16 1:26 PM, James Starlight wrote:
>> >>
>> >> For that system I have not defined virtual sites.
>> >
>> >
>> > That disagrees with the error message, which explicitly complains about
>> > vsites.
>> >
>> >> BTW the same simulation on local desctop using 2 cores from core2 duo
>> runs
>> >> OK =)
>> >>
>> >
>> > Because you're not invoking DD there.
>> >
>> >> so one of the solution probably is to try to use more recent gmx 5.0
>> >> to see what will happenes
>> >>
>> >
>> > Good idea.
>> >
>> > -Justin
>> >
>> >
>> >> 2016-03-14 18:22 GMT+01:00 Justin Lemkul :
>> >>>
>> >>>
>> >>>
>> >>> On 3/14/16 1:19 PM, James Starlight wrote:
>> 
>> 
>>  I tried to increase size on the system providding much bigger bilayer
>>  in the system
>> 
>>  for this task I obtained another error also relevant to DD
>> 
>>  Program g_mdrun_openmpi, VERSION 4.5.7
>>  Source code file:
>>  /builddir/build/BUILD/gromacs-4.5.7/src/mdlib/domdec_con.c, line: 693
>> 
>>  Fatal error:
>>  DD cell 0 2 1 could only obtain 0 of the 1 atoms that are connected
>>  via vsites from the neighboring cells. This probably means your vsite
>>  lengths are too long compared to the domain decomposition cell size.
>>  Decrease the number of domain decomposition grid cells.
>>  For more information and tips for troubleshooting, please check the
>>  GROMACS
>>  website at http://www.gromacs.org/Documentation/Errors
>>  ---
>> 
>>  "It's So Fast It's Slow" (F. Black)
>> 
>>  Error on node 9, will try to stop all the nodes
>>  Halting parallel program g_mdrun_openmpi on CPU 9 out of 64
>> 
>> 
>>  BTW I checked the bottom of the syste,.gro file and found the next
>>  sizes which are seems too small for my syste, consisted for several
>>  hundreds of lipid, arent it7
>> 
>>  15.0  15.0  15.0   0.0   0.0   0.0
>>  0.0
>>  0.0   0.0
>> 
>> >>>
>> >>> No, that seems fine.  But if your box is set up wrong, that's your
>> fault
>> >>> from the command below :)
>> >>>
>> 
>>  for my case that gro file was produced automatically using MARTINI
>>  method
>> 
>>  ./insane.py -f test.pdb -o system.gro -p system.top -pbc cubic -box
>>  15,15,15 -l DPPC:4 -l DOPC:3 -l CHOL:3 -salt 0.15 -center -sol W
>> 
>> 
>>  Will be very thankful for any help!!
>> 
>> >>>
>> >>> So you've got a system that is a CG model, with virtual sites?  That's
>> >>> going
>> >>> to create all kinds of havoc.  Please do try Googling your error,
>> because
>> >>> this difficulty has come up before specifically in the case of CG
>> >>> systems,
>> >>> which have longer-than-normal bonded interactions and requires some
>> mdrun
>> >>> tuning.
>> >>>
>> >>> -Justin
>> >>>
>> >>> --
>> >>> ==
>> >>>
>> >>> Justin A. Lemkul, Ph.D.
>> >>> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>> >>>
>> >>> Department of Pharmaceutical Sciences
>> >>> School of Pharmacy
>> >>> Health Sciences Facility II, Room 629
>> >>> University of Maryland, Baltimore
>> >>> 20 Penn St.
>> >>> Baltimore, MD 21201
>> >>>
>> >>> jalem...@outerbanks.umaryland.edu | (410) 706-7441
>> >>> http://mackerell.umaryland.edu/~jalemkul
>> >>>
>> >>> ==
>> >>>
>> >>> --
>> >>> Gromacs Users mailing list
>> >>>
>> >>> * Please search the archive at
>> >>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> >>> posting!
>> >>>
>> >>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> >>>
>> >>> * For (un)subscribe requests visit
>> >>> 

Re: [gmx-users] gpu-performance reduces during FEP calculation

2016-03-15 Thread Szilárd Páll
No, the free energy kernels run on the CPU. This large change must mean
that you have a relatively large fraction of the system participating in
perturbed interactions.

What compiler are you using? The free energy kernel is a bit "sensitive"
(to put it mildly) and I do remember seeing much better performance with
some compilers than others.

--
Szilárd

On Tue, Mar 15, 2016 at 2:18 PM, jagannath mondal 
wrote:

> Dear Gromacs users
>   I am trying to perform Free energy peturbation calculation in presence of
> distance, angle and dihedral-restraint for a protein-ligand system.
> However, I am finding, on turning on the FEP calculation, the performance
> of gromacs5.1.1. in a gpu-based workstation significantly gets reduced. the
> gpu/cpu ratio reduces from 1 to 0.234 on turning on FEP module. I was
> wondering whether FEP-module is still not using gpu-based optimization.
> Jagannath
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>

[gmx-users] gpu-performance reduces during FEP calculation

2016-03-15 Thread jagannath mondal
Dear Gromacs users,
  I am trying to perform a free energy perturbation (FEP) calculation in the presence of
distance, angle and dihedral restraints for a protein-ligand system.
However, I am finding that on turning on the FEP calculation, the performance of
GROMACS 5.1.1 on a GPU-based workstation is significantly reduced: the GPU/CPU load
ratio drops from 1 to 0.234 when the FEP module is turned on. I was wondering whether
the FEP module is still not using GPU-based optimization.
Jagannath


Re: [gmx-users] Problem with the mdrun_openmpi on cluster

2016-03-15 Thread Mark Abraham
Hi,


On Tue, Mar 15, 2016 at 11:57 AM James Starlight 
wrote:

> just performed some benchmarks with full atomic system- short md of
> the water soluble protein still using mpiexec -np 46 mdrun_openmpi of
> the GMX 4.5 and there were no such errors with DD so it seems that the
> problem indeed in MARTINI atoms representation.
>

Not really. The bonded interactions have a longer physical range in a CG
model, and that limits the current implementation of domain decomposition.
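For readers hitting the same error: mdrun also has a -rdd option that sets the distance reserved for bonded interactions under domain decomposition, which is sometimes raised for coarse-grained systems; the value and command line below are only illustrative, not a recommendation from this thread:

    mpiexec -np 46 mdrun_openmpi -deffnm eq_npt -rdd 1.4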


> BTW how I could quickly check some info about performance of the
> simulation7 what logs should I expect7 If somebody has already done it
>

Depends on your simulation and hardware, so nobody has anything that is
obviously comparable.


> I will be very thankful for some usefull combination of shell commands
> which will extract performance information from sim log.
>

Start with tail -n 50 md.log ;-)

Mark

Thanks in advance!!
>
> J.
>
> 2016-03-14 18:27 GMT+01:00 Justin Lemkul :
> >
> >
> > On 3/14/16 1:26 PM, James Starlight wrote:
> >>
> >> For that system I have not defined virtual sites.
> >
> >
> > That disagrees with the error message, which explicitly complains about
> > vsites.
> >
> >> BTW the same simulation on local desctop using 2 cores from core2 duo
> runs
> >> OK =)
> >>
> >
> > Because you're not invoking DD there.
> >
> >> so one of the solution probably is to try to use more recent gmx 5.0
> >> to see what will happenes
> >>
> >
> > Good idea.
> >
> > -Justin
> >
> >
> >> 2016-03-14 18:22 GMT+01:00 Justin Lemkul :
> >>>
> >>>
> >>>
> >>> On 3/14/16 1:19 PM, James Starlight wrote:
> 
> 
>  I tried to increase size on the system providding much bigger bilayer
>  in the system
> 
>  for this task I obtained another error also relevant to DD
> 
>  Program g_mdrun_openmpi, VERSION 4.5.7
>  Source code file:
>  /builddir/build/BUILD/gromacs-4.5.7/src/mdlib/domdec_con.c, line: 693
> 
>  Fatal error:
>  DD cell 0 2 1 could only obtain 0 of the 1 atoms that are connected
>  via vsites from the neighboring cells. This probably means your vsite
>  lengths are too long compared to the domain decomposition cell size.
>  Decrease the number of domain decomposition grid cells.
>  For more information and tips for troubleshooting, please check the
>  GROMACS
>  website at http://www.gromacs.org/Documentation/Errors
>  ---
> 
>  "It's So Fast It's Slow" (F. Black)
> 
>  Error on node 9, will try to stop all the nodes
>  Halting parallel program g_mdrun_openmpi on CPU 9 out of 64
> 
> 
>  BTW I checked the bottom of the syste,.gro file and found the next
>  sizes which are seems too small for my syste, consisted for several
>  hundreds of lipid, arent it7
> 
>  15.0  15.0  15.0   0.0   0.0   0.0
>  0.0
>  0.0   0.0
> 
> >>>
> >>> No, that seems fine.  But if your box is set up wrong, that's your
> fault
> >>> from the command below :)
> >>>
> 
>  for my case that gro file was produced automatically using MARTINI
>  method
> 
>  ./insane.py -f test.pdb -o system.gro -p system.top -pbc cubic -box
>  15,15,15 -l DPPC:4 -l DOPC:3 -l CHOL:3 -salt 0.15 -center -sol W
> 
> 
>  Will be very thankful for any help!!
> 
> >>>
> >>> So you've got a system that is a CG model, with virtual sites?  That's
> >>> going
> >>> to create all kinds of havoc.  Please do try Googling your error,
> because
> >>> this difficulty has come up before specifically in the case of CG
> >>> systems,
> >>> which have longer-than-normal bonded interactions and requires some
> mdrun
> >>> tuning.
> >>>
> >>> -Justin
> >>>
> >>> --
> >>> ==
> >>>
> >>> Justin A. Lemkul, Ph.D.
> >>> Ruth L. Kirschstein NRSA Postdoctoral Fellow
> >>>
> >>> Department of Pharmaceutical Sciences
> >>> School of Pharmacy
> >>> Health Sciences Facility II, Room 629
> >>> University of Maryland, Baltimore
> >>> 20 Penn St.
> >>> Baltimore, MD 21201
> >>>
> >>> jalem...@outerbanks.umaryland.edu | (410) 706-7441
> >>> http://mackerell.umaryland.edu/~jalemkul
> >>>
> >>> ==
> >>>
> >>> --
> >>> Gromacs Users mailing list
> >>>
> >>> * Please search the archive at
> >>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> >>> posting!
> >>>
> >>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>>
> >>> * For (un)subscribe requests visit
> >>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> >>> send a
> >>> mail to gmx-users-requ...@gromacs.org.
> >
> >
> > --
> > ==
> >
> > Justin A. Lemkul, Ph.D.
> > Ruth L. Kirschstein NRSA Postdoctoral Fellow
> >
> > Department of 

Re: [gmx-users] Fourier dihedral potential: typo or correct?

2016-03-15 Thread Mark Abraham
Hi,

Yes, that's a typo. I have fixed it for future versions.

Mark
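For readers of the archive, the intended form of the equation, reconstructed from the question below and the standard Fourier dihedral definition, appears to be

    V_F(\phi) = \tfrac{1}{2}\left[ C_1(1+\cos\phi) + C_2(1-\cos 2\phi) + C_3(1+\cos 3\phi) + C_4(1-\cos 4\phi) \right]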

On Tue, Mar 15, 2016 at 1:49 AM Parvez Mh  wrote:

> Dear all:
>
> In manual-5.06, page:82,
>
>
> [inline image: the Fourier dihedral potential equation from manual 5.0.6, p. 82]
> Shouldn't the last term be C4(1-cos(4Phi)) ?
>
> --Masrul
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.


[gmx-users] QMmethod

2016-03-15 Thread Roman Zeiss
Hi,

 

a short question regarding the mdp option "QMmethod" when using ORCA for QM/MM simulations. If I understand the source code of the ORCA interface (gromacs/mdlib/qm_orca.cpp) correctly, it looks like GROMACS does not tell ORCA which QM method to use.

Does the QMmethod mdp option change anything when using ORCA, or do I have to define the method separately in my .ORCAINFO file?

 

Thank you very much and best regards,

Roman

Re: [gmx-users] Problem with the mdrun_openmpi on cluster

2016-03-15 Thread James Starlight
I just performed some benchmarks with a fully atomistic system (a short MD run of a
water-soluble protein), still using mpiexec -np 46 mdrun_openmpi with GMX 4.5, and
there were no such DD errors, so it seems that the problem is indeed in the MARTINI
(coarse-grained) representation.

BTW, how can I quickly check some information about the performance of the simulation?
Which logs should I look at? If somebody has already done this,
I would be very thankful for a useful combination of shell commands
that extracts performance information from the simulation log.

Thanks in advance!!

J.

2016-03-14 18:27 GMT+01:00 Justin Lemkul :
>
>
> On 3/14/16 1:26 PM, James Starlight wrote:
>>
>> For that system I have not defined virtual sites.
>
>
> That disagrees with the error message, which explicitly complains about
> vsites.
>
>> BTW the same simulation on local desctop using 2 cores from core2 duo runs
>> OK =)
>>
>
> Because you're not invoking DD there.
>
>> so one of the solution probably is to try to use more recent gmx 5.0
>> to see what will happenes
>>
>
> Good idea.
>
> -Justin
>
>
>> 2016-03-14 18:22 GMT+01:00 Justin Lemkul :
>>>
>>>
>>>
>>> On 3/14/16 1:19 PM, James Starlight wrote:


 I tried to increase size on the system providding much bigger bilayer
 in the system

 for this task I obtained another error also relevant to DD

 Program g_mdrun_openmpi, VERSION 4.5.7
 Source code file:
 /builddir/build/BUILD/gromacs-4.5.7/src/mdlib/domdec_con.c, line: 693

 Fatal error:
 DD cell 0 2 1 could only obtain 0 of the 1 atoms that are connected
 via vsites from the neighboring cells. This probably means your vsite
 lengths are too long compared to the domain decomposition cell size.
 Decrease the number of domain decomposition grid cells.
 For more information and tips for troubleshooting, please check the
 GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---

 "It's So Fast It's Slow" (F. Black)

 Error on node 9, will try to stop all the nodes
 Halting parallel program g_mdrun_openmpi on CPU 9 out of 64


 BTW I checked the bottom of the syste,.gro file and found the next
 sizes which are seems too small for my syste, consisted for several
 hundreds of lipid, arent it7

 15.0  15.0  15.0   0.0   0.0   0.0   0.0
 0.0   0.0

>>>
>>> No, that seems fine.  But if your box is set up wrong, that's your fault
>>> from the command below :)
>>>

 for my case that gro file was produced automatically using MARTINI
 method

 ./insane.py -f test.pdb -o system.gro -p system.top -pbc cubic -box
 15,15,15 -l DPPC:4 -l DOPC:3 -l CHOL:3 -salt 0.15 -center -sol W


 Will be very thankful for any help!!

>>>
>>> So you've got a system that is a CG model, with virtual sites?  That's
>>> going
>>> to create all kinds of havoc.  Please do try Googling your error, because
>>> this difficulty has come up before specifically in the case of CG
>>> systems,
>>> which have longer-than-normal bonded interactions and requires some mdrun
>>> tuning.
>>>
>>> -Justin
>>>
>>> --
>>> ==
>>>
>>> Justin A. Lemkul, Ph.D.
>>> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>>>
>>> Department of Pharmaceutical Sciences
>>> School of Pharmacy
>>> Health Sciences Facility II, Room 629
>>> University of Maryland, Baltimore
>>> 20 Penn St.
>>> Baltimore, MD 21201
>>>
>>> jalem...@outerbanks.umaryland.edu | (410) 706-7441
>>> http://mackerell.umaryland.edu/~jalemkul
>>>
>>> ==
>>>
>>> --
>>> Gromacs Users mailing list
>>>
>>> * Please search the archive at
>>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>>> posting!
>>>
>>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>>
>>> * For (un)subscribe requests visit
>>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>>> send a
>>> mail to gmx-users-requ...@gromacs.org.
>
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 629
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalem...@outerbanks.umaryland.edu | (410) 706-7441
> http://mackerell.umaryland.edu/~jalemkul
>
> ==
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a
> mail to 

Re: [gmx-users] How Gromacs h_bond module compute hydrogen correlation function ?

2016-03-15 Thread Erik Marklund
Hi again,

I had a look at your post at ResearchGate, where you ask the same question and 
provide both source code and output from your calculation. Two things differ between 
gromacs’ calculation and yours. First of all, gromacs normalises the 
correlation function so that it is 1 at t=0. This is common practice and allows 
for a probability interpretation of the ACF. Secondly, you may have noticed 
that the gromacs output contains several datasets, one of which is the ACF with 
a correction that compensates for finite size effects, and one "raw” ACF that 
resembles yours, apart from said normalisation. Note that the ACF will not 
approach zero in a periodic system, since pairs that would diffuse apart in a 
large system will easily find each other again because the diffusion wraps 
around the pbc. Hope that helps.
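As a cross-check, assuming the existence functions are stored as columns of a file called existence.xvg, the generic analysis tool can autocorrelate them and, by default, applies the same normalisation to 1 at t = 0:

    gmx analyze -f existence.xvg -ac acf.xvg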

Kind regards,
Erik

> On 14 Mar 2016, at 10:22, Erik Marklund  wrote:
> 
> Hi,
> 
> Gromacs uses FFTs to calculate C(t), exploiting that convolutions (such as an 
> autocorrelation) turns into simple muliplications in Fourier space. If you 
> are interested in the details, have a look at gmx_hbond.c. In the function 
> do_hbac(), look for the last call to low_do_autocorr() and the code around 
> it. It should be under “case AC_LUZAR:”. There may be other calls to 
> low_do_autocorr(), depending on your version, but they concern other kinetic 
> models and alternative bond definitions that aren’t fully supported.
> 
> Kind regards,
> Erik
> 
>> On 12 Mar 2016, at 07:13, 백호용 (자연과학부)  wrote:
>> 
>> Dear gromacs users and developers, I have a question about the methodology 
>> that gromacs h_bond module compute H_bond correlation function, C(t) from 
>> trajectory.
>> 
>> I want to compute hydrogen bond life time between a carbonyl oxygen of 
>> single Etoac and water. For example, I made a system which contain a single 
>> Etoac molecules and about 2000 water molecules. then I used h_bond module to 
>> get hbmap.xpm and hbond.ndx and I analyzed those two so that I can get 
>> existence function of each carbonyl oxygen - water pair. In detail, during 
>> 500 ps simulation, 23 water molecules formed hydrogen bond with carbonyl 
>> oxygen of Etoac at least one time. So, I computed 23 existence functions, 
>> each having 500 ps length. Then I use those 23 set of existence function to 
>> compute h_bond correlation function.
>> 
>> I'll upload Matlab code (corr_031116.m )with which I computed correlation 
>> function and please tell me what the problem is..
>> 
>> So, What I want to know is that How Gromacs practically compute Correlation 
>> function, in detail..
>> 
>> Thank you for reading my email and Thank you for your reply in advance..
>> 
>> -- 
>> Gromacs Users mailing list
>> 
>> * Please search the archive at 
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>> 
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> 
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
>> mail to gmx-users-requ...@gromacs.org.
> 
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Trajectory correction for pbc before H-bond analysis

2016-03-15 Thread Erik Marklund
Dear Agnivo,

No. g_hbond / gmx hbond takes PBC into account, so you normally don't need to 
bother with preprocessing your trajectory.

Kind regards,
Erik
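For example, assuming trajectory and run-input files named md.xtc and md.tpr, the analysis can be run directly on the raw trajectory:

    gmx hbond -f md.xtc -s md.tpr -num hbnum.xvg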

> On 15 Mar 2016, at 01:09, Agnivo Gosai  wrote:
> 
> Dear Users,
> 
> I always do pbc correction for the trajectory before doing RMSD, Radius of
> Gyration and COM separation analysis for my protein-ligand complex.
> 
> Is pbc correction recommended before doing H-bond analysis ? I have 20 long
> simulations and I need to do a quick H-bond analysis for them. So, I was
> thinking if I could skip that step for now.
> 
> 
> Thanks & Regards
> Agnivo Gosai
> Grad Student, Iowa State University.
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Query regarding energy minimization step (step 12: Water molecule starting at atom 13529 can not be settled.)

2016-03-15 Thread shrikant kaushik
Dear all gromacs users,
 I am trying to run a simulation of my phosphorylated threonine
(TPO at position 172 in chain A of PDB ID 4CFF); I generated the topology using the
PRODRG2 server. When I try to run the energy minimization step, it shows the following
error:
1. step 12: Water molecule starting at atom 13529 can not be settled.
2. Check for bad contacts and/or reduce the timestep if appropriate.
Currently I am using GROMACS 5.1.2 and the GROMOS 54A7 force field.
 4cff_tpo topology problem

I have attached the files I used in the simulation.

SHRI KANT
M Tech (Computational Biology)
Centre for Biotechnology
Anna University, Chennai
600025

On Sat, Mar 12, 2016 at 7:23 PM, shrikant kaushik <
shrikant92pharm...@gmail.com> wrote:

> Thank you!
>
> SHRI KANT
> M Tech (Computational Biology)
> Centre for Biotechnology
> Anna University, Chennai
> 600025
>
> On Fri, Mar 11, 2016 at 6:24 PM, Justin Lemkul  wrote:
>
>>
>>
>> On 3/11/16 2:50 AM, shrikant kaushik wrote:
>>
>>> Dear all gromacs users,
>>>[shrikant@Ares Simulation]$ gmx-5.1.2
>>> mdrun -v -deffnm em
>>> :-) GROMACS - gmx mdrun, VERSION 5.1.2 (-:
>>>
>>>  GROMACS is written by:
>>>   Emile Apol  Rossen Apostolov  Herman J.C. BerendsenPar
>>> Bjelkmar
>>>   Aldert van Buuren   Rudi van Drunen Anton Feenstra   Sebastian
>>> Fritsch
>>>Gerrit Groenhof   Christoph Junghans   Anca HamuraruVincent
>>> Hindriksen
>>>   Dimitrios KarkoulisPeter KassonJiri Kraus  Carsten
>>> Kutzner
>>>  Per Larsson  Justin A. Lemkul   Magnus Lundborg   Pieter
>>> Meulenhoff
>>> Erik Marklund  Teemu Murtola   Szilard Pall   Sander
>>> Pronk
>>> Roland Schulz Alexey Shvetsov Michael Shirts Alfons
>>> Sijbers
>>> Peter TielemanTeemu Virolainen  Christian WennbergMaarten
>>> Wolf
>>> and the project leaders:
>>>  Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel
>>>
>>> Copyright (c) 1991-2000, University of Groningen, The Netherlands.
>>> Copyright (c) 2001-2015, The GROMACS development team at
>>> Uppsala University, Stockholm University and
>>> the Royal Institute of Technology, Sweden.
>>> check out http://www.gromacs.org for more information.
>>>
>>> GROMACS is free software; you can redistribute it and/or modify it
>>> under the terms of the GNU Lesser General Public License
>>> as published by the Free Software Foundation; either version 2.1
>>> of the License, or (at your option) any later version.
>>>
>>> GROMACS:  gmx mdrun, VERSION 5.1.2
>>> Executable:   /usr/local/gromacs-5.1.2/bin/gmx-5.1.2
>>> Data prefix:  /usr/local/gromacs-5.1.2
>>> Command line:
>>>gmx-5.1.2 mdrun -v -deffnm em
>>>
>>>
>>> Running on 1 node with total 4 cores, 4 logical cores
>>> Hardware detected:
>>>CPU info:
>>>  Vendor: GenuineIntel
>>>  Brand:  Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
>>>  SIMD instructions most likely to fit this hardware: AVX_256
>>>  SIMD instructions selected at GROMACS compile time: AVX_256
>>>
>>> Reading file em.tpr, VERSION 5.1.2 (single precision)
>>> Using 1 MPI thread
>>> Using 4 OpenMP threads
>>>
>>>
>>> Steepest Descents:
>>> Tolerance (Fmax)   =  1.0e+03
>>> Number of steps=5
>>> Step=0, Dmax= 1.0e-02 nm, Epot=  1.64486e+07 Fmax= 3.27208e+07, atom=
>>> 4176
>>> Step=1, Dmax= 1.0e-02 nm, Epot=  1.55865e+07 Fmax= 3.13386e+07, atom=
>>> 4176
>>> Step=2, Dmax= 1.2e-02 nm, Epot=  1.46513e+07 Fmax= 2.97319e+07, atom=
>>> 4176
>>> Step=3, Dmax= 1.4e-02 nm, Epot=  1.36016e+07 Fmax= 2.78772e+07, atom=
>>> 4176
>>> Step=4, Dmax= 1.7e-02 nm, Epot=  1.24230e+07 Fmax= 2.57543e+07, atom=
>>> 4176
>>> Step=5, Dmax= 2.1e-02 nm, Epot=  1.3e+07 Fmax= 2.33503e+07, atom=
>>> 4176
>>> Step=6, Dmax= 2.5e-02 nm, Epot=  9.67180e+06 Fmax= 2.06643e+07, atom=
>>> 4176
>>> Step=7, Dmax= 3.0e-02 nm, Epot=  8.12309e+06 Fmax= 1.77136e+07, atom=
>>> 4176
>>> Step=8, Dmax= 3.6e-02 nm, Epot=  6.50058e+06 Fmax= 1.45408e+07, atom=
>>> 4176
>>> Step=9, Dmax= 4.3e-02 nm, Epot=  4.86276e+06 Fmax= 1.12217e+07, atom=
>>> 4176
>>> Step=   10, Dmax= 5.2e-02 nm, Epot=  3.30460e+06 Fmax= 7.87434e+06, atom=
>>> 4176
>>> Step=   11, Dmax= 6.2e-02 nm, Epot=  2.04717e+06 Fmax= 9.25255e+06, atom=
>>> 75560
>>> Wrote pdb files with previous and current coordinates
>>>
>>> ---
>>> Program gmx mdrun, VERSION 5.1.2
>>> Source code file:
>>> /usr/local/src/gromacs-5.1.2/src/gromacs/mdlib/constr.cpp, line: 555
>>>
>>> Fatal error:
>>>
>>> step 12: Water molecule starting at atom 13529 can not be settled.
>>> Check for bad contacts and/or reduce the timestep if appropriate.
>>>
>>> For more information and tips for