[gmx-users] Barostat relaxation times

2016-09-22 Thread Hartling, Kathryn

Dear Gromacs users, 

 

Does anyone have any advice on how to choose an appropriate barostat relaxation 
time? 

 

I'm quite new to molecular dynamics, and I am running some simple simulations 
of a box of water using a Nose-Hoover thermostat, a Parrinello-Rahman barostat, 
and several different flexible water models. I've been looking online and in 
the literature for some guidance on how to appropriately choose the barostat 
relaxation time, but what I've found so far is somewhat conflicting (partially 
due to wide differences in applications). Does anyone have advice on choosing a 
barostat relaxation time for a flexible water system, or perhaps an idea of a 
reference on the subject?
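For reference, a commonly used starting point for Parrinello-Rahman pressure
coupling of a water box looks something like the following (these values are
typical choices from the literature and the GROMACS documentation, not a
definitive recommendation for any particular water model):

pcoupl           = Parrinello-Rahman
pcoupltype       = isotropic
tau_p            = 5.0       ; ps; values of roughly 2-10 ps are common
ref_p            = 1.0       ; bar
compressibility  = 4.5e-5    ; bar^-1, isothermal compressibility of water

The main hard constraint is that tau_p must be much larger than the pressure
coupling interval nstpcouple*dt (grompp warns when it is not); beyond that, a
relaxation time that is too short tends to produce box-volume oscillations
with Parrinello-Rahman, so erring on the longer side is usually safer.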

 

Also, what is the minimum amount of time that a system like this should be 
equilibrated before performing a production run?

 

Thanks, 

 

Katy

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] about CHARMM force field in GROMACS

2016-09-22 Thread Justin Lemkul



On 9/22/16 1:56 PM, jing liang wrote:

Hi,

on the GROMACS web site there is a recommendation for the set of parameters to
use with CHARMM27:

constraints = h-bonds
cutoff-scheme = Verlet
vdwtype = cutoff
vdw-modifier = force-switch
rlist = 1.2
rvdw = 1.2
rvdw-switch = 1.0
coulombtype = PME
rcoulomb = 1.2
DispCorr = no

Is there some issue with the simulations if one uses different values for
cutoff distances?



Cutoffs are part of the force field.  If you start adjusting them, you can throw 
things into imbalance and turn the simulation into garbage.  With PME, the value 
of rcoulomb becomes somewhat irrelevant, though.  Verlet will adjust rlist as 
needed to buffer the neighbor list, in a manner comparable to what we do in 
CHARMM with an explicit vdW cutoff and neighbor list cutoff.
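As a side note on that last point, the buffering is normally controlled by a
single tolerance rather than by rlist itself; a minimal sketch (the value shown
is the documented default, included only for illustration):

cutoff-scheme           = Verlet
verlet-buffer-tolerance = 0.005   ; kJ/mol/ps per atom

With this scheme the rlist = 1.2 in the recommendation above is effectively
overridden: grompp/mdrun set the pair-list radius (never below the longest
cut-off) as needed to meet the tolerance, which is the adjustment described
above.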


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] about CHARMM force field in GROMACS

2016-09-22 Thread jing liang
Hi,

on the GROMACS web site there is a recommendation for the set of parameters to
use with CHARMM27:

constraints = h-bonds
cutoff-scheme = Verlet
vdwtype = cutoff
vdw-modifier = force-switch
rlist = 1.2
rvdw = 1.2
rvdw-switch = 1.0
coulombtype = PME
rcoulomb = 1.2
DispCorr = no

Is there some issue with the simulations if one uses different values for
cutoff distances?

thanks.


Re: [gmx-users] Running Gromacs in parallel

2016-09-22 Thread jkrieger
Thanks again

> On Wed, Sep 21, 2016 at 9:55 PM,   wrote:
>> Thanks Sz.
>>
>> Do you think going up from version 5.0.4 to 5.1.4 would really make
>> such a big difference?
>
> Note that I was recommending using a modern compiler + the latest
> release (which is called 2016 not 5.1.4!). It's hard to guess the
> improvements, but from 5.0->2016 you should see double-digit
> percentage improvements and going from gcc 4.4 to 5.x or 6.0 will also
> have a significant improvement.

I was still thinking of 2016 as too new to be used for simulations I might
want to publish. I will try it when I can then.

>
>> Here is a log file from a single md run (that has finished unlike the
>> metadynamics) with the number of OpenMP threads matching how many
>> threads
>> there are on each node. This has been restarted a number of times with
>> different launch configurations being mostly the number of nodes and the
>> node type (either 8 CPUs or 24 CPUs).
>> https://www.dropbox.com/s/uxzsj3pm31n66nz/md.log?dl=0
>
> You seem to be using a single MPI rank per node in these runs. That
> will almost never be optimal, especially not when DD is not limited.

Yes, I only realised that recently, and I thought it might be useful to see
this log since it is a complete run and has the performance summary at the
bottom.
Here is a multiple walker metadynamics log, includes some other
combinations I tried.

https://www.dropbox.com/s/td7ps45dzz1otwz/from_cluster_metad0.log?dl=0

>
>> From the timesteps when checkpoints were written I can see that these
>> configurations make quite a difference and, per CPU, having 8 OpenMP
>> threads per MPI process becomes a much worse idea stepping from 4 nodes to
>> 6 nodes, i.e. having more CPUs makes mixed parallelism less favourable, as
>> suggested in figure 8. Yes, the best may not lie at 1 OpenMP thread per
>> MPI rank and may vary depending on the number of CPUs as well.
>
> Sure, but 8 threads spanning two sockets will definitely be
> suboptimal. Start with trying fewer and consider using separate PME
> ranks especially if you have ethernet.

ok

>
>> Also, I can see that for the same number of CPUs, the 24-thread nodes are
>> better than the 8-thread nodes, but I can't get so many of them as they are
>> also more popular with RELION users.
>
> FYI those are 2x6-core CPUs with Hyperthreading, so 2x12 hardware
> threads. They are also two generations newer, so it's not surprising that
> they are much faster. Still, 24 threads/node is too much. Use fewer.
>
>> What can I infer from the information at the
>> end?
>
> Before starting to interpret that, it's worth fixing the above issues ;)
> Otherwise, what's clear is that PME is taking a considerable amount of
> time, especially given the long cut-off.
>
> Cheers,
> --
> Szilárd
>
>
>>
>> Best wishes
>> James
>>
>>> Hi,
>>>
>>> On Wed, Sep 21, 2016 at 5:44 PM,   wrote:
 Hi Szilárd,

 Yes I had looked at it but not with our cluster in mind. I now have a
 couple of GPU systems (both have an 8-core i7-4790K CPU with one Titan
 X
 GPU on one system and two Titan X GPUs on the other), and have been
 thinking about getting the most out of them. I listened to
 Carsten's
 BioExcel webinar this morning and it got me thinking about the cluster
 as
 well. I've just had a quick look now and it suggests Nrank = Nc and
 Nth
 =
 1 for high core count, which I think worked slightly less well for me
 but
 I can't find the details so I may be remembering wrong.
>>>
>>> That's not unexpected, the reported values are specific to the
>>> hardware and benchmark systems and only give a rough idea where the
>>> ranks/threads balance should be.

 I don't have log files from a systematic benchmark of our cluster as
 it
 isn't really available enough for doing that.
>>>
>>> That's not really necessary, even logs from a single production run
>>> can hint possible improvements.
>>>
 I haven't tried gmx tune_pme
 on there either. I do have node-specific installations of
 gromacs-5.0.4
 but I think they were done with gcc-4.4.7 so there's room for
 improvement
 there.
>>>
>>> If that's the case, I'd simply recommend using a modern compiler and
>>> if you can a recent GROMACS version, you'll gain more performance than
>>> from most launch config tuning.
>>>
 The cluster nodes I have been using have the following cpu specs
 and 10Gb networking. It could be that using 2 OpenMP threads per MPI
 rank
 works nicely because it matches the CPU configuration and makes better
 use
 of hyperthreading.
>>>
>>> Or because of the network. Or for some other reason. Again, comparing
>>> the runs' log files could tell more :)
>>>
 Architecture:  x86_64
 CPU op-mode(s):32-bit, 64-bit
 Byte Order:Little Endian
 CPU(s):8
 On-line CPU(s) list:   0-7
 

[gmx-users] Chloroform Density problem

2016-09-22 Thread Surahit Chewle
Dear Users,

I am trying to solvate a small organic molecule (8 molecules of 20 atoms
each = 160 atoms) in chloroform using genbox.

According to the box volume created by editconf (a truncated octahedron of
6 nm, i.e. about 166.28 nm^3), the box should hold roughly 1200 molecules of
chloroform (not corrected for the 160 solute atoms already present in the
system).

After playing around with the highlighted numbers (the CRYST1 box dimensions)
in the solvent PDB file, I reached the right density, but the number of
molecules is far too high for the given volume.
Can someone advise on the correct way to get both the right density and the
right number of solvent molecules for a box of this size?
I also tried adding the solvent molecules without the solute in the box, but
the density and molecule count are still inconsistent in that case.

TITLE     chloroform
REMARK    THIS IS A SIMULATION BOX
CRYST1   25.500   25.500   25.500  90.00  90.00  90.00 P 1           1
MODEL        1
ATOM      1  CL1 SOL     1      45.150  33.290  30.640  1.00  0.00
ATOM      2  C1  SOL     1      44.870  33.620  28.910  1.00  0.00
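For a sanity check, the expected count can be estimated from the bulk density
of chloroform (about 1.48 g/cm^3, molar mass 119.4 g/mol), which corresponds
to roughly 7.5 molecules per nm^3:

7.5 molecules/nm^3 x 166.28 nm^3 = ~1250 chloroform molecules

so the ~1200 estimate above is about right. One possible cause of the mismatch
is that the box vectors in the solvent configuration (the CRYST1 record above)
do not correspond to the density at which that solvent box was equilibrated,
so genbox replicates it at the wrong number density. A hedged sketch of simply
capping the number added (file names and the exact count are placeholders):

genbox -cp solute.gro -cs chloroform_box.pdb -maxsol 1250 -p topol.top -o solvated.gro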

best,
Surahit vc


Re: [gmx-users] Running Gromacs in parallel

2016-09-22 Thread Szilárd Páll
On Wed, Sep 21, 2016 at 9:55 PM,   wrote:
> Thanks Sz.
>
> Do you think going up from version 5.0.4 to 5.1.4 would really make
> such a big difference?

Note that I was recommending using a modern compiler + the latest
release (which is called 2016 not 5.1.4!). It's hard to guess the
improvements, but from 5.0->2016 you should see double-digit
percentage improvements and going from gcc 4.4 to 5.x or 6.0 will also
have a significant improvement.

> Here is a log file from a single md run (that has finished unlike the
> metadynamics) with the number of OpenMP threads matching how many threads
> there are on each node. This has been restarted a number of times with
> different launch configurations being mostly the number of nodes and the
> node type (either 8 CPUs or 24 CPUs).
> https://www.dropbox.com/s/uxzsj3pm31n66nz/md.log?dl=0

You seem to be using a single MPI rank per node in these runs. That
will almost never be optimal, especially not when DD is not limited.

> From the timesteps when checkpoints were written I can see that these
> configurations make quite a difference and, per CPU, having 8 OpenMP
> threads per MPI process becomes a much worse idea stepping from 4 nodes to
> 6 nodes, i.e. having more CPUs makes mixed parallelism less favourable, as
> suggested in figure 8. Yes, the best may not lie at 1 OpenMP thread per
> MPI rank and may vary depending on the number of CPUs as well.

Sure, but 8 threads spanning two sockets will definitely be
suboptimal. Start with trying fewer and consider using separate PME
ranks especially if you have ethernet.
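As a concrete illustration of that suggestion (node and rank counts here are
placeholders to adapt to your own cluster, assuming the 8-core nodes), a launch
along these lines is a reasonable starting point:

mpirun -np 16 gmx_mpi mdrun -ntomp 2 -npme 4 -s topol.tpr -deffnm md

i.e. several MPI ranks per node with a couple of OpenMP threads each, plus a
few dedicated PME ranks; gmx tune_pme can help refine the -npme value.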

> Also, I can see that for the same number of CPUs, the 24-thread nodes are
> better than the 8-thread nodes, but I can't get so many of them as they are
> also more popular with RELION users.

FYI those are 2x6-core CPUs with Hyperthreading, so 2x12 hardware
threads. They are also two generations newer, so it's not surprising that
they are much faster. Still, 24 threads/node is too much. Use fewer.

> What can I infer from the information at the
> end?

Before starting to interpret that, it's worth fixing the above issues ;)
Otherwise, what's clear is that PME is taking a considerable amount of
time, especially given the long cut-off.

Cheers,
--
Szilárd


>
> Best wishes
> James
>
>> Hi,
>>
>> On Wed, Sep 21, 2016 at 5:44 PM,   wrote:
>>> Hi Szilárd,
>>>
>>> Yes I had looked at it but not with our cluster in mind. I now have a
>>> couple of GPU systems (both have an 8-core i7-4790K CPU with one Titan X
>>> GPU on one system and two Titan X GPUs on the other), and have been
>>> thinking about getting the most out of them. I listened to
>>> Carsten's
>>> BioExcel webinar this morning and it got me thinking about the cluster
>>> as
>>> well. I've just had a quick look now and it suggests Nrank = Nc and Nth
>>> =
>>> 1 for high core count, which I think worked slightly less well for me
>>> but
>>> I can't find the details so I may be remembering wrong.
>>
>> That's not unexpected, the reported values are specific to the
>> hardware and benchmark systems and only give a rough idea where the
>> ranks/threads balance should be.
>>>
>>> I don't have log files from a systematic benchmark of our cluster as it
>>> isn't really available enough for doing that.
>>
>> That's not really necessary, even logs from a single production run
>> can hint possible improvements.
>>
>>> I haven't tried gmx tune_pme
>>> on there either. I do have node-specific installations of gromacs-5.0.4
>>> but I think they were done with gcc-4.4.7 so there's room for
>>> improvement
>>> there.
>>
>> If that's the case, I'd simply recommend using a modern compiler and
>> if you can a recent GROMACS version, you'll gain more performance than
>> from most launch config tuning.
>>
>>> The cluster nodes I have been using have the following cpu specs
>>> and 10Gb networking. It could be that using 2 OpenMP threads per MPI
>>> rank
>>> works nicely because it matches the CPU configuration and makes better
>>> use
>>> of hyperthreading.
>>
>> Or because of the network. Or for some other reason. Again, comparing
>> the runs' log files could tell more :)
>>
>>> Architecture:  x86_64
>>> CPU op-mode(s):32-bit, 64-bit
>>> Byte Order:Little Endian
>>> CPU(s):8
>>> On-line CPU(s) list:   0-7
>>> Thread(s) per core:2
>>> Core(s) per socket:2
>>> Socket(s): 2
>>> NUMA node(s):  2
>>> Vendor ID: GenuineIntel
>>> CPU family:6
>>> Model: 26
>>> Model name:Intel(R) Xeon(R) CPU   E5530  @ 2.40GHz
>>> Stepping:  5
>>> CPU MHz:   2393.791
>>> BogoMIPS:  4787.24
>>> Virtualization:VT-x
>>> L1d cache: 32K
>>> L1i cache: 32K
>>> L2 cache:  256K
>>> L3 cache:  8192K
>>> NUMA node0 CPU(s): 0,2,4,6
>>> NUMA node1 CPU(s): 1,3,5,7

[gmx-users] xpm2ps

2016-09-22 Thread Nikhil Maroli
Dear all,

I have generated an .xpm file using DSSP and am trying to convert it into EPS.
I used:

gmx xpm2ps -f dssp.xpm -o abc.eps -do new.m2p -di new.m2p

and my new.m2p file is given here:

https://drive.google.com/file/d/0BxaQk_pcR9viRWtVdlhRdU90SlU/view?usp=sharing

When I visualize the EPS file using GIMP, the tails of the x-axis and y-axis
are missing, as shown in this figure:

https://drive.google.com/file/d/0BxaQk_pcR9viV3ROSGdHdDUtZDQ/view?usp=sharing

Can anyone tell me what modification I need to make in the .m2p file to obtain
a correct EPS file?

My .xpm file is given here:

https://drive.google.com/file/d/0BxaQk_pcR9vicU14eXRObzhla2c/view?usp=sharing
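One quick way to narrow this down (just a suggestion, on the assumption that
the built-in defaults draw the axes correctly) is to regenerate the plot
without the custom -di file and compare:

gmx xpm2ps -f dssp.xpm -o default.eps

If the axes come out complete with the defaults, the problem is in new.m2p
(for example tick lengths or axis font sizes set to zero); if they are still
missing, it may be a rendering issue in GIMP worth checking in another
PostScript viewer.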






-- 
Regards,
Nikhil Maroli


Re: [gmx-users] car_to_files.py in AMBER16 is not available.

2016-09-22 Thread Justin Lemkul



On 9/22/16 4:49 AM, Jinfeng Huang wrote:

Dear gromacs users,


   I followed the tutorial "Setting Up A Hydroxyapatite Slab in Water Box"
(http://ambermd.org/tutorials/advanced/tutorial27/hap_water.htm) and ran into a problem.
The pyMSMT package in Amber16 does not include the car_to_files.py script, and neither
does the pyMSMT package at https://github.com/Amber-MD/pymsmt/releases/tag/v2.0c.


   How can I get access to the car_to_files.py script?




Sounds like a question for the AMBER mailing list, since this has nothing to do 
with GROMACS.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] gmx sans crashes or return nan's

2016-09-22 Thread Evan Lowry
How big is the system? This happened to me when my system required too much
memory to finish the computation. You could try running it on a subset of
your whole system to see if it works, or just run it on a subgroup (not on
the group "System") to confirm whether this is the problem.

Another thing to try is setting the -endq flag to a smaller value than the
default, thereby reducing the size of the computation.
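For example, a reduced run along those lines (assuming an index file
index.ndx that defines a smaller group, and an arbitrary illustrative value
for -endq) might look like:

gmx_mpi sans -f traj_comp.xtc -s topol.tpr -n index.ndx -dt 10 -endq 1.0

selecting the smaller group instead of 'System' at the prompt.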

Evan L.

On Sep 22, 2016 2:47 AM, "Jakub Krajniak" 
wrote:

> Hi,
>
> I've tried to run the gmx sans tool on our trajectory, but unfortunately it
> crashes or returns -nan.
> The crash occurs when it is compiled in Release mode. I have used the latest
> 2016 release.
>
> GROMACS:  gmx sans, version 2016
>
> gmx_mpi sans -f traj_comp.xtc -s topol.tpr -dt 10 -nt 10
> Select a group: 0
> Selected 0: 'System'
> Reading frame   0 time0.000   Segmentation fault (core dumped)
>
> With CMAKE_BUILD_TYPE=Debug, the run does not cause the segmentation fault
> error but the sq.xvg and pr.xvg contain -nan ; This
> is independent of selected groups.
>
> Does anyone know what could be the reason for this behavior? Here are the
> .tpr and .xtc files:
> https://www.dropbox.com/s/29ftayq9uhfipjz/sans_50_50.zip?dl=0
>
> Best regards,
>
> Jakub
>
> --
> Jakub Krajniak
> KU Leuven, Dept. Computer Science lokaal: 01.41
> +32 477 68 84 04 / +32 16 37 39 92
>
>


Re: [gmx-users] Free Energy of Binding Question

2016-09-22 Thread Hannes Loeffler
On Wed, 21 Sep 2016 12:00:29 +
Abdülkadir KOÇAK  wrote:

> In terms of endstates, the state A is the real ligand complexed with
> Protein in water... I did not define dummy atoms for the ligand as
> the state B, which I believe I should have...

I'm not quite sure what you mean here.  I think any recent Gromacs
version (maybe from 4.x onwards?) allows you to use the couple-* parameters in
the mdp file.  In this way you do not need a modified topology; instead, you
use a topology just as for a standard MD simulation.
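For reference, a minimal sketch of those couple-* settings for decoupling a
ligand (the molecule type name LIG is a placeholder for whatever the ligand is
called in your topology, and the usual free-energy/lambda settings still apply):

couple-moltype  = LIG
couple-lambda0  = vdw-q   ; state A: ligand fully interacting
couple-lambda1  = none    ; state B: ligand decoupled from its surroundings
couple-intramol = no      ; intramolecular interactions are left switched on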

[gmx-users] car_to_files.py in AMBER16 is not available.

2016-09-22 Thread Jinfeng Huang
Dear gromacs users,


   I followed the tutorial "Setting Up A Hydroxyapatite Slab in Water Box"
(http://ambermd.org/tutorials/advanced/tutorial27/hap_water.htm) and ran into a
problem. The pyMSMT package in Amber16 does not include the car_to_files.py script,
and neither does the pyMSMT package at
https://github.com/Amber-MD/pymsmt/releases/tag/v2.0c.


   How can I get access to the car_to_files.py script?


Any suggestions would be highly appreciated.
Jingfeng




 


[gmx-users] gmx sans crashes or return nan's

2016-09-22 Thread Jakub Krajniak

Hi,

I've tried to run the gmx sans tool on our trajectory, but unfortunately it
crashes or returns -nan.
The crash occurs when it is compiled in Release mode. I have used the latest
2016 release.


GROMACS:  gmx sans, version 2016

gmx_mpi sans -f traj_comp.xtc -s topol.tpr -dt 10 -nt 10
Select a group: 0
Selected 0: 'System'
Reading frame   0 time0.000   Segmentation fault (core dumped)

With CMAKE_BUILD_TYPE=Debug, the run does not cause the segmentation 
fault error but the sq.xvg and pr.xvg contain -nan ; This

is independent of selected groups.

Does anyone know what could be the reason for this behavior? Here are
the .tpr and .xtc files:

https://www.dropbox.com/s/29ftayq9uhfipjz/sans_50_50.zip?dl=0

Best regards,

Jakub

--
Jakub Krajniak
KU Leuven, Dept. Computer Science lokaal: 01.41
+32 477 68 84 04 / +32 16 37 39 92

# This file was created Thu Sep 22 09:28:16 2016
# Created by:
#:-) GROMACS - gmx sans, 2016 (-:
# 
# Executable:   
/user/leuven/307/vsc30783/vsc_data/software/2015a/GROMACS/2016/bin/gmx_mpi
# Data prefix:  /user/leuven/307/vsc30783/vsc_data/software/2015a/GROMACS/2016
# Working dir:  /ddn1/vol1/site_scratch/leuven/307/vsc30783/sans_50_50
# Command line:
#   gmx_mpi sans -f traj_comp.xtc -s topol.tpr -dt 10 -nt 10 -sq 
/user/leuven/307/vsc30783/sq.xvg -pr /user/leuven/307/vsc30783/pr.xvg
# gmx sans is part of G R O M A C S:
#
# Green Red Orange Magenta Azure Cyan Skyblue
#
@title "G(r)"
@xaxis  label "Distance (nm)"
@yaxis  label "Probability"
@TYPE xy
  [data omitted: all G(r) values from 0.10 nm to 19.90 nm are -nan]
# This file was created Thu Sep 22 09:28:16 2016
# Created by:
#:-) GROMACS - gmx sans, 2016 (-:
# 
# Executable:   
/user/leuven/307/vsc30783/vsc_data/software/2015a/GROMACS/2016/bin/gmx_mpi
# Data prefix:  /user/leuven/307/vsc30783/vsc_data/software/2015a/GROMACS/2016
# Working dir:  /ddn1/vol1/site_scratch/leuven/307/vsc30783/sans_50_50
# Command line:
#   gmx_mpi sans -f traj_comp.xtc -s topol.tpr -dt 10 -nt 10 -sq 
/user/leuven/307/vsc30783/sq.xvg -pr /user/leuven/307/vsc30783/pr.xvg
# gmx sans is part of G R O M A C S:
#
# God Rules Over Mankind, Animals, Cosmos and Such
#
@title "I(q)"
@xaxis  label "q (nm^-1)"
@yaxis  label "s(q)/s(0)"
@TYPE xy
  0.00  1.00
  [data omitted: all s(q)/s(0) values from 0.01 to 0.30 nm^-1 are -nan; attachment truncated]

Re: [gmx-users] g_membed failure

2016-09-22 Thread Sophia Kuriakidi
Thank you so much Tom, I will try that!
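For the archive, a minimal sketch of the mdrun-based embedding workflow
(membed.mdp, embed.dat and the other file names are placeholders; embed.dat is
the settings file that -membed expects, as described in mdrun -h and the user
guide):

gmx grompp -f membed.mdp -c merged.gro -p merged.top -n index.ndx -o input.tpr
gmx mdrun -s input.tpr -membed embed.dat -mn index.ndx -mp merged.top -c embedded.gro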

2016-09-21 21:11 GMT+03:00 Thomas Piggot :

> g_membed is now part of mdrun, so you would need to use mdrun with the
> -membed option. From mdrun -h:
>
> "The option -membed does what used to be g_membed, i.e. embed a protein
> into a membrane. This module requires a number of settings that are provided
> in a data file that is the argument of this option. For more details in
> membrane embedding, see the documentation in the user guide. The options -mn
> and -mp are used to provide the index and topology files used for the
> embedding."
>
> Cheers
>
> Tom
>
>
> On 21/09/16 18:36, Sophia Kuriakidi wrote:
>
>> Thank you for your responses!
>>
>> Sotirios:"Also the way this worked for me was to use an index file. I made
>> an index of the prot + lig + crystallographic waters and I used it in both
>> grompp and g_membed. In the latter I just used the group and then selected
>> the POPC. You must also include the group's name in the mdp in order for
>> it
>> to work."
>> I also have grouped the ligand with the protein (but not any waters) and I
>> included the index in the mdp file.
>>
>> Thomas:"My guess is that you probably also have an older version of the
>> g_membed program installed on your system and as you are trying to use a
>> more recent tpr (from version 5.1.2), this might be what is causing the
>> segmentation fault. That said, if I try a tpr from GROMACS 5.0.6 with
>> g_membed 4.5.7 it does give me a warning about a mismatch of versions so I
>> could be wrong (but what you say you are doing shouldn't be possible)."
>>
>> It seems that this is the case, because I am using 5.1.2. How can I
>> resolve this problem? How could I use g_membed in 5.1.2? Or how else could
>> I insert my protein into a membrane bilayer?
>>
>> Thanks again!
>>
>>
>>
>> 2016-09-14 13:47 GMT+03:00 Thomas Piggot :
>>
>> Hi,
>>>
>>> In more recent versions of GROMACS (4.6.x and above IIRC), the g_membed
>>> feature is only available using mdrun (see mdrun -h) and so the g_membed
>>> command should either no longer work at all or print you a note to tell
>>> you
>>> to use mdrun (depending upon version).
>>>
>>> My guess is that you probably also have an older version of the g_membed
>>> program installed on your system and as you are trying to use a more
>>> recent
>>> tpr (from version 5.1.2), this might be what is causing the segmentation
>>> fault. That said, if I try a tpr from GROMACS 5.0.6 with g_membed 4.5.7
>>> it
>>> does give me a warning about a mismatch of versions so I could be wrong
>>> (but what you say you are doing shouldn't be possible).
>>>
>>> Cheers
>>>
>>> Tom
>>>
>>>
>>> On 14/09/16 08:40, Sotirios Dionysios I. Papadatos wrote:
>>>
>>> Hi, run some diagnostics, don't use the -xyinit etc

 Try the basics gmx g_membed -f -p ... etc

 Also the way this worked for me was to use an index file. I made an
 index
 of the prot + lig + crystallographic waters and I used it in both grompp
 and g_membed. In the latter I just used the

 group and then selected the POPC. You must also include the group's name
 in the mdp in order for it to work.

 
 From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
 gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Sophia
 Kuriakidi 
 Sent: Tuesday, September 13, 2016 9:18:12 PM
 To: gromacs.org_gmx-users@maillist.sys.kth.se
 Subject: [gmx-users] g_membed failure

 Hi all,
 I am trying to use g_membed in order to embed my protein in a lipid
 bilayer
 (I am using dppc). I am using the tutorial of Appendix A of this paper:

 *http://wwwuser.gwdg.de/~ggroenh/submitted/Membed_rev.pdf
 *

 I am creating  an input.tpr using this command:

 grompp -f sample.mdp -c merged.gro -p merged.top -o input.tpr

 and it works fine. Then when I am trying to use g_membed by typing
 this:

 g_membed -f input.tpr -p merged.top -xyinit 0.1 -xyend 1.0 -nxy 1000

or this

g_membed -f input.tpr -p merged.top -xyinit 0.1 -xyend 1.0 -nxy 1000
 -zinit 1.1 -zend 1.0 -nz 100

 I just get the g_membed manual printed out...

 $ g_membed -f input.tpr -p merged.top -xyinit 0.1 -xyend 1.0 -nxy 1000
 Option Filename  Type Description
 
 -f  input.tpr  InputRun input file: tpr tpb tpa
 -n  index.ndx  Input, Opt.  Index file
 -p merged.top  In/Out, Opt! Topology file
 -o   traj.trr  Output   Full precision trajectory: trr trj
 cpt
 -x   traj.xtc  Output, Opt. Compressed trajectory (portable xdr
 format)
 -cpi  state.cpt  

Re: [gmx-users] Interest of c alpha atoms for least square fitting?

2016-09-22 Thread Erik Marklund
Yes
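For example, a minimal sketch (file names are placeholders):

gmx rms -s topol.tpr -f traj.xtc -o rmsd_ca.xvg

then pick the C-alpha group at both prompts, i.e. for the least-squares fit
and for the RMSD calculation (the two groups do not have to be the same).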

> On 22 Sep 2016, at 06:45, Seera Suryanarayana  wrote:
> 
> Dear gromacs users,
> 
> Can I specify the C-alpha atoms as my group of interest for least-squares
> fitting in gmx rms for the RMSD calculation?
> 
> Thanks in advance
> Surya
> Graduate student
> India.
