[gmx-users] g(r) does not go to 1 at long r -- bug in g_rdf?

2012-11-16 Thread Pablo Englebienne

Hi,

I tried to calculate the radial distribution functions for a simple system: a 
cubic box, 5 nm per side, with 10 Ne atoms and 10 Ar atoms, simulated for 
100 ns in NVT at 300 K. I was expecting to get an RDF with a peak, stabilizing 
to 1.0 at long distances.

This was the case for the Ne-Ar RDF, but not for the Ne-Ne or Ar-Ar RDFs, which 
stabilize to about 0.9. I believe this is due to a problem in the normalization 
of the histograms with respect to the number of available pairs: there are N*N 
pairs for Ne-Ar, but N*(N-1) for the Ne-Ne and Ar-Ar cases.

Has anybody else seen an issue like this? I think the issue may not be 
evident for a relatively large system, since N*N ~ N*(N-1) for large N.
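
For what it's worth, the suspected normalization mismatch predicts exactly the 
plateau I see. A minimal sketch of the arithmetic (my own illustration, not 
g_rdf's actual code):

```python
# If an N-atom self-RDF histogram is normalized against N*N ideal-gas
# pairs instead of the N*(N-1) pairs actually counted, the long-range
# plateau lands at (N-1)/N instead of 1.0.

def plateau_ratio(n_atoms):
    """Ratio of the true pair count N*(N-1) to the (wrong) N*N."""
    return (n_atoms - 1) / n_atoms

print(plateau_ratio(10))     # 10 Ne (or Ar) atoms -> 0.9, as observed
print(plateau_ratio(10000))  # for a large group the error vanishes
```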

I have put the relevant files here, in case somebody wishes to reproduce it: 
https://gist.github.com/4085292

I'd appreciate input on this, and I can also file a bug if deemed necessary.

Take care,
Pablo

--

Dr. Pablo Englebienne
Postdoctoral Researcher

*TU Delft / 3mE / Process & Energy*
/Engineering Thermodynamics (ETh) group/

Building 46
Leeghwaterstraat 44, room 030
2628 CA Delft
The Netherlands

*T* +31 (0)15 27 86662
*E* p.englebie...@tudelft.nl

--
gmx-users mailing list gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g(r) does not go to 1 at long r -- bug in g_rdf?

2012-11-16 Thread David van der Spoel

On 2012-11-16 09:12, Pablo Englebienne wrote:

Hi,

I tried to calculate the radial distribution functions for a simple
system: a cubic box, 5 nm per side, with 10 Ne atoms and 10 Ar atoms,
simulated for 100 ns in NVT at 300 K. I was expecting to get an RDF with a
peak, stabilizing to 1.0 at long distances.

This was the case for the Ne-Ar RDF, but not for the Ne-Ne or Ar-Ar
RDFs, which stabilize to about 0.9. I believe this is due to a problem
in the normalization of the histograms with respect to the number of
pairs available: there are N*N pairs for the Ne-Ar, while N*(N-1) for
the Ne-Ne and Ar-Ar case.

Has anybody else seen an issue like this? I think the issue may not be
evident for a relatively large system, since N*N ~ N*(N-1) for
large N.

I put the relevant files if somebody wishes to reproduce it here:
https://gist.github.com/4085292

I'll appreciate input on this and I can also file a bug if deemed
necessary.

Take care,
Pablo

Thanks for reporting. Can you please make a redmine issue of this and 
assign it to me?


--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.
sp...@xray.bmc.uu.se
http://folding.bmc.uu.se


Re: [gmx-users] Strange form of RDF curve

2012-11-16 Thread David van der Spoel

On 2012-11-15 18:53, shch406 wrote:

Dear Gromacs users,

I tried the g_rdf tool and obtained a strange result:
usually the RDF curve shows decaying oscillations around a constant level
of 1.0, but in my case it oscillates around an exponential-looking baseline
going from 0.0 at zero distance to 1.0 at large distances.

Is the RDF I obtained correct?

I used the command as follows:

g_rdf -f MT.trr -s MT.tpr -n rs.ndx -o MT.RD.xvg -bin 0.05 -pbc -rdf res_cog

where the file MT.trr contains ~150 ps of equilibrated trajectory of a 
582-residue protein in water. The reference group was Water, and group 1 was 
taken from the index file rs.ndx; the latter group contains the two tip NHH 
groups of a charged arginine. (This residue was inspected for solvent 
exposure and has one of the largest solvent-accessible surfaces.)

Thanks in advance,
Igor Shchechkin


I think you need to switch the arguments: first the side chain, then water.



--
View this message in context: 
http://gromacs.5086.n6.nabble.com/Strange-form-of-RDF-curve-tp5003001.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.




--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.
sp...@xray.bmc.uu.se
http://folding.bmc.uu.se


[gmx-users] Re: g(r) does not go to 1 at long r -- bug in g_rdf?

2012-11-16 Thread penglebienne
Thanks David, I filed bug #1036.

Regards,
Pablo 



--
View this message in context: 
http://gromacs.5086.n6.nabble.com/g-r-does-not-go-to-1-at-long-r-bug-in-g-rdf-tp5003015p5003018.html


Re: [gmx-users] Fe(2+) nonbonded parameters

2012-11-16 Thread Steven Neumann
On Thu, Nov 15, 2012 at 5:51 PM, Justin Lemkul jalem...@vt.edu wrote:


 On 11/15/12 12:47 PM, Steven Neumann wrote:

 So what would you do to get those parameters asap?


 Get what parameters?  The ones shown below (except Cu2+) have no citation
 and no one has vouched for their authenticity.  As such, the decision was
 made to delete them to prevent anyone from blindly using them, hoping that
 they are right.  Given this information, it would be unwise to use them
 unless, as I said, you know where they came from and believe them to be
 suitable.

 -Justin

I found a source for the Fe(2+) parameters below, from QM/MC simulations:

http://www.sciencedirect.com/science/article/pii/S0009261407014388

Please see Table 1. I think it is a reasonable source for using Fe(2+) in
aqueous solution with a protein. Would you please comment?


Steven



 On Thu, Nov 15, 2012 at 5:19 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 11/15/12 12:18 PM, Steven Neumann wrote:


 Dear Gmx Users,

 Maybe someone before was simulating Fe(2+) in water and protein system
 using Charmm27 ff. I am looking for nonbonded parametrs. I found in
 OPLSAA:

 ; These ion atomtypes are NOT part of OPLS, but since they are
 ; needed for some proteins or tutorial Argon simulations we have added them.
 Cu2+   Cu2+   29   63.54600   2.000   A   2.08470e-01   4.76976e+00
 Fe2+   Fe2+   26   55.84700   2.000   A   2.59400e-01   5.43920e-02
 Zn2+   Zn2+   30   65.37000   2.000   A   1.95200e-01   9.78219e-01
 Ar     Ar     18   39.94800   0.000   A   3.41000e-01   2.74580e-02

 But I am not sure whether I can use them?


 These are undocumented parameters that are being removed for the next
 release. Don't use them unless you can find where they came from and you
 trust that source.

 -Justin

 --
 

 Justin A. Lemkul, Ph.D.
 Research Scientist
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

 


 --
 

 Justin A. Lemkul, Ph.D.
 Research Scientist
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

 


Re: [gmx-users] Re: Umbrella sampling question

2012-11-16 Thread Erik Marklund
Hi,

Blindly defining the center of mass for a group of atoms is not possible in a 
periodic system such as a typical simulation box: you need some clue as to 
which periodic copy of each atom is to be chosen. By providing 
pull_pbcatom0 you tell gromacs, for every atom in grp0, to use the periodic 
copy closest to the atom given by pull_pbcatom0. If you have large pull groups, 
this is necessary to define the inter-group distance in a way that makes sense. 
If you get different results depending on that setting, you really need to 
figure out which atom is a good center for your calculations. The default 
behavior is to use the atom whose *index* is in the center of the group. If you 
have, for example, a dimeric protein, this may correspond to the C-terminus of 
the first chain or the N-terminus of the second one, which in turn often 
doesn't coincide with the geometric center of the group. I suggest you try yet 
another choice of pull_pbcatom0 that is also close to the center, to see if 
that also gives rise to a different distance. As mentioned, the choice of 
pull_pbcatom0 should not matter as long as it allows the periodicity to be 
handled correctly.
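
The periodic-image bookkeeping described above can be sketched like this (a toy 
minimum-image shift for a rectangular box; the reference position is invented 
for illustration, and this mirrors the idea rather than GROMACS's actual code):

```python
# For each atom, choose the periodic copy that lies closest to a
# reference position (the role pull_pbcatomN plays), so that a center
# of mass computed afterwards is well defined. Rectangular box only.

def nearest_image(x, ref, box):
    """Shift x by whole box lengths so each coordinate ends up within
    half a box length of ref."""
    return [xi - bi * round((xi - ri) / bi)
            for xi, ri, bi in zip(x, ref, box)]

box = [12.4577, 12.4577, 17.9959]  # box vectors from the conf0.gro quoted below
ref = [2.0, 2.0, 2.0]              # invented pbc-atom position
atom = [12.2, 0.3, 17.8]           # an atom near the far box faces
shifted = nearest_image(atom, ref, box)
print(shifted)  # x and z are wrapped by one box length toward ref
```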

Best,

Erik

On 15 Nov 2012, at 19:56, Gmx QA wrote:

 Hi Chris
 
 It seems my confusion was that I assumed the distances in the
 profile.xvg file should correspond to something I could measure with
 g_dist. It turns out they do not.
 Thank you for helping me sort this out; I get it now :-)
 
 About pull_pbcatom0 though: my box is > 2*1.08 nm in all directions:
 $ tail -n 1 conf0.gro
  12.45770  12.45770  17.99590
 
 I am still not sure what pull_pbcatom0 does. You said it should not have
 any effect on my results, but changing it does result in a different
 initial distance reported by grompp.
 
 In my initial attempts at this, I did not specify anything for
 pull_pbcatom0, but in the grompp output I get this
 
 Pull group  natoms  pbc atom  distance at start reference at t=0
   0 21939 10970
   1 1 0   2.083 2.083
 Estimate for the relative computational load of the PME mesh part: 0.10
 This run will generate roughly 761 Mb of data
 
 
 Then, following the advice in the thread I referred to earlier, I set
 pull_pbcatom0 explicitly in the mdp-file to be an atom close to the COM of
 the Protein. Then I get from grompp
 
 Pull group  natoms  pbc atom  distance at start reference at t=0
   0 21939  7058
   1 1 0   1.808 1.808
 Estimate for the relative computational load of the PME mesh part: 0.10
 This run will generate roughly 761 Mb of data
 
 As you can see, the initial distance is different (2.083 vs 1.808), and
 1.808 is the same as the distance reported by g_dist. Do you have any
 comments here as to why this is?
 
 Thanks
 /PK
 
 
 What you reported is not what you did. It appears that grompp, gtraj, and
 g_dist report the same value.
 Please also report the value that you get from your pullx.xvg file that you
 get from mdrun, which I suspect
 will also be the same.
 
 The difference that you report is actually between the first FRAME of your
 trajectory from g_dist
 and the first LINE of the file from the g_wham output. I see no reason to
 assume that the values in the
 output of g_wham must be time-ordered. Also, I have never used g_wham
 myself (I use an external program
 to do wham) and so I can not say if you are using it correctly.
 
 My overall conclusion is that you need to investigate the g_wham output,
 not worry about a new run at this stage.
 
 Regarding pull_pbcatom0, there is lots of information on the mailing list
 about this. It is a global atom number
 that defines the unit cell for selection of which periodic image of each
 molecule will be used for the pulling.
 If all of your box dimensions are > 2*1.08 nm, then pull_pbcatom0 will
 not affect your results.
 
 Chris.

---
Erik Marklund, PhD
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596,75124 Uppsala, Sweden
phone: +46 18 471 6688    fax: +46 18 511 755
er...@xray.bmc.uu.se
http://www2.icm.uu.se/molbio/elflab/index.html


[gmx-users] Bug (?) with FEP while using particle decomposition in charge transformation

2012-11-16 Thread Alexey Zeifman
Dear all,

I have encountered very strange results when using FEP together with
particle decomposition for a charge transformation. The situation is as
follows:

gromacs-4.5.5, single precision, MPI run on 32 nodes;
only the charges are modified in the topology file (VdW parameters and
masses remain the same for the A and B states);
the particle decomposition (-pd) option is used.

The combination of these conditions leads to a very strange *.xvg file like
this (lambda=0):

@TYPE xy
@ subtitle T = 300 (K), \xl\f{} = 0
@ view 0.15, 0.15, 0.75, 0.85
@ legend on
@ legend box on
@ legend loctype view
@ legend 0.78, 0.8
@ legend length 2
@ s0 legend dH/d\xl\f{} \xl\f{} 0
@ s1 legend \xD\f{}H \xl\f{} 0.1
0.0000 7.74067 -2.30013
0.0200 8.83184 -2.08346
0.0400 6.61338 -2.43848
0.0600 3.85357 -2.58843
0.0800 9.49265 -2.03448
0.1000 2.4197 -2.7838
0.1200 4.81406 -2.64074
0.1400 3.79622 -2.83082
0.1600 4.9235 -2.53202
0.1800 7.24284 -2.41357
0.2000 3.79017 -2.68861
0.2200 7.84849 -2.26953
0.2400 10.1044 -2.11132
0.2600 1.89623 -3.10296
0.2800 11.6953 -2.08701
0.3000 11.103 -1.96233
0.3200 4.68676 -2.62905
0.3400 9.83457 -2.11386
0.3600 6.12451 -2.27113
0.3800 8.56351 -2.17592
0.4000 11.0368 -1.93516
0.4200 4.80366 -2.49621
0.4400 9.58472 -2.08431
0.4600 5.04113 -2.60035

The usual file (obtained without -pd) looks like this:

@    title dH/d\xl\f{}, \xD\f{}H
@    xaxis  label Time (ps)
@    yaxis  label (kJ/mol)
@TYPE xy
@ subtitle T = 300 (K), \xl\f{} = 0
@ view 0.15, 0.15, 0.75, 0.85
@ legend on
@ legend box on
@ legend loctype view
@ legend 0.78, 0.8
@ legend length 2
@ s0 legend dH/d\xl\f{} \xl\f{} 0
@ s1 legend \xD\f{}H \xl\f{} 0.1
0.0000 7.77708 0.777688
0.0200 10.322 1.03217
0.0400 8.82036 0.882043
0.0600 9.46574 0.946536
0.0800 10.2726 1.02731
0.1000 12.0395 1.20386
0.1200 6.85707 0.685689
0.1400 8.99015 0.899035
0.1600 8.68244 0.868208
0.1800 8.98454 0.898397
0.2000 9.21256 0.921202
0.2200 7.16041 0.716125
0.2400 5.1431 0.514307
0.2600 3.34832 0.334797
0.2800 3.14324 0.314328
0.3000 1.83755 0.183755
0.3200 3.91737 0.391783
0.3400 5.79081 0.579126
0.3600 4.92766 0.492766
0.3800 8.06557 0.806605
0.4000 7.00964 0.700994
0.4200 12.1488 1.21481

So the signs of the dH/dlambda output are opposite when using particle 
decomposition. There is no such problem when running a VdW transformation.

Is this a Gromacs bug, or am I doing something wrong?
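
A quick way to quantify the disagreement is to compare the sign of one data 
column frame by frame. The sketch below hard-codes the first two frames quoted 
above; the parsing mimics the .xvg layout, and it is my own illustration, not 
a GROMACS tool:

```python
# Compare the sign of one data column between two .xvg outputs,
# skipping '@' header and '#' comment lines. The two line lists below
# are the first frames from the -pd and non-pd runs shown above.

def read_column(lines, col):
    vals = []
    for line in lines:
        if not line.strip() or line.startswith(("#", "@")):
            continue
        vals.append(float(line.split()[col]))
    return vals

pd_run = ["@TYPE xy", "0.0000 7.74067 -2.30013", "0.0200 8.83184 -2.08346"]
no_pd  = ["@TYPE xy", "0.0000 7.77708 0.777688", "0.0200 10.3220 1.03217"]

col = 2  # third field: the Delta-H column
flips = sum((a > 0) != (b > 0)
            for a, b in zip(read_column(pd_run, col), read_column(no_pd, col)))
print(f"{flips} of 2 frames disagree in sign")  # -> 2 of 2
```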

With best regards,
Alexey Zeifman

Re: [gmx-users] problem in running md simulation

2012-11-16 Thread ananyachatterjee

Hi all,

As suggested by Venkat, I have energy minimized to 1000 kJ/mol, but
even now I am getting the same error, saying


Warning: 1-4 interaction between 3230 and 3233 at distance 3.573 which 
is larger than the 1-4 table size 2.400 nm

These are ignored for the rest of the simulation
This usually means your system is exploding,
if not, you should increase table-extension in your mdp file
or with user tables increase the table size

t = 90.674 ps: Water molecule starting at atom 236548 can not be 
settled.

Check for bad contacts and/or reduce the timestep.
Wrote pdb files with previous and current coordinates

---
Program mdrun, VERSION 4.0.7
Source code file: ../../../../src/mdlib/nsgrid.c, line: 348

Fatal error:
Number of grid cells is zero. Probably the system and box collapsed.

Can anyone suggest what to do now?

Ananya Chatterejee



 On Thu, 15 Nov 2012 11:38:17 +0530, Venkat Reddy wrote:
I think the system is not well energy minimized. Do it to 1000 kJ/mol.
Also check for bad contacts in your starting structure using a Ramachandran
plot.
One more important thing is that you have to generate an index file with
Protein_GTP as one group and Water_Ions as another. Then change your
tc-grps as

tc-grps = Protein_GTP   Water_Ions
tau_t   =  0.1 0.1 ; time constant, in ps
ref_t   =   300   300


On Thu, Nov 15, 2012 at 10:50 AM, ananyachatterjee 
ananyachatter...@iiserkol.ac.in wrote:


Hi all,

I was running an MD simulation of a protein complexed with GTP in water,
neutralized with Mg2+ and Cl- ions. I have also energy minimized the system
to 2000 kJ/mol and equilibrated the water molecules at 300 K temperature and
1 bar pressure. I then ran the MD simulation using the following parameters:


title       = Protein-ligand complex
; Run parameters
integrator  = md            ; leap-frog integrator
nsteps      = 500000        ; 2 * 500000 = 1000 ps (1 ns)
dt          = 0.002         ; 2 fs
; Output control
nstxout     = 500           ; save coordinates every 1 ps
nstvout     = 500           ; save velocities every 1 ps
nstenergy   = 500           ; save energies every 1 ps
nstlog      = 500           ; update log file every 1 ps
nstxtcout   = 500           ; write .xtc trajectory every 1 ps
energygrps  = Protein GTP SOL MG2+
; Bond parameters
constraints = none          ; no constraints
; Neighborsearching
ns_type     = grid          ; search neighboring grid cells
nstlist     = 5             ; 10 fs
rlist       = 0.9           ; short-range neighborlist cutoff (in nm)
rcoulomb    = 0.9           ; short-range electrostatic cutoff (in nm)
rvdw        = 1.4           ; short-range van der Waals cutoff (in nm)
; Temperature coupling
tcoupl      = v-rescale     ; modified Berendsen thermostat
tc-grps     = Protein GTP SOL MG2+ CL-  ; five coupling groups - more accurate
tau_t       = 0.1 0.1 0.1 0.1 0.1       ; time constant, in ps
ref_t       = 300 300 300 300 300       ; reference temperature, one for each group, in K
; Pressure coupling
pcoupl      = Parrinello-Rahman         ; pressure coupling is on for NPT
pcoupltype  = isotropic                 ; uniform scaling of box vectors
tau_p       = 2.0                       ; time constant, in ps
ref_p       = 1.0                       ; reference pressure, in bar
compressibility  = 4.5e-5               ; isothermal compressibility of water, bar^-1
refcoord_scaling = com
; Periodic boundary conditions
pbc         = xyz           ; 3-D PBC


Now I am getting the following error.


Warning: 1-4 interaction between 3231 and 3234 at distance 10.730 which is
larger than the 1-4 table size 2.400 nm
These are ignored for the rest of the simulation
This usually means your system is exploding,
if not, you should increase table-extension in your mdp file
or with user tables increase the table size

t = 226.610 ps: Water molecule starting at atom 236548 can not be 
settled.

Check for bad contacts and/or reduce the timestep.
Wrote pdb files with previous and current coordinates

---
Program mdrun, VERSION 4.0.7
Source code file: ../../../../src/mdlib/nsgrid.c, line: 348

Fatal error:
Number of grid cells is zero. Probably the system and box collapsed.

Kindly help me; I cannot figure out where I am going wrong.



--
Ananya Chatterjee,
Senior Research Fellow (SRF),
Department of biological Science,
IISER-Kolkata.

Re: [gmx-users] problem in running md simulation

2012-11-16 Thread Kavyashree M
Hi Ananya,

Can you try rvdw = 0.9 nm and rcoulomb = 1.4 nm? The vdW
interaction decreases as 1/r^6, while the Coulombic
interaction decreases as 1/r, so it would be better to
treat the Coulombic interaction over a longer distance
than the vdW interaction.

bye
kavya

On Fri, Nov 16, 2012 at 8:32 PM, ananyachatterjee 
ananyachatter...@iiserkol.ac.in wrote:

 Hi all,

 As suggested by Venkat, I have energy minimized to 1000 kJ/mol, but even
 now I am getting the same error, saying

 Warning: 1-4 interaction between 3230 and 3233 at distance 3.573 which is
 larger than the 1-4 table size 2.400 nm
 These are ignored for the rest of the simulation
 This usually means your system is exploding,
 if not, you should increase table-extension in your mdp file
 or with user tables increase the table size

 t = 90.674 ps: Water molecule starting at atom 236548 can not be settled.
 Check for bad contacts and/or reduce the timestep.
 Wrote pdb files with previous and current coordinates

 ---
 Program mdrun, VERSION 4.0.7
 Source code file: ../../../../src/mdlib/nsgrid.c, line: 348

 Fatal error:
 Number of grid cells is zero. Probably the system and box collapsed.

 Can anyone suggest what to do now?

 Ananya Chatterejee



  On Thu, 15 Nov 2012 11:38:17 +0530, Venkat Reddy wrote:

 I think the system is not well energy minimized. Do it to 1000 kJ/mol. Also
 check for bad contacts in your starting structure using a Ramachandran plot.
 One more important thing is that, you have to generate an index file with
 Protein_GTP as one group and water_Ions as another. Then change your
  tc-groups as

 tc-grps = Protein_GTP   Water_Ions
 tau_t   =  0.1 0.1 ; time constant, in ps
 ref_t   =   300   300


 On Thu, Nov 15, 2012 at 10:50 AM, ananyachatterjee 
 ananyachatter...@iiserkol.ac.in
 wrote:

  Hi all,

 I was running a md simulation of protein complexed with GTP in water,
 neutralised with Mg2+ and Cl- ions.I have also em the system to
 2000kj/mol
 and also equilibrated the water molecules in 300K temperature and 1 bar
 pressure. And then run the md simulation using md parameters as follow:

 title       = Protein-ligand complex
 ; Run parameters
 integrator  = md            ; leap-frog integrator
 nsteps      = 500000        ; 2 * 500000 = 1000 ps (1 ns)
 dt          = 0.002         ; 2 fs
 ; Output control
 nstxout     = 500           ; save coordinates every 1 ps
 nstvout     = 500           ; save velocities every 1 ps
 nstenergy   = 500           ; save energies every 1 ps
 nstlog      = 500           ; update log file every 1 ps
 nstxtcout   = 500           ; write .xtc trajectory every 1 ps
 energygrps  = Protein GTP SOL MG2+
 ; Bond parameters
 constraints = none          ; no constraints
 ; Neighborsearching
 ns_type     = grid          ; search neighboring grid cells
 nstlist     = 5             ; 10 fs
 rlist       = 0.9           ; short-range neighborlist cutoff (in nm)
 rcoulomb    = 0.9           ; short-range electrostatic cutoff (in nm)
 rvdw        = 1.4           ; short-range van der Waals cutoff (in nm)
 ; Temperature coupling
 tcoupl      = v-rescale     ; modified Berendsen thermostat
 tc-grps     = Protein GTP SOL MG2+ CL-  ; five coupling groups - more accurate
 tau_t       = 0.1 0.1 0.1 0.1 0.1       ; time constant, in ps
 ref_t       = 300 300 300 300 300       ; reference temperature, one for each group, in K
 ; Pressure coupling
 pcoupl      = Parrinello-Rahman         ; pressure coupling is on for NPT
 pcoupltype  = isotropic                 ; uniform scaling of box vectors
 tau_p       = 2.0                       ; time constant, in ps
 ref_p       = 1.0                       ; reference pressure, in bar
 compressibility  = 4.5e-5               ; isothermal compressibility of water, bar^-1
 refcoord_scaling = com
 ; Periodic boundary conditions
 pbc         = xyz           ; 3-D PBC


 Now I am getting the following error.


 Warning: 1-4 interaction between 3231 and 3234 at distance 10.730 which is
 larger than the 1-4 table size 2.400 nm
 These are ignored for the rest of the simulation
 This usually means your system is exploding,
 if not, you should increase table-extension in your mdp file
 or with user tables increase the table size

 t = 226.610 ps: Water molecule starting at atom 236548 can not be
 settled.
 Check for bad contacts and/or reduce the timestep.
 Wrote pdb files with previous and current coordinates

 ---
 Program mdrun, VERSION 4.0.7
 Source code file: ../../../../src/mdlib/nsgrid.c, line: 348

 Fatal error:
 Number of grid cells is zero. Probably the system and box collapsed.

 Kindly help me; I cannot figure out where I am going wrong.



 --
 Ananya Chatterjee,
 Senior Research Fellow (SRF),
 Department of biological Science,
 IISER-Kolkata.

Re: [gmx-users] problem in running md simulation

2012-11-16 Thread Justin Lemkul



On 11/16/12 10:10 AM, Kavyashree M wrote:

Hi Ananya,

Can you try rvdw = 0.9 nm and rcoulomb = 1.4 nm? The vdW
interaction decreases as 1/r^6, while the Coulombic
interaction decreases as 1/r, so it would be better to
treat the Coulombic interaction over a longer distance
than the vdW interaction.



One should not make haphazard changes to cutoffs.  They are part of the force 
field.  Changing them without basis can invalidate the force field model.


-Justin


bye
kavya

On Fri, Nov 16, 2012 at 8:32 PM, ananyachatterjee 
ananyachatter...@iiserkol.ac.in wrote:


Hi all,

As suggested by Venkat, I have energy minimized to 1000 kJ/mol, but even
now I am getting the same error, saying

Warning: 1-4 interaction between 3230 and 3233 at distance 3.573 which is
larger than the 1-4 table size 2.400 nm
These are ignored for the rest of the simulation
This usually means your system is exploding,
if not, you should increase table-extension in your mdp file
or with user tables increase the table size

t = 90.674 ps: Water molecule starting at atom 236548 can not be settled.
Check for bad contacts and/or reduce the timestep.
Wrote pdb files with previous and current coordinates

---
Program mdrun, VERSION 4.0.7
Source code file: ../../../../src/mdlib/nsgrid.c, line: 348

Fatal error:
Number of grid cells is zero. Probably the system and box collapsed.

Can anyone suggest what to do now?

Ananya Chatterejee



  On Thu, 15 Nov 2012 11:38:17 +0530, Venkat Reddy wrote:


I think the system is not well energy minimized. Do it to 1000 kJ/mol.
Also check for bad contacts in your starting structure using a Ramachandran
plot.
One more important thing is that, you have to generate an index file with
Protein_GTP as one group and water_Ions as another. Then change your
  tc-groups as

tc-grps = Protein_GTP   Water_Ions
tau_t   =  0.1 0.1 ; time constant, in ps
ref_t   =   300   300


On Thu, Nov 15, 2012 at 10:50 AM, ananyachatterjee 
ananyachatter...@iiserkol.ac.in
wrote:

  Hi all,


I was running an MD simulation of a protein complexed with GTP in water,
neutralized with Mg2+ and Cl- ions. I have also energy minimized the system
to 2000 kJ/mol and equilibrated the water molecules at 300 K temperature and
1 bar pressure. I then ran the MD simulation using the following parameters:

title       = Protein-ligand complex
; Run parameters
integrator  = md            ; leap-frog integrator
nsteps      = 500000        ; 2 * 500000 = 1000 ps (1 ns)
dt          = 0.002         ; 2 fs
; Output control
nstxout     = 500           ; save coordinates every 1 ps
nstvout     = 500           ; save velocities every 1 ps
nstenergy   = 500           ; save energies every 1 ps
nstlog      = 500           ; update log file every 1 ps
nstxtcout   = 500           ; write .xtc trajectory every 1 ps
energygrps  = Protein GTP SOL MG2+
; Bond parameters
constraints = none          ; no constraints
; Neighborsearching
ns_type     = grid          ; search neighboring grid cells
nstlist     = 5             ; 10 fs
rlist       = 0.9           ; short-range neighborlist cutoff (in nm)
rcoulomb    = 0.9           ; short-range electrostatic cutoff (in nm)
rvdw        = 1.4           ; short-range van der Waals cutoff (in nm)
; Temperature coupling
tcoupl      = v-rescale     ; modified Berendsen thermostat
tc-grps     = Protein GTP SOL MG2+ CL-  ; five coupling groups - more accurate
tau_t       = 0.1 0.1 0.1 0.1 0.1       ; time constant, in ps
ref_t       = 300 300 300 300 300       ; reference temperature, one for each group, in K
; Pressure coupling
pcoupl      = Parrinello-Rahman         ; pressure coupling is on for NPT
pcoupltype  = isotropic                 ; uniform scaling of box vectors
tau_p       = 2.0                       ; time constant, in ps
ref_p       = 1.0                       ; reference pressure, in bar
compressibility  = 4.5e-5               ; isothermal compressibility of water, bar^-1
refcoord_scaling = com
; Periodic boundary conditions
pbc         = xyz           ; 3-D PBC


Now I am getting the following error.


Warning: 1-4 interaction between 3231 and 3234 at distance 10.730 which is
larger than the 1-4 table size 2.400 nm
These are ignored for the rest of the simulation
This usually means your system is exploding,
if not, you should increase table-extension in your mdp file
or with user tables increase the table size

t = 226.610 ps: Water molecule starting at atom 236548 can not be
settled.
Check for bad contacts and/or reduce the timestep.
Wrote pdb files with previous and current coordinates

---
Program mdrun, VERSION 4.0.7
Source code file: ../../../../src/mdlib/nsgrid.c, line: 348

Fatal error:
Number of grid cells is zero. Probably the system and box collapsed.

Kindly help me; I cannot figure out where I am going wrong.



--
Ananya Chatterjee,
Senior 

Re: [gmx-users] GPU warnings

2012-11-16 Thread Szilárd Páll
Hi Thomas,

The output you get means that you don't have any of the macros we try to
use, although your man pages seem to refer to them. Hence, I'm really
clueless as to why this is happening. Could you please file a bug report on
redmine.gromacs.org and include the initial output as well as my patch and
the resulting output? Don't forget to specify the versions of the software
you were using.

Thanks,
--
Szilárd

On Thu, Nov 15, 2012 at 3:53 PM, Thomas Evangelidis teva...@gmail.com wrote:

 Hi Szilárd,

 This is the warning message I get this time:

 WARNING: Oversubscribing the available -66 logical CPU cores with 1
 thread-MPI threads.

  This will cause considerable performance loss!

 I have also attached the md.log file.

 thanks,
 Thomas



 On 14 November 2012 19:48, Szilárd Páll szilard.p...@cbr.su.se wrote:

 Hi Thomas,

 Could you please try applying the attached patch (git apply
 hardware_detect.patch in the 4.6 source root) and let me know what the
 output is?

 This should show which sysconf macro is used and what its return value is
 as well as indicate if none of the macros are in fact defined by your
 headers.

 Thanks,

 --
 Szilárd



 On Sat, Nov 10, 2012 at 5:24 PM, Thomas Evangelidis teva...@gmail.com wrote:



 On 10 November 2012 03:21, Szilárd Páll szilard.p...@cbr.su.se wrote:

 Hi,

 You must have an odd sysconf version! Could you please check what is
 the sysconf system variable's name in the sysconf man page (man sysconf)
 where it says something like:

 _SC_NPROCESSORS_ONLN
  The number of processors currently online.

 The first line should be one of the
 following: _SC_NPROCESSORS_ONLN, _SC_NPROC_ONLN,
 _SC_NPROCESSORS_CONF, _SC_NPROC_CONF, but I guess yours is something
 different.


 The following text is taken from man sysconf:

These values also exist, but may not be standard.

 - _SC_PHYS_PAGES
   The number of pages of physical memory.  Note that it is
 possible for the product of this value and the value of _SC_PAGE_SIZE to
 overflow.

 - _SC_AVPHYS_PAGES
   The number of currently available pages of physical memory.

 - _SC_NPROCESSORS_CONF
   The number of processors configured.

 - _SC_NPROCESSORS_ONLN
   The number of processors currently online (available).




 Can you also check what your glibc version is?


 $ yum list installed | grep glibc
 glibc.i686            2.15-57.fc17   @updates
 glibc.x86_64          2.15-57.fc17   @updates
 glibc-common.x86_64   2.15-57.fc17   @updates
 glibc-devel.i686      2.15-57.fc17   @updates
 glibc-devel.x86_64    2.15-57.fc17   @updates
 glibc-headers.x86_64  2.15-57.fc17   @updates





  On Fri, Nov 9, 2012 at 5:51 PM, Thomas Evangelidis
  teva...@gmail.com wrote:




  I get these two warnings when I run the dhfr/GPU/dhfr-solv-PME.bench
  benchmark with the following command line:
 
  mdrun_intel_cuda5 -v -s topol.tpr -testverlet
 
  WARNING: Oversubscribing the available 0 logical CPU cores with 1
  thread-MPI threads.
 
  0 logical CPU cores? Isn't this bizarre? My CPU is Intel Core
 i7-3610QM
 

 That is bizarre. Could you run with -debug 1 and have a look at the
 mdrun.debug output which should contain a message like:
 Detected N processors, will use this as the number of supported
 hardware
 threads.

 I'm wondering, is N=0 in your case!?

 It says Detected 0 processors, will use this as the number of
 supported hardware threads.



  (2.3 GHz). Unlike Albert, I don't see any performance loss, I get
 13.4
  ns/day on a single core with 1 GPU and 13.2 ns/day with GROMACS
 v4.5.5 on 4
  cores (8 threads) without the GPU. Yet, I don't see any performance
 gain
  with more that 4 -nt threads.
 
  mdrun_intel_cuda5 -v -nt 2 -s topol.tpr -testverlet : 15.4 ns/day
  mdrun_intel_cuda5 -v -nt 3 -s topol.tpr -testverlet : 16.0 ns/day
  mdrun_intel_cuda5 -v -nt 4 -s topol.tpr -testverlet : 16.3 ns/day
  mdrun_intel_cuda5 -v -nt 6 -s topol.tpr -testverlet : 16.2 ns/day
  mdrun_intel_cuda5 -v -nt 8 -s topol.tpr -testverlet : 15.4 ns/day
 

 I guess there is not much point in not using all cores, is there? Note
 that the performance drops after 4 threads because Hyper-Threading with
 OpenMP doesn't always help.


 
  I have also attached my log file (from mdrun_intel_cuda5 -v -s
 topol.tpr
  -testverlet) in case you find it helpful.
 

 I don't see it attached.



 I have attached both mdrun_intel_cuda5.debug and md.log files.  They
 will possibly be filtered by the mailing list but will be delivered to 
 your
 email.

 thanks,
 Thomas





 --

 ==

 Thomas Evangelidis

 PhD student
 University of Athens
 Faculty of Pharmacy
 Department of Pharmaceutical Chemistry
 Panepistimioupoli-Zografou
 157 71 Athens
 GREECE

 email: tev...@pharm.uoa.gr

   teva...@gmail.com


 

Re: [gmx-users] GPU warnings

2012-11-16 Thread Szilárd Páll
Hi Albert,

Apologies for hijacking your thread. Do you happen to have Fedora 17 as
well?

--
Szilárd


On Sun, Nov 4, 2012 at 10:55 AM, Albert mailmd2...@gmail.com wrote:

 hello:

  I am running Gromacs 4.6 GPU on a workstation with two GTX 660 Ti (2 x
 1344 CUDA cores), and I got the following warnings:

 thank you very much.

 --- messages ---

 WARNING: On node 0: oversubscribing the available 0 logical CPU cores per
 node with 2 MPI processes.
  This will cause considerable performance loss!

 2 GPUs detected on host boreas:
   #0: NVIDIA GeForce GTX 660 Ti, compute cap.: 3.0, ECC:  no, stat:
 compatible
   #1: NVIDIA GeForce GTX 660 Ti, compute cap.: 3.0, ECC:  no, stat:
 compatible

 2 GPUs auto-selected to be used for this run: #0, #1

 Using CUDA 8x8x8 non-bonded kernels
 Making 1D domain decomposition 1 x 2 x 1

 * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING *
 We have just committed the new CPU detection code in this branch,
 and will commit new SSE/AVX kernels in a few days. However, this
 means that currently only the NxN kernels are accelerated!
 In the mean time, you might want to avoid production runs in 4.6.



Re: [gmx-users] Re: Umbrella sampling question

2012-11-16 Thread Gmx QA
Thanks Erik!

/PK

2012/11/16 Erik Marklund er...@xray.bmc.uu.se

 Hi,

 Blindly defining the center of mass for a group of atoms is not possible
 in a periodic system such as a typical simulation box. You need some clue
 as to which periodic copy of each atom is to be chosen. By providing
 pull_pbcatom0 you tell gromacs to, for every atom in grp0, use the periodic
 copy closest to the atom given by pull_pbcatom0. If you have large
 pullgroups this is necessary to define the inter-group distance in a way
 that makes sense. If you get different results depending on that setting
 you really need to figure out which atom is a good center for your
 calculations. The default behavior is to use the atom whose *index* is in
 the center of the group. If you for example have a dimeric protein this may
 correspond to the C-terminus of the first chain or the N-terminus of the
 second one, which in turn often doesn't coincide with the geometrical
 center of the group. I suggest you try yet another choice of pull_pbcatom0
 that is also close to the center to see if that also gives rise to a
 different distance. As mentioned, the choice of pull_pbcatom0 should not
 matter as long as it allows gromacs to figure out how to handle the
 periodicity.
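The role of the pbc atom can be sketched with a toy one-dimensional model (equal masses, hypothetical helper names; an illustration of the idea, not GROMACS's actual pull code):

```python
def nearest_image(x, ref, box):
    """Shift x by whole box lengths so it lands in the periodic image
    closest to ref (1D; apply per dimension for a rectangular box)."""
    return x - box * round((x - ref) / box)

def com_about_ref(coords, ref, box):
    """Center of mass (equal masses), using for each atom the periodic
    copy closest to the reference atom, as pull_pbcatom0 prescribes."""
    wrapped = [nearest_image(x, ref, box) for x in coords]
    return sum(wrapped) / len(wrapped)

# Two atoms straddling the boundary of a 10 nm box, reference at 9.5 nm:
# the atom at 0.2 is mapped to its copy at 10.2, so the COM is ~10.0 nm.
print(com_about_ref([9.8, 0.2], ref=9.5, box=10.0))
```

If the group spans a large fraction of the box, a reference near one end versus one near the middle can select different periodic copies for some atoms and therefore give a different COM, which is consistent with grompp reporting different starting distances for different pbc atoms.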

 Best,

 Erik

 On 15 Nov 2012, at 19.56, Gmx QA wrote:

  Hi Chris
 
  Seems my confusion was that I assumed that the distances in the
  profile.xvg file should correspond to something I could measure with
  g_dist. Turns out they do not.
  Thank you for helping me sorting out this, I got it now :-)
 
  About pull_pbcatom0 though. My box is > 2*1.08 nm in all directions:
  $ tail -n 1 conf0.gro
   12.45770  12.45770  17.99590
 
  I am still not sure what pull_pbcatom0 does. You said it should not have
  any effect on my results, but changing it does result in a different
  initial distance reported by grompp.
 
  In my initial attempts at this, I did not specify anything for
  pull_pbcatom0, but in the grompp output I get this
 
  Pull group  natoms  pbc atom  distance at start reference at t=0
0 21939 10970
1 1 0   2.083 2.083
  Estimate for the relative computational load of the PME mesh part: 0.10
  This run will generate roughly 761 Mb of data
 
 
  Then, following the advice in the thread I referred to earlier, I set
  pull_pbcatom0 explicitly in the mdp-file to be an atom close to the COM
 of
  the Protein. Then I get from grompp
 
  Pull group  natoms  pbc atom  distance at start reference at t=0
0 21939  7058
1 1 0   1.808 1.808
  Estimate for the relative computational load of the PME mesh part: 0.10
  This run will generate roughly 761 Mb of data
 
  As you can see, the initial distance is different (2.083 vs 1.808), and
  1.808 is the same as the distance reported by g_dist. Do you have any
  comments here as to why this is?
 
  Thanks
  /PK
 
 
  What you reported is not what you did. It appears that grompp, gtraj, and
  g_dist report the same value.
  Please also report the value that you get from your pullx.xvg file that
 you
  get from mdrun, which I suspect
  will also be the same.
 
  The difference that you report is actually between the first FRAME of
 your
  trajectory from g_dist
  and the first LINE of the file from the g_wham output. I see no reason to
  assume that the values in the
  output of g_wham must be time-ordered. Also, I have never used g_wham
  myself (I use an external program
  to do wham) and so I can not say if you are using it correctly.
 
  My overall conclusion is that you need to investigate g_wham output not
  worry about a new run at this stage.
 
  Regarding pull_pbcatom0, there is lots of information on the mailing list
  about this. It is a global atom number
  that defines the unit cell for selection of which periodic image of each
  molecule will be used for the pulling.
  If all of your box dimensions are > 2*1.08 nm, then pull_pbcatom0 will
  not affect your results.
 
  Chris.

 ---
 Erik Marklund, PhD
 Dept. of Cell and Molecular Biology, Uppsala University.
 Husargatan 3, Box 596, 75124 Uppsala, Sweden
 phone: +46 18 471 6688    fax: +46 18 511 755
 er...@xray.bmc.uu.se
 http://www2.icm.uu.se/molbio/elflab/index.html


Re: [gmx-users] problem in running md simulation

2012-11-16 Thread Kavyashree M
Oh, I am sorry, that is right. But it is difficult
to find the specific cutoff values to be used
for the different protocols of cutoff, switch and shift;
different values are stated in different papers,
and the original force field paper (e.g. OPLS-AA) does
not explicitly specify these values.
Any references regarding this would be helpful for
the users.

bye
kavya

On Fri, Nov 16, 2012 at 8:52 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 11/16/12 10:10 AM, Kavyashree M wrote:

 Hi Ananya,

 Can you try with rvdw 0.9 nm and rcoulomb 1.4 nm?
 The vdW interaction decreases as 1/r^6, while the coulombic
 interaction decreases as 1/r, so it would be better if
 you consider the coulombic interaction over a longer distance
 than the vdW interaction.


 One should not make haphazard changes to cutoffs.  They are part of the
 force field.  Changing them without basis can invalidate the force field
 model.

 -Justin

  bye
 kavya

 On Fri, Nov 16, 2012 at 8:32 PM, ananyachatterjee
 ananyachatter...@iiserkol.ac.in wrote:

  Hi all,

 As suggested by Venkat I have energy minimised it to 1000 kJ/mol, but
 even now I am getting the same error, saying

 Warning: 1-4 interaction between 3230 and 3233 at distance 3.573 which is
 larger than the 1-4 table size 2.400 nm
 These are ignored for the rest of the simulation
 This usually means your system is exploding,
 if not, you should increase table-extension in your mdp file
 or with user tables increase the table size

 t = 90.674 ps: Water molecule starting at atom 236548 can not be settled.
 Check for bad contacts and/or reduce the timestep.
 Wrote pdb files with previous and current coordinates

 ---
 Program mdrun, VERSION 4.0.7
 Source code file: ../../../../src/mdlib/nsgrid.c, line: 348

 Fatal error:
 Number of grid cells is zero. Probably the system and box collapsed.

 can anyone suggest me what to do now.

 Ananya Chatterejee



   On Thu, 15 Nov 2012 11:38:17 +0530, Venkat Reddy wrote:

  I think the system is not well energy minimized. Do it to 1000 kJ/mol.
 Also check for bad contacts in your starting structure using a
 Ramachandran plot.
 One more important thing is that, you have to generate an index file
 with
 Protein_GTP as one group and water_Ions as another. Then change your
   tc-groups as

 tc-grps = Protein_GTP   Water_Ions
 tau_t   =  0.1 0.1 ; time constant, in ps
 ref_t   =   300   300


 On Thu, Nov 15, 2012 at 10:50 AM, ananyachatterjee
 ananyachatter...@iiserkol.ac.in wrote:

   Hi all,


 I was running an MD simulation of a protein complexed with GTP in water,
 neutralised with Mg2+ and Cl- ions. I have also energy minimised the
 system to 2000 kJ/mol and equilibrated the water molecules at 300 K and
 1 bar. I then ran the MD simulation using the following parameters:

 title   = Protein-ligand complex
 ; Run parameters
 integrator  = md; leap-frog integrator
 nsteps  = 500000   ; 2 * 500000 = 1000 ps (1 ns)
 dt  = 0.002 ; 2 fs
 ; Output control
 nstxout = 500  ; save coordinates every 1ps
 nstvout = 500   ; save velocities every 1 ps
 nstenergy   = 500  ; save energies every 1 ps
 nstlog  = 500  ; update log file every 1 ps
 nstxtcout   = 500  ; write .xtc trajectory every 1 ps
 energygrps  = Protein GTP SOL MG2+
 ; Bond parameters
 constraints = none  ; no constraints
 ; Neighborsearching
 ns_type = grid  ; search neighboring grid cells
 nstlist = 5 ; 10 fs
 rlist   = 0.9   ; short-range neighborlist cutoff (in nm)
 rcoulomb= 0.9   ; short-range electrostatic cutoff (in nm)
 rvdw= 1.4   ; short-range van der Waals cutoff (in nm)
 ; Temperature coupling
 tcoupl  = v-rescale ; modified Berendsen thermostat
 tc-grps = Protein GTP   SOL  MG2+  CL- ; two coupling groups - more accurate
 tau_t   = 0.1 0.1   0.1  0.1  0.1 ; time constant, in ps
 ref_t   = 300 300   300  300  300 ; reference temperature, one for each group, in K
 ; Pressure coupling
 pcoupl  = Parrinello-Rahman ; pressure coupling is on for NPT
 pcoupltype  = isotropic ; uniform scaling of box vectors
 tau_p   = 2.0   ; time constant, in ps
 ref_p   = 1.0   ; reference pressure, in bar
 compressibility = 4.5e-5    ; isothermal compressibility of water, bar^-1
 refcoord_scaling= com
 ; Periodic boundary conditions
 pbc = xyz   ; 3-D PBC


 Now I am getting the following error.


 Warning: 1-4 interaction between 3231 and 3234 at distance 10.730 which
 is
 larger than the 1-4 table size 2.400 nm
 These are ignored for the rest of the simulation
 This usually means your system is exploding,
 if not, 

[gmx-users] NMA Fatal Error

2012-11-16 Thread Yao Yao
Hi Gmxers,

I am doing a protein NMA with the mdp file like,

===


define  = -DEFLEXIBLE
constraints = none
integrator  = nm ;
emtol   = 0.1
emstep  = 0.1
nsteps  = 4000  ; Maximum number of (minimization) steps to perform

; Parameters describing how to find the neighbors of each atom and how to 
calculate the interactions
nstlist = 0 ; Frequency to update the neighbor list and long range 
forces
ns_type = simple    ; Method to determine neighbor list (simple, 
grid)
;vdwtype = switch
vdwtype = cut-off
rlist   = 0.0   ; Cut-off for making neighbor list (short range forces)
;coulombtype    = PME-switch    ; Treatment of long range electrostatic 
interactions
coulombtype = cut-off
;rcoulomb   = 1.2   ; Short-range electrostatic cut-off
rcoulomb    = 0.0
;rvdw   = 1.2   ; Short-range Van der Waals cut-off
rvdw    = 0.0
pme_order   = 4
fourierspacing  = 0.12
fourier_nx  = 0
fourier_ny  = 0
fourier_nz  = 0
optimize_fft    = yes
pbc = no
=
 However, it shows 


Fatal error:
Constraints present with Normal Mode Analysis, this combination is not 
supported

Since I set constraints = none, I really do not get it. Can someone help me?

thanks,

Yao



Re: [gmx-users] NMA Fatal Error

2012-11-16 Thread Justin Lemkul



On 11/16/12 3:43 PM, Yao Yao wrote:

Hi Gmxers,

I am doing a protein NMA with the mdp file like,

===


define  = -DEFLEXIBLE
constraints = none
integrator  = nm ;
emtol   = 0.1
emstep  = 0.1
nsteps  = 4000  ; Maximum number of (minimization) steps to perform

; Parameters describing how to find the neighbors of each atom and how to 
calculate the interactions
nstlist = 0 ; Frequency to update the neighbor list and long range 
forces
ns_type = simple; Method to determine neighbor list (simple, 
grid)
;vdwtype = switch
vdwtype = cut-off
rlist   = 0.0   ; Cut-off for making neighbor list (short range forces)
;coulombtype= PME-switch; Treatment of long range electrostatic 
interactions
coulombtype = cut-off
;rcoulomb   = 1.2   ; Short-range electrostatic cut-off
rcoulomb= 0.0
;rvdw   = 1.2   ; Short-range Van der Waals cut-off
rvdw= 0.0
pme_order   = 4
fourierspacing  = 0.12
fourier_nx  = 0
fourier_ny  = 0
fourier_nz  = 0
optimize_fft= yes
pbc = no
=
  However, it shows


Fatal error:
Constraints present with Normal Mode Analysis, this combination is not 
supported

Since I set constraints = none, I really do not get it. Can someone help me?



The problem is a typo.  You've set -DEFLEXIBLE instead of -DFLEXIBLE so 
rather than having flexible water as intended, you've got rigid water via the 
SETTLE algorithm, and constraints are still present.
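Per the diagnosis above, the fix is a one-character change to the define line in the .mdp file (quoting only the corrected line):

```
; was: define = -DEFLEXIBLE   (an undefined macro, so rigid SETTLE water remained)
define = -DFLEXIBLE
```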


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Force vs distance plot in pulling simulation?

2012-11-16 Thread Justin Lemkul



On 11/16/12 10:45 AM, Gmx QA wrote:

Hello gmx-users,

I've performed a pulling simulation and obtained a force-vs-time plot and a
distance-vs-time plot (xvg-files).
Is it common to combine these into a force-vs-distance plot using
a hacked-together script, or how do people who have experience with
pulling generally make such a plot? I have read a bunch of papers where
such figures are presented, but there does not seem to be any built-in way
in gromacs to make them. I could be wrong, of course.



The only solution is to write a simple script that parses out the columns you 
want.
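A minimal sketch of such a script (assuming two-column force-vs-time and distance-vs-time .xvg files written at the same output times; the column layout of your files may differ):

```python
def read_xvg(lines):
    """Parse a GROMACS .xvg body: skip '#' and '@' header lines,
    return (time, value) pairs from the first two columns."""
    out = []
    for line in lines:
        line = line.strip()
        if not line or line[0] in "#@":
            continue
        t, v = line.split()[:2]
        out.append((float(t), float(v)))
    return out

def force_vs_distance(force_lines, dist_lines):
    """Join force-vs-time and distance-vs-time on the time column."""
    force = dict(read_xvg(force_lines))
    return [(d, force[t]) for t, d in read_xvg(dist_lines) if t in force]

# Toy data standing in for the pullf/pullx output files:
pullf = ['@ title "force"', "0.0  10.0", "1.0  12.5"]
pullx = ['@ title "distance"', "0.0  1.50", "1.0  1.70"]
print(force_vs_distance(pullf, pullx))  # -> [(1.5, 10.0), (1.7, 12.5)]
```

The resulting (distance, force) pairs can be written back out or plotted directly; matching on the textual time column avoids interpolation as long as both files share the same output frequency.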

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Fe(2+) nonbonded parameters

2012-11-16 Thread Justin Lemkul



On 11/16/12 4:01 AM, Steven Neumann wrote:

On Thu, Nov 15, 2012 at 5:51 PM, Justin Lemkul jalem...@vt.edu wrote:



On 11/15/12 12:47 PM, Steven Neumann wrote:


So what would you do to get those parameters asap?



Get what parameters?  The ones shown below (except Cu2+) have no citation
and no one has vouched for their authenticity.  As such, the decision was
made to delete them to prevent anyone from blindly using them, hoping that
they are right.  Given this information, it would be unwise to use them
unless, as I said, you know where they came from and believe them to be
suitable.

-Justin


I found the source of the Fe(2+) parameters below from QM/MC simulations:

http://www.sciencedirect.com/science/article/pii/S0009261407014388



Thanks for finding this.  I will encourage a citation to be added to the 
OPLS force field files regarding these parameters.



Please see table 1. I think it is a reasonable source for using
Fe(2+) in aqueous solution with a protein.
Would you please comment?



It's up to you whether you believe the parameters are reliable enough for 
whatever it is that you're doing.  I can't assess that for you.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] partial charges and radius setting

2012-11-16 Thread Justin Lemkul



On 11/16/12 3:12 AM, Rajiv Gandhi wrote:

Dear Gmx users,

I want to know how to set particular values for the effective partial
charges of a CO ligand in the topology file.

For the non-bonded interaction (12-6 Lennard-Jones potential function), how
do I set the radius and well-depth parameters for CO?
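For reference, the 12-6 form being asked about can be sketched as follows, with sigma as the size (radius) parameter and epsilon as the well depth; the numbers below are purely illustrative, not CO parameters:

```python
import math

def lj(r, sigma, epsilon):
    """12-6 Lennard-Jones potential: V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# sigma is the distance where V crosses zero; the well depth -epsilon sits
# at r_min = 2**(1/6) * sigma.  Illustrative values, not CO parameters:
sigma, eps = 0.34, 1.0
r_min = 2.0 ** (1.0 / 6.0) * sigma
assert abs(lj(sigma, sigma, eps)) < 1e-12        # V(sigma) = 0
assert abs(lj(r_min, sigma, eps) + eps) < 1e-12  # V(r_min) = -epsilon
```

Whatever values end up in the topology must come from the force field's own parameterization protocol, as the reply below stresses.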



Parameterization methodology depends on the force field you're using.

http://www.gromacs.org/Documentation/How-tos/Parameterization

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Dihedral form

2012-11-16 Thread Mark Abraham
And get the use of radians right!

Mark
On Nov 15, 2012 4:01 PM, Erik Marklund er...@xray.bmc.uu.se wrote:

 Hi,

 On 15 Nov 2012, at 15.41, Laura Leay wrote:

  Thanks Erik,
 
  Just to clarify (I hope this notation is in fact clear):
 
  E = 0.5k [ 1 - cos( n*phi - n*phi_o + 180 ) ]   (Dreiding)
    = 0.5k [ 1 + cos( n*phi - n*phi_o ) ]         (Dreiding converted to the form in Gromacs)
 
  This would mean that:
0.5k in Dreiding = k in Gromacs
n in Dreiding = n in Gromacs
n*phi_o +180 in Dreiding (original form) is phi_s in the Gromacs
 notation from the original post
 

 I think that's correct.
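The conversion can be checked numerically (arbitrary test values for k, n and phi_o; note the degree-to-radian conversion inside the cosines, per the warning about radians elsewhere in this thread):

```python
import math

def dreiding(phi_deg, k, n, phi0_deg):
    """Dreiding paper form: E = 0.5*k*(1 - cos(n*(phi - phi0)))."""
    return 0.5 * k * (1.0 - math.cos(math.radians(n * (phi_deg - phi0_deg))))

def gromacs_pdih(phi_deg, kg, n, phis_deg):
    """Gromacs proper-dihedral form: E = k*(1 + cos(n*phi - phi_s))."""
    return kg * (1.0 + math.cos(math.radians(n * phi_deg - phis_deg)))

# Conversion from the thread: k_gmx = 0.5*k_dreiding, phi_s = n*phi_o + 180 deg.
k, n, phi0 = 8.0, 3, 60.0           # arbitrary test values
kg, phis = 0.5 * k, n * phi0 + 180.0
for phi in range(0, 360, 15):
    assert abs(dreiding(phi, k, n, phi0) - gromacs_pdih(phi, kg, n, phis)) < 1e-9
```

The loop passing for all angles confirms that shifting the reference angle by 180 degrees flips the sign of the cosine, so the two forms coincide term by term.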

 
  I hope this makes sense!
 
  Laura
 
 
  
  From: gmx-users-boun...@gromacs.org [gmx-users-boun...@gromacs.org] on
 behalf of Erik Marklund [er...@xray.bmc.uu.se]
  Sent: 15 November 2012 13:37
  To: Discussion list for GROMACS users
  Subject: Re: [gmx-users] Dihedral form
 
  You could shift the reference angle by pi, which changes the sign of the
 cosine.
 
  Best,
 
  Erik
 
  On 15 Nov 2012, at 14.25, Laura Leay wrote:
 
  All,
 
  I would like to parameterise the Dreiding force field for use with
 Gromacs. One thing I am not sure about is how to parameterise the dihedrals
 
  The Dreiding paper has the form;
 
  E= 0.5k { 1 - cos[ n( phi - phi_o)]}
 
  However I cannot find this form in the Gromacs manual. The closest I
 can find in the Gromacs manual is:
 
  E = k [ 1 + cos(n*phi - phi_s) ]
 
 
  Does anyone know of a way to use the Dreiding form in Gromacs, or to
 convert to a form that is more suitable for use with Gromacs?
 
  Many thanks,
  Laura
 
  ---
  Erik Marklund, PhD
  Dept. of Cell and Molecular Biology, Uppsala University.
  Husargatan 3, Box 596, 75124 Uppsala, Sweden
  phone: +46 18 471 6688    fax: +46 18 511 755
  er...@xray.bmc.uu.se
  http://www2.icm.uu.se/molbio/elflab/index.html
 

 ---
 Erik Marklund, PhD
 Dept. of Cell and Molecular Biology, Uppsala University.
 Husargatan 3, Box 596, 75124 Uppsala, Sweden
 phone: +46 18 471 6688    fax: +46 18 511 755
 er...@xray.bmc.uu.se
 http://www2.icm.uu.se/molbio/elflab/index.html



[gmx-users] NMA protocol

2012-11-16 Thread Yao Yao
Hi Gmxers,


For Normal Mode Analysis (NMA), even though I ran several rounds of
minimization aiming for machine precision, with stepwise smaller emtol, it
still did not converge to 0.001; the Fmax is about 0.5 or so. It is just
lysozyme in 200 water molecules.

I wonder if there is a systematic way to guarantee convergence, or whether I
have to wait for luck.
Because I am pretty sure that if I continue with the NMA, I will get
translational and rotational modes in the final eigenfrequencies.

thanks,

Yao   





Re: [gmx-users] NMA protocol

2012-11-16 Thread Justin Lemkul



On 11/16/12 6:44 PM, Yao Yao wrote:

Hi Gmxers,


For Normal Mode Analysis (NMA), even though I ran several rounds of
minimization aiming for machine precision, with stepwise smaller emtol, it
still did not converge to 0.001; the Fmax is about 0.5 or so. It is just
lysozyme in 200 water molecules.



Are you using double precision?  In any case, it may not be possible to reach 
such a low Fmax.  I'm no NMA expert, but generally isn't an Fmax < 1 or so 
considered acceptable?



I wonder if there is a systematic way to guarantee convergence, or whether I
have to wait for luck.
Because I am pretty sure that if I continue with the NMA, I will get
translational and rotational modes in the final eigenfrequencies.



Again, I'm no expert on NMA, but I doubt you can ever prevent the emergence of 
global translation and rotation; you simply neglect the first 6 eigenvectors (as is 
stated in g_nmtraj, for instance).


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin


--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Gromacs 4.6 segmentation fault with mdrun

2012-11-16 Thread Roland Schulz
Hi Raf,

which version of Gromacs did you use? If you used branch nbnxn_hybrid_acc
please use branch release-4-6 instead and see whether that fixes your
issue. If not please open a bug and upload your log file and your tpr.

Roland


On Thu, Nov 15, 2012 at 5:13 PM, Raf Ponsaerts 
raf.ponsae...@med.kuleuven.be wrote:

 Hi Szilárd,

 I assume I get the same segmentation fault error as Sebastian (don't
 shoot if not so). I have 2 NVIDA GTX580 cards (and 4x12-core amd64
 opteron 6174).

 in brief :
 Program received signal SIGSEGV, Segmentation fault.
 [Switching to Thread 0x7fffc07f8700 (LWP 32035)]
 0x761de301 in nbnxn_make_pairlist.omp_fn.2 ()
 from /usr/local/gromacs/bin/../lib/libmd.so.6

 Also -nb cpu with Verlet cutoff-scheme results in this error...

 gcc 4.4.5 (Debian 4.4.5-8), Linux kernel 3.1.1
 CMake 2.8.7

 If I attach the mdrun.debug output file to this mail, the mail to the
 list gets bounced by the mailserver (because mdrun.debug  50 Kb).

 Hoping this might help,

 regards,

 raf
 ===
 compiled code :
 commit 20da7188b18722adcd53088ec30e5f256af62f20
 Author: Szilard Pall pszil...@cbr.su.se
 Date:   Tue Oct 2 00:29:33 2012 +0200

 ===
 (gdb) exec mdrun
 (gdb) run -debug 1 -v -s test.tpr

 Reading file test.tpr, VERSION 4.6-dev-20121002-20da718 (single
 precision)
 [New Thread 0x73844700 (LWP 31986)]
 [Thread 0x73844700 (LWP 31986) exited]
 [New Thread 0x73844700 (LWP 31987)]
 [Thread 0x73844700 (LWP 31987) exited]
 Changing nstlist from 10 to 50, rlist from 2 to 2.156

 Starting 2 tMPI threads
 [New Thread 0x73844700 (LWP 31992)]
 Using 2 MPI threads
 Using 24 OpenMP threads per tMPI thread

 2 GPUs detected:
   #0: NVIDIA GeForce GTX 580, compute cap.: 2.0, ECC:  no, stat:
 compatible
   #1: NVIDIA GeForce GTX 580, compute cap.: 2.0, ECC:  no, stat:
 compatible

 2 GPUs auto-selected to be used for this run: #0, #1


 Back Off! I just backed up ctab14.xvg to ./#ctab14.xvg.1#
 Initialized GPU ID #1: GeForce GTX 580
 [New Thread 0x73043700 (LWP 31993)]

 Back Off! I just backed up dtab14.xvg to ./#dtab14.xvg.1#

 Back Off! I just backed up rtab14.xvg to ./#rtab14.xvg.1#
 [New Thread 0x71b3c700 (LWP 31995)]
 [New Thread 0x7133b700 (LWP 31996)]
 [New Thread 0x70b3a700 (LWP 31997)]
 [New Thread 0x7fffebfff700 (LWP 31998)]
 [New Thread 0x7fffeb7fe700 (LWP 31999)]
 [New Thread 0x7fffeaffd700 (LWP 32000)]
 [New Thread 0x7fffea7fc700 (LWP 32001)]
 [New Thread 0x7fffe9ffb700 (LWP 32002)]
 [New Thread 0x7fffe97fa700 (LWP 32003)]
 [New Thread 0x7fffe8ff9700 (LWP 32004)]
 [New Thread 0x7fffe87f8700 (LWP 32005)]
 [New Thread 0x7fffe7ff7700 (LWP 32006)]
 [New Thread 0x7fffe77f6700 (LWP 32007)]
 [New Thread 0x7fffe6ff5700 (LWP 32008)]
 [New Thread 0x7fffe67f4700 (LWP 32009)]
 [New Thread 0x7fffe5ff3700 (LWP 32010)]
 [New Thread 0x7fffe57f2700 (LWP 32011)]
 [New Thread 0x7fffe4ff1700 (LWP 32012)]
 [New Thread 0x7fffe47f0700 (LWP 32013)]
 [New Thread 0x7fffe3fef700 (LWP 32014)]
 [New Thread 0x7fffe37ee700 (LWP 32015)]
 [New Thread 0x7fffe2fed700 (LWP 32016)]
 [New Thread 0x7fffe27ec700 (LWP 32017)]
 Initialized GPU ID #0: GeForce GTX 580
 Using CUDA 8x8x8 non-bonded kernels
 [New Thread 0x7fffe1feb700 (LWP 32018)]
 [New Thread 0x7fffe0ae4700 (LWP 32019)]
 [New Thread 0x7fffcbfff700 (LWP 32020)]
 [New Thread 0x7fffcb7fe700 (LWP 32021)]
 [New Thread 0x7fffcaffd700 (LWP 32022)]
 [New Thread 0x7fffca7fc700 (LWP 32023)]
 [New Thread 0x7fffc9ffb700 (LWP 32024)]
 [New Thread 0x7fffc97fa700 (LWP 32025)]
 [New Thread 0x7fffc8ff9700 (LWP 32026)]
 [New Thread 0x7fffc3fff700 (LWP 32027)]
 [New Thread 0x7fffc37fe700 (LWP 32028)]
 [New Thread 0x7fffc2ffd700 (LWP 32029)]
 [New Thread 0x7fffc27fc700 (LWP 32031)]
 [New Thread 0x7fffc1ffb700 (LWP 32032)]
 [New Thread 0x7fffc17fa700 (LWP 32033)]
 [New Thread 0x7fffc0ff9700 (LWP 32034)]
 [New Thread 0x7fffc07f8700 (LWP 32035)]
 [New Thread 0x7fffbfff7700 (LWP 32036)]
 [New Thread 0x7fffbf7f6700 (LWP 32037)]
 [New Thread 0x7fffbeff5700 (LWP 32038)]
 [New Thread 0x7fffbe7f4700 (LWP 32039)]
 [New Thread 0x7fffbdff3700 (LWP 32040)]
 [New Thread 0x7fffbd7f2700 (LWP 32042)]
 [New Thread 0x7fffbcff1700 (LWP 32043)]
 Making 1D domain decomposition 2 x 1 x 1

 * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING *
 We have just committed the new CPU detection code in this branch,
 and will commit new SSE/AVX kernels in a few days. However, this
 means that currently only the NxN kernels are accelerated!
 In the mean time, you might want to avoid production runs in 4.6.


 Back Off! I just backed up traj.trr to ./#traj.trr.1#

 Back Off! I just backed up traj.xtc to ./#traj.xtc.1#

 Back Off! I just backed up ener.edr to ./#ener.edr.1#
 starting mdrun 'Protein in water'
 10 steps,200.0 ps.

 Program received signal SIGSEGV, Segmentation fault.
 [Switching to Thread 0x7fffc07f8700 (LWP 32035)]
 0x761de301 in nbnxn_make_pairlist.omp_fn.2 ()
 from /usr/local/gromacs/bin/../lib/libmd.so.6
 (gdb)

 

[gmx-users] Re: hydrophobic contacts

2012-11-16 Thread Raj
Hi all

thanks for your valuable suggestions. But still i'm not clear. I have tried
using the .tpr file with make_ndx but the index group is displayed is
similar to that of the one with .gro file. I have manually identified the
residues and when i ran the g_mindist with the ligand 0f 18 atoms against
the residues of 174 atoms i'm getting contacts ranging from 87 to 40. please
help me with stepwise instruction as I cant follow the thing. Thanks in
advance



--
View this message in context: 
http://gromacs.5086.n6.nabble.com/hydrophobic-contacts-tp4998153p5003043.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists