[gmx-users] strange lincs warning with version 4.6

2012-11-23 Thread sebastian

Dear GROMACS users,

I installed the git GROMACS VERSION 4.6-dev-20121117-7a330e6-dirty on my 
local desktop (2*GTX 670 + i7) and everything works as smoothly as 
possible. The outcomes are very reasonable and match those of the 
4.5.5 version without GPU acceleration. On our cluster (M2090 + 2*Xeon 
X5650) I installed VERSION 4.6-dev-20121120-0290409. Using the same 
.tpr file used for runs on my desktop, I get LINCS warnings that the 
water molecules can't be settled.

My .mdp file looks like:

title               =  ttt
cpp                 =  /lib/cpp
include             =  -I../top
constraints         =  hbonds
integrator          =  md
cutoff-scheme       =  verlet

;define             =  -DPOSRES    ; for position restraints

dt                  =  0.002       ; ps !
nsteps              =  1
nstcomm             =  25          ; frequency for center of mass motion removal

nstcalcenergy       =  25
nstxout             =  10          ; frequency for writing the trajectory
nstvout             =  10          ; frequency for writing the velocities
nstfout             =  10          ; frequency to write forces to the output trajectory

nstlog              =  1           ; frequency to write the log file
nstenergy           =  1           ; frequency to write energies to the energy file

nstxtcout           =  1

xtc_grps            =  System

nstlist             =  25          ; frequency to update the neighbor list
ns_type             =  grid        ; make a grid in the box and only check atoms in neighboring grid cells when constructing a new neighbor list
rlist               =  1.4         ; cut-off distance for the short-range neighbor list

coulombtype         =  PME         ; fast Particle-Mesh Ewald electrostatics
rcoulomb            =  1.4         ; cut-off distance for the Coulomb field

vdwtype             =  cut-off
rvdw                =  1.4         ; cut-off distance for the vdW field
fourierspacing      =  0.12        ; maximum grid spacing for the FFT grid

pme_order           =  6           ; interpolation order for PME
optimize_fft        =  yes
pbc                 =  xyz
Tcoupl              =  v-rescale
tc-grps             =  System
tau_t               =  0.1
ref_t               =  300

energygrps          =  Protein Non-Protein

Pcoupl              =  no          ; berendsen
tau_p               =  0.1
compressibility     =  4.5e-5
ref_p               =  1.0
nstpcouple          =  5
refcoord_scaling    =  all
Pcoupltype          =  isotropic
gen_vel             =  no          ; yes
gen_temp            =  300
gen_seed            =  -1


Thanks a lot

Sebastian
--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

Re: [gmx-users] Re: i don't know how i can determine emtol

2012-11-23 Thread Ivan Gladich

On 11/22/2012 09:32 PM, Ali Alizadeh wrote:


1- In your opinion, can I simulate that system?

In my (humble) opinion:

1) Of course you can simulate that system... however, I doubt that, without
starting from the exact initial configuration with exactly the same set-up,
you can get the same results (i.e. see the nucleation).
The onset of ice nucleation is a random process and requires very long
simulations (the paper that you posted was analysing microsecond
trajectories!).
There is a risk that you could try several different initial
configurations at several temperatures without getting anything.
However, read that paper carefully; I do not remember all the details.

If you are interested in ice crystal growth, I would suggest starting with
an initial water/ice system: at temperatures below the melting point you
will see new ice forming from the initial ice matrix.

I do not know how to obtain an ice crystal for the initial step. I have
studied papers, and I know that the formation and dissociation of ice
crystals are very difficult, and that we can use this method (as you said
above) for melting or freezing (if I understand correctly).

My problem is this: how can I have a crystal in the initial step?



You have three options:

1) Surf the web searching for a structure.

2) Try to implement the proton-disordering algorithm of Buch et al.
(V. Buch, P. Sandler and J. Sadlej, J. Phys. Chem. B, 1998, 102, 8641–8653),
which specifies orientations of the water molecules such that the ice Ih
Bernal–Fowler constraints are satisfied for each molecule
(J. D. Bernal and R. H. Fowler, J. Chem. Phys., 1933, 1, 515–548).

3) Someone kindly offers you their own structure, e.g.

http://marge.uochb.cas.cz/~gladich/Teaching.html
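The published algorithm of Buch et al. is more involved, but the core idea of option 2 can be sketched in a few lines. As an illustration only (not the published method): every oxygen in ice Ih has four hydrogen-bonded neighbours, and a valid proton arrangement assigns each O–O bond a donor such that every molecule donates exactly two hydrogens (the Bernal–Fowler ice rules). Here a 4-dimensional hypercube graph stands in for the 4-coordinated ice lattice, and a simple Monte-Carlo flip search finds a valid orientation:

```python
import random

def hypercube_edges(dim=4):
    # 4-dimensional hypercube: every vertex has exactly 4 neighbours,
    # mimicking the 4-coordination of oxygen atoms in ice Ih
    edges = []
    for v in range(2 ** dim):
        for b in range(dim):
            w = v ^ (1 << b)
            if v < w:
                edges.append((v, w))
    return edges

def proton_disorder(edges, n_nodes, rng):
    # direction[i] == True means edge (v, w) donates a proton v -> w
    direction = [rng.random() < 0.5 for _ in edges]

    def out_degrees():
        out = [0] * n_nodes
        for (v, w), d in zip(edges, direction):
            out[v if d else w] += 1
        return out

    def cost():
        # how far each molecule is from donating exactly two protons
        return sum(abs(o - 2) for o in out_degrees())

    c = cost()
    while c > 0:
        i = rng.randrange(len(edges))
        direction[i] = not direction[i]
        c_new = cost()
        if c_new <= c:              # accept downhill and sideways moves
            c = c_new
        else:                       # reject uphill moves
            direction[i] = not direction[i]
    return direction, out_degrees()

rng = random.Random(42)
edges = hypercube_edges()
_, out = proton_disorder(edges, 16, rng)
print(all(o == 2 for o in out))  # every "molecule" donates exactly two H
```

A real implementation would work on the actual ice Ih oxygen lattice and additionally drive the total dipole moment toward zero, as Buch et al. describe.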

That's right.

May I ask how you constructed it? You used TIP5P, but I do not know your
force field.

In fact I should construct this structure myself. Can you give me some
advice so that I can construct it?



I constructed the structure using method 2), Buch's algorithm.
Concerning TIP5P and TIP5P-Ew, here are the references:

1) Mahoney and Jorgensen, J. Chem. Phys., 2000, 112(20)
2) S. W. Rick, J. Chem. Phys., 2004, 120, 6085–6093.

All the best
Ivan

--
--
Ivan Gladich, Ph.D.
Postdoctoral Fellow
Academy of Sciences of the Czech Republic
Institute of Organic Chemistry and Biochemistry AS CR, v.v.i.
Flemingovo nám. 2.
166 10 Praha 6
Czech Republic

Tel: +420775504164
e-mail: ivan.glad...@uochb.cas.cz
web page: http://www.molecular.cz/~gladich/
-



[gmx-users] Query about GPU version of Gromacs

2012-11-23 Thread Pruthvi Bejugam
Hi all,

   Can anybody suggest which is the most stable GPU version of
GROMACS? As far as the GROMACS site is concerned, all the current
versions are unstable. If no stable GPU version is available, when
would one be released?

Another query: when I was compiling the

gromacs-4.5-GPU-beta2_linux-X86_64.tar.gz (Unstable)

GPU version of GROMACS, I got a path error saying that the gcc
compiler is not at the given path, even though the path to all the gcc
compilers remains the same.

Any suggestions would be greatly appreciated.

Thank you,

--
PruthviRaj Bejugam,
Junior Research Fellow,
Lab. No.9 (New Building)
Computational and Systems Biology Lab,
National Centre for Cell Science,
Ganeshkhindh, Pune, INDIA 411007


[gmx-users] REGARDING DOUBT

2012-11-23 Thread Subramaniam Boopathi
Dear sir,
 I gave the input ./pdb2gmx_d -f activesite16-22.pdb -water tip3p -ignh,
but I got an error: "There is a dangling bond at at least one of the
terminal ends and the force field does not provide terminal entries or
files. Edit a .n.tdb and/or .c.tdb file." I have made the necessary
corrections in the .rtp file according to my pdb file.

with regards
S.BOOPATHI
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] REGARDING DOUBT

2012-11-23 Thread Justin Lemkul



On 11/23/12 7:41 AM, Subramaniam Boopathi wrote:




What changes did you make?  Do you have any unnatural amino acids or capping 
groups?  What are your terminal residues?  What force field are you using?


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] Energy minimization with walls

2012-11-23 Thread harshaljain950
Hi all,

I am simulating a box filled only with water, with walls at Z=0 and
Z=Z_box. I am confused about whether the energy minimization process
removes bad contacts between wall atoms and solvent or not.

Any kind of help would be really appreciated.





Re: [gmx-users] Query about GPU version of Gromacs

2012-11-23 Thread Mark Abraham
Hi,

The GPU functionality available in GROMACS 4.5, based on OpenMM, is likely
to be deprecated because we lack the resources to continue supporting it,
as you can see from its limited documentation on our website. I would
advise against attempting to build or use anything other than an official
GROMACS release.

The good news is that the upcoming GROMACS 4.6 release will have native GPU
support for a much larger range of algorithms than has been available
previously. Unfortunately, we can't provide a timeline for that release
yet. Portability, reliability, high performance and rapid time-to-market
tend to be mutually exclusive, I'm afraid!

Mark

On Fri, Nov 23, 2012 at 10:53 AM, Pruthvi Bejugam pruthvi.n...@gmail.comwrote:




Re: [gmx-users] how to repeat simulation correctly?

2012-11-23 Thread Mark Abraham
On Thu, Nov 22, 2012 at 10:13 AM, Felipe Pineda, PhD 
luis.pinedadecas...@lnu.se wrote:

 Would non-deterministic be correct to characterize the nature of MD as
 well? There is also deterministic chaos ...


An MD simulation is normally deterministic, inasmuch as the inputs and
algorithm determine the output, even if the model physics being simulated
is not deterministic (e.g. stochastic elements in the integration). One
needs to be clear about which aspect is of interest. The simulation is
often not reproducible, either, because of run-time effects like dynamic
load balancing or differences in hardware/compilers/libraries.


 And what about the outcome of starting several trajectories from the same
 equilibrated frame as continuation runs, i.e., using its velocities? Could
 they be considered independent and used to extract the valuable statistics
 mentioned in a previous posting?


Of course, sampling a coloured ball from a bag of balls, putting it back in
without letting go, and taking it back out doesn't create a new sample from
the bag. If all the balls were slowly changing colour and you were trying
to sample the distribution of colours, then that in-and-out process might
be a way to create a new sample, but it depends on the timescales
involved...

So yes, you could start from the same set of positions and velocities and
rely on run-time irreproducibility to introduce differences, and chaos to
amplify those over simulation time, in order to reach points from which you
could make statistically independent simulations. Empirically, you'll need
less simulation time to reach that point if you take active steps to make a
significant difference, like changing the velocity of every atom. You need
to re-equilibrate each time you perturb, but that's generally cheaper than
the alternatives.
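The "changing the velocity of every atom" step Mark mentions amounts to redrawing velocities from the Maxwell-Boltzmann distribution, which is what gen_vel = yes does with a new gen_seed. A minimal sketch of that draw and the resulting kinetic temperature, using GROMACS-style units (amu, nm/ps, kJ/mol) and an illustrative water-like mass list, not any actual GROMACS code:

```python
import math
import random

KB = 0.0083144621  # Boltzmann constant in kJ/(mol K), GROMACS-style units

def maxwell_boltzmann_velocities(masses, temperature, rng):
    # each Cartesian component is Gaussian with variance kB*T/m;
    # with m in amu this gives velocities directly in nm/ps
    sigmas = [math.sqrt(KB * temperature / m) for m in masses]
    return [[rng.gauss(0.0, s) for _ in range(3)] for s in sigmas]

def kinetic_temperature(masses, velocities):
    # instantaneous T from equipartition: sum(m*v^2) = 3*N*kB*T
    twice_ekin = sum(m * sum(c * c for c in v)
                     for m, v in zip(masses, velocities))
    return twice_ekin / (3 * len(masses) * KB)

rng = random.Random(2012)                     # a "new gen_seed"
masses = [15.9994 if i % 3 == 0 else 1.008    # O, H, H pattern
          for i in range(30000)]
vel = maxwell_boltzmann_velocities(masses, 300.0, rng)
print(round(kinetic_temperature(masses, vel)))  # close to 300
```

Two runs seeded differently give uncorrelated velocity sets at the same temperature, which is the cheap "active step" toward independent replicas discussed above (a production code would also remove net center-of-mass motion and apply constraints).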

Mark




 Felipe


 On 11/22/2012 10:04 AM, Erik Marklund wrote:

 Stochastic and chaotic are not identical. Chaotic means that differences
 in the initial state will grow exponentially over time.

 Erik
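Erik's point about exponential divergence can be illustrated with any chaotic map. The following toy example (a logistic map, not MD, chosen purely because its chaos is easy to demonstrate) shows two trajectories separated by 1e-10 initially drifting apart by many orders of magnitude within a few dozen steps:

```python
def logistic(x, r=4.0):
    # the logistic map at r = 4 is a standard example of a chaotic system
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-10   # two "trajectories" with a tiny initial difference
gaps = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    gaps.append(abs(x - y))

# the separation grows roughly exponentially until it saturates at O(1)
print(gaps[0], max(gaps[30:]))
```

In MD the same sensitivity means that any perturbation (a different seed, a different summation order, dynamic load balancing) eventually decorrelates two otherwise identical runs.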

 22 nov 2012 kl. 09.52 skrev Felipe Pineda, PhD:

 Won't this same stochastic nature of MD provide different, independent
 trajectories even if restarted from a previous, equilibrated frame without
 resetting velocities, i.e., as a continuation run using the velocities
 recorded in the .gro file of the selected snapshot?

 Felipe

 On 11/22/2012 12:55 AM, Mark Abraham wrote:

 Generating velocities from a new random seed is normally regarded as
 good
 enough. By the time you equilibrate, the chaotic nature of MD starts to
 work for you.

 Mark
 On Nov 21, 2012 1:04 PM, Felipe Pineda, PhD 
 luis.pinedadecas...@lnu.se
 wrote:

  So how would you repeat the (let it be converged) simulation from
 different starting conditions in order to add the valuable statistics
 you mention?

 I think this was Albert's question

 Felipe

 On 11/21/2012 12:41 PM, Mark Abraham wrote:

  If a simulation ensemble doesn't converge reliably over a given time
 scale, then it's not converged over that time scale. Repeating it from
 different starting conditions still adds valuable statistics, but can't
 be a replicate. Independent replicated observations of the same
 phenomenon allow you to assess how likely it is that your set of
 observations reflects the underlying phenomenon. The problem in
 sampling-dependent MD is usually in making an observation (equating a
 converged simulation with an observation).
 Mark

 On Wed, Nov 21, 2012 at 8:12 AM, Albert mailmd2...@gmail.com wrote:

   hello:

 I am quite confused about how to repeat our MD in GROMACS. If we
 start from the same equilibrated .gro file with gen_vel = no in
 md.mdp, we may get exactly the same results, which cannot be treated
 as a reasonable repeated run. However, if we use gen_vel = yes for
 each round of running, sometimes our simulation may not converge on
 our simulated time scale and we may get two results with large
 differences.

 So I am just wondering how to perform repeated MD in GROMACS in a
 correct way, so that our results can be acceptably repeated?

 thank you very much.
 Albert
 --


 ---
 Erik Marklund, PhD
 Dept. of Cell and Molecular Biology, Uppsala University.
 Husargatan 3, Box 596, 751 24 Uppsala, Sweden
 phone: +46 18 471 6688    fax: +46 18 511 755
 er...@xray.bmc.uu.se

[gmx-users] On the usage of SD integrator as the thermostat

2012-11-23 Thread Christopher Neale
I use the SD integrator with tau_t = 1.0 ps for all of my work, including
proteins in aqueous solution or embedded in a lipid membrane.

Any value of tau_t is correct, and none will give you the proper dynamics,
but I find that the diffusion of both water and lipids is quite reasonable
when using tau_t = 1.0 ps.

I arrived at 1.0 ps after some help from Berk Hess on this list. I suggest
that you search out those old posts.

Chris.

-- original message --

In the manual I've found the possibility of using the sd (Langevin
dynamics) integrator as the thermostat.

It's known that the friction coefficient in the Langevin equations is
defined as m/tau_t, so high values of tau_t can be appropriate for
modeling the thermostat without tcoupl. I also know that the friction
coefficient for such a simulation must correspond to the viscosity of the
system. In the GROMACS manual I've found that tau_t = 2.0 ps can be an
appropriate value for such simulations. Is this value suitable for
water-soluble systems only? What tau_t should I use for modeling membrane
proteins in a lipid-water environment, which has a higher viscosity?


Thanks for help,

James
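The relation James quotes, friction coefficient gamma = m/tau_t, is easy to tabulate. A quick sketch in GROMACS-style units (mass in amu, tau_t in ps, so gamma in amu/ps); the element masses and tau_t values are illustrative, not recommendations:

```python
def friction_coefficient(mass_amu, tau_t_ps):
    # Langevin friction per atom: gamma = m / tau_t  (amu/ps)
    return mass_amu / tau_t_ps

# illustrative masses; tau_t values span the range discussed in the thread
for name, m in [("H", 1.008), ("O", 15.9994), ("C", 12.011)]:
    for tau in (0.1, 1.0, 2.0):
        g = friction_coefficient(m, tau)
        print(f"{name}: tau_t = {tau:3.1f} ps -> gamma = {g:7.3f} amu/ps")
```

The point of the table is that gamma scales inversely with tau_t, so a large tau_t gives weak coupling (closer to Newtonian dynamics), while a small tau_t gives strong friction that damps diffusion, which is why Chris's empirical choice of 1.0 ps trades off thermostat strength against realistic transport.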


Re: [gmx-users] strange lincs warning with version 4.6

2012-11-23 Thread Szilárd Páll
Hi,

On Fri, Nov 23, 2012 at 9:40 AM, sebastian 
sebastian.wa...@physik.uni-freiburg.de wrote:

 Dear GROMACS users,

 I installed the git gromacs VERSION 4.6-dev-20121117-7a330e6-dirty on my
 local desktop


Watch out: the dirty version suffix means you have changed something in the
source.


 (2*GTX 670 + i7) and everything works as smooth as possible. The outcomes
 are very reasonable and match the outcome of the 4.5.5 version without GPU
 acceleration. On our


What does "outcome" mean? If that means performance, then something is
wrong: you should see a considerable performance increase (PME, non-bonded,
and bonded interactions have all gotten a lot faster).


 cluster (M2090 + 2*Xeon X5650) I installed VERSION
 4.6-dev-20121120-0290409. Using the same .tpr file used for runs with my
 desktop, I get LINCS warnings that the water molecules can't be settled.


The group kernels have not stabilized yet and there have been some fixes
lately. Could you please get the latest version and check again?

Additionally, you could try running our regression test suite
(git.gromacs.org/regressiontests) to see if at least the tests pass with
the binaries you compiled.

Cheers,
--
Szilárd





[gmx-users] Different average H bonds with different g_hbond releases

2012-11-23 Thread Luigi CAVALLO
 

Hi,

we have a .xtc and a .tpr file. We were interested in the average number of
H-bonds in the last 10 ns of a 60 ns long trajectory. We ran the analysis
as g_hbond -f traj1_0-60ns.xtc -s topol.tpr -b 5 -num hbond.xvg. We are
puzzled by getting a different number depending on the g_hbond release.

Release 4.5.4: Average number of hbonds per timeframe 163.620 out of 118112 possible

Release 4.5.5: Average number of hbonds per timeframe 168.168 out of 118112 possible

Looking at the hbond.xvg file, the numbers of H-bonds in each frame are
clearly different between the two releases. How is this possible? We
checked single versus double precision g_hbond: same behavior. We checked
that the initial parts of the output, i.e. all the various g_hbond
defaults, are the same. We tested different computers and compilations:
same behavior.

The topology and the MD run were done with release 4.5.4, if this is
relevant information.

Thanks,

Luigi
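For a sanity check independent of the g_hbond binary, the per-frame counts in hbond.xvg can be averaged with a few lines of Python. This sketch assumes the usual xmgrace-style .xvg layout ('#' comments and '@' directives, then a time column and a count column); the sample data below is mock data, not Luigi's output:

```python
def average_xvg_column(lines, col=1):
    # skip xmgrace headers ('#' comments, '@' directives), average column `col`
    values = [float(line.split()[col])
              for line in lines
              if line.strip() and not line.lstrip().startswith(("#", "@"))]
    return sum(values) / len(values)

sample = """# g_hbond output (mock data)
@    title "Hydrogen Bonds"
0.0   160
10.0  165
20.0  170
"""
print(average_xvg_column(sample.splitlines()))  # 165.0
```

Comparing per-frame columns from the 4.5.4 and 4.5.5 files this way makes it easy to see whether the two releases differ systematically or only in a few frames.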


Re: [gmx-users] Different average H bonds with different g_hbond releases

2012-11-23 Thread Justin Lemkul



On 11/23/12 5:23 PM, Luigi CAVALLO wrote:






There was a bug that was fixed in May 2011 wherein 4.5.4 reported too few 
hydrogen bonds.


commit 91a481fad7ef0d87a4f8b2cb633c9dc40644350c
Author: Erik Marklund er...@anfinsen.bmc.uu.se
Date:   Tue May 10 14:37:10 2011 +0200

Fixed long standing bug where the merging resulted in too few hbonds.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




Re: [gmx-users] Gromacs 4.6 segmentation fault with mdrun

2012-11-23 Thread Raf Ponsaerts
Hi Szilárd and Roland,

Thanks for the clear explanation! 

I will compile release-4-6 (instead of the nbnxn_hybrid_acc branch) and
do some further testing in a few weeks, since I'm currently using the
machine for production runs with gmx-4.5.5.

Thanks for your time and effort!

regards,

raf

On Thu, 2012-11-22 at 00:23 +0100, Szilárd Páll wrote:
 Roland,
 
 
 He explicitly stated that he is using 20da718, which is also from the
 nbnxn_hybrid_acc branch.
 
 Raf, as Roland said, get release-4-6 and try again!
 
 
 
 
 There's an important thing to mention: your hardware configuration is
 probably quite imbalanced and the default settings are certainly not
 the best to run with: two MPI processes/threads with 24 OpenMP threads
 + a GPU each. GROMACS works best with balanced hardware configuration
 and yours is certainly not balanced, the GPUs will not be able to keep
 up with 64 CPU cores.
 
 
 Regarding the run configuration: most importantly, in most cases you
 should avoid running a group of OpenMP threads across sockets (except
 on Intel with <=12-16 threads). On these Opterons, running OpenMP on at
 most half a CPU is recommended (the CPUs are in reality two CPU dies
 bolted together), and in fact you might be better off with even fewer
 threads per MPI process/thread. This means that multiple processes
 will have to share a GPU, which is not optimal and works only with MPI
 in the current version.
 
 
 So to conclude, to get the best performance you should try a few
 combinations:
 
 
 # process 0,1 will use GPU0, process 2,3 GPU1
 
 # this avoids running across sockets, but for aforementioned reasons
 it will still be suboptimal
 mpirun -np 4 mdrun_mpi -gpu_id 0011
 
 
 # process 0,1,2,3 will use GPU0, process 4,5,6,7 GPU1
 
 # this config will probably still be slower than the next one
 mpirun -np 8 mdrun_mpi -gpu_id 1
 
 
 # process 0,1,2,3,4,5,6,7 will use GPU0, process 8,9,10,11,12,13,14,15
 GPU1
 
 # this config will probably still be slower than the next one
 mpirun -np 16 mdrun_mpi -gpu_id 1
 
 
 You should go ahead and try 32 and 64 processes as well; I suspect
 that 2 or 3 threads/process will be the fastest. Depending on what
 system you are simulating, this could lead to load imbalance, but
 that you'll have to see.
 
 
 If it turns out that the "Wait for GPU" time is more than a few
 percent (which will probably be the case), it means that a GTX 580 is
 not fast enough for two of these Opterons. What you can try is running
 in the hybrid mode with -nb gpu_cpu, which might help.
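The -gpu_id strings in the examples above follow a simple pattern: one digit per MPI rank on the node, with rank r assigned GPU r // (n_ranks / n_gpus) so that consecutive ranks share a GPU. A small illustrative helper (my sketch of that arithmetic, not part of GROMACS):

```python
def gpu_id_string(n_ranks, n_gpus):
    # mdrun's -gpu_id takes one digit per MPI rank on the node;
    # consecutive blocks of ranks share a GPU
    per_gpu = n_ranks // n_gpus
    return "".join(str(r // per_gpu) for r in range(n_ranks))

print(gpu_id_string(4, 2))   # 0011
print(gpu_id_string(8, 2))   # 00001111
print(gpu_id_string(16, 2))  # 0000000011111111
```

Combined with total cores / n_ranks OpenMP threads per rank, this reproduces the mappings Szilárd suggests trying on the 2-GPU, 48-core machine.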
 
 
 
 Cheers,
 
 --
 Szilárd
 
 
 On Sat, Nov 17, 2012 at 3:11 AM, Roland Schulz rol...@utk.edu wrote:
 Hi Raf,
 
 which version of Gromacs did you use? If you used branch
 nbnxn_hybrid_acc, please use branch release-4-6 instead and see
 whether that fixes your issue. If not, please open a bug and upload
 your log file and your tpr.
 
 Roland
 
 
 On Thu, Nov 15, 2012 at 5:13 PM, Raf Ponsaerts 
 raf.ponsae...@med.kuleuven.be wrote:
 
  Hi Szilárd,
 
  I assume I get the same segmentation fault error as Sebastian (don't
  shoot if not so). I have 2 NVIDIA GTX 580 cards (and 4x12-core amd64
  Opteron 6174).
 
  in brief :
  Program received signal SIGSEGV, Segmentation fault.
  [Switching to Thread 0x7fffc07f8700 (LWP 32035)]
  0x761de301 in nbnxn_make_pairlist.omp_fn.2 ()
  from /usr/local/gromacs/bin/../lib/libmd.so.6
 
  Also -nb cpu with Verlet cutoff-scheme results in this
 error...
 
  gcc 4.4.5 (Debian 4.4.5-8), Linux kernel 3.1.1
  CMake 2.8.7
 
  If I attach the mdrun.debug output file to this mail, the mail to the
  list gets bounced by the mailserver (because mdrun.debug > 50 Kb).
 
  Hoping this might help,
 
  regards,
 
  raf
  ===
  compiled code :
  commit 20da7188b18722adcd53088ec30e5f256af62f20
  Author: Szilard Pall pszil...@cbr.su.se
  Date:   Tue Oct 2 00:29:33 2012 +0200
 
  ===
  (gdb) exec mdrun
  (gdb) run -debug 1 -v -s test.tpr
 
  Reading file test.tpr, VERSION 4.6-dev-20121002-20da718
 (single
  precision)
  [New Thread 0x73844700 (LWP 31986)]
  [Thread 0x73844700 (LWP 31986) exited]
  [New Thread 0x73844700 (LWP 31987)]
  [Thread 0x73844700 (LWP 31987) exited]
  Changing nstlist from 10 to 50, rlist from 2 to 2.156
 
  Starting 2 tMPI threads
  [New Thread 0x73844700 (LWP 31992)]
  Using 2 MPI threads
 
  Using 24 OpenMP threads per tMPI thread
 
  2 GPUs 

Re: [gmx-users] Different average H bonds with different g_hbond releases

2012-11-23 Thread Acoot Brett
If we got the results with 4.5.4, what is the method to analyze them with
4.5.5? By a patch, or by installing 4.5.5 to analyze the 4.5.4 results?

Cheers,

Acoot

--- On Sat, 24/11/12, Justin Lemkul jalem...@vt.edu wrote:

 From: Justin Lemkul jalem...@vt.edu
 Subject: Re: [gmx-users] Different average H bonds with different g_hbond 
 releases
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 Received: Saturday, 24 November, 2012, 9:30 AM
 
 
 On 11/23/12 5:23 PM, Luigi CAVALLO wrote:
 
 
  Hi,
 
  we have a .xtc and .tpr file. We were interested in
 the
  average number of H-bonds in the last 10ns of a 60ns
 long trajectory. We
  analyzed the jobs as g_hbond -f traj1_0-60ns.xtc -s
 topol.tpr -b 5
  -num hbond.xvg. We are displaced by having a different
 number depending
  on the g_hbond release.
 
  Release 4.5.4 : Average number of hbonds per
  timeframe 163.620 out of 118112 possible
 
  Release 4.5.5 : Average
  number of hbonds per timeframe 168.168 out of 118112
 possible
 
  Looking
  at the hbond.xvg file, the number of H-bonds in each
 frame are clearly
  different between the two releases. How is this
 possible ? We checked
  single versus double precision g_hbonds, same behavior.
 We checked that
  the initial part of the output, i.e. all the various
 g_hbond defaults,
  they are the same. We tested different computers and
 compilations, same
  behavior.
 
  The topology and the md run were done with release
 4.5.4 if
  this could be a relevant information.
 
 
 There was a bug that was fixed in May 2011 wherein 4.5.4
 reported too few 
 hydrogen bonds.
 
 commit 91a481fad7ef0d87a4f8b2cb633c9dc40644350c
 Author: Erik Marklund er...@anfinsen.bmc.uu.se
 Date:   Tue May 10 14:37:10 2011 +0200
 
      Fixed long standing bug where the merging resulted in too few hbonds.
 
 
 -Justin
 
 -- 
 
 
 Justin A. Lemkul, Ph.D.
 Research Scientist
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
 
 
 -- 
 gmx-users mailing list    gmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search
 before posting!
 * Please don't post (un)subscribe requests to the list. Use
 the 
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists



Re: [gmx-users] Different average H bonds with different g_hbond releases

2012-11-23 Thread David van der Spoel

On 2012-11-24 05:41, Acoot Brett wrote:

If we got the results with 4.5.4, what is the method to analyze them with 4.5.5? 
By a patch, or by installing 4.5.5 to analyze the 4.5.4 results?

In practice there is no problem with having several GROMACS versions 
installed. It is generally not recommended to switch the mdrun version 
during a project - unless there are known issues - but for analysis this 
is less critical. In this case 4.5.5 should be used.
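Once both versions' g_hbond outputs exist, the per-frame discrepancy can be inspected directly rather than comparing only the averages. A sketch, assuming two hypothetical files hbond_454.xvg and hbond_455.xvg with identical header layout and matching time columns:

```shell
#!/bin/sh
# Compare per-frame H-bond counts from two g_hbond runs (file names
# are placeholders for the 4.5.4 and 4.5.5 outputs). Assumes both
# files cover the same frames in the same order, with identical
# xvg header lines, so paste aligns them row by row.
paste hbond_454.xvg hbond_455.xvg | awk '
    /^[#@]/ { next }         # skip xvg header/legend rows
    $2 != $4 { diff++ }      # columns: t1 count1 t2 count2
    { n++ }
    END { printf "%d of %d frames differ\n", diff, n }
'
```

This pinpoints whether the two releases disagree on a few frames or systematically across the trajectory.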





--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.
sp...@xray.bmc.uu.se    http://folding.bmc.uu.se