[gmx-users] pdb2gmx pairs clarification

2012-01-11 Thread Richard Broadbent
Dear All, I've been reading the manual (v4.5.4) and could use a small amount of clarification. On p117 it says: pdb2gmx sets the number of exclusions to 3, which means that interactions between atoms connected by at most 3 bonds are excluded. Pair interactions are generated for all pairs of atoms
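A minimal sketch of how those two pieces fit together in a topology, for a hypothetical four-atom chain 1-2-3-4 (the moleculetype name, atom numbering, and the single pair shown are placeholders, not taken from the thread):

    [ moleculetype ]
    ; name    nrexcl
      MOL     3          ; exclude non-bonded interactions between atoms up to 3 bonds apart

    [ pairs ]
    ; ai   aj   funct    ; 1-4 pair: atoms 1 and 4 are exactly three bonds apart
       1    4   1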

[gmx-users] COMPASS force field revisited Bond Dihedral and Angle Dihedral cross terms

2012-02-21 Thread Richard Broadbent
Dear All, I am considering conducting a simulation of a polymeric system in gromacs. I would like to use the COMPASS forcefield as it has a complete parameter set for my molecule. I believe the majority of the implementation is simple (though long and fiddly). However, it has Bond-Dihedral and An

Re: [gmx-users] problem

2012-05-11 Thread Richard Broadbent
Dear Anik, > Hi, Am Anik Sen. AM using GROMACS 3.3.2 for one of my work. Is there a particular reason you are using such an old version of gromacs? If not, then switch to the latest version, as there have been many improvements. > > I was trying to run the dynamics for some inorganic me

Re: [gmx-users] Create Bond between new residues

2012-05-11 Thread Richard Broadbent
Here is a simple example (for more complex ones, see the existing residues in an aminoacids.rtp file). Assume the molecule looks like A-B-C, where the -'s are bonds, with residues AAA, BBB, CCC containing the corresponding atoms; then the .rtp should contain something like: [ AAA ] [ atoms ] A typeA c
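A slightly fuller sketch of such an .rtp entry, assuming hypothetical atom types typeA/typeB/typeC and placeholder charges (inter-residue bonds use the usual +/- prefix for an atom in the following/preceding residue):

    [ AAA ]
     [ atoms ]
    ;  name   type    charge   chargegroup
       A      typeA    0.000    0
     [ bonds ]
       A     +B                 ; bond from A to atom B of the following residue

    [ BBB ]
     [ atoms ]
       B      typeB    0.000    0
     [ bonds ]
       B     +C                 ; bond from B to atom C of the following residue

    [ CCC ]
     [ atoms ]
       C      typeC    0.000    0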

Re: [gmx-users] Two [ dihedrals ] sections in topology

2012-05-15 Thread Richard Broadbent
On Tue, 2012-05-15 at 15:43 +0100, Lara Bunte wrote: > Hi > > You wrote: > > >Two blocks of dihedrals are normal output for pdb2gmx - one for proper > >and one for improper dihedrals. > > > Is there a way to force pdb2gmx that there is only my block with improper > dihedrals in the topology?

Re: [gmx-users] Force Constants and Unit Systems

2012-05-17 Thread Richard Broadbent
Hi Lara, On Thu, 2012-05-17 at 17:58 +0100, Lara Bunte wrote: > Hi > > > >One cannot convert between these units, since kcal/mol is an energy term, and > >kJ/(mol nm^2) is a force constant. > > > This confuses me, because in the paper where that constants are from is > written, I quote: Whi
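For reference, the harmonic forms behind those units (as defined in the GROMACS manual), together with the energy conversion factor:

    V_b(b) = \tfrac{1}{2} k_b (b - b_0)^2, \qquad [k_b] = \mathrm{kJ\,mol^{-1}\,nm^{-2}}
    V_a(\theta) = \tfrac{1}{2} k_\theta (\theta - \theta_0)^2, \qquad [k_\theta] = \mathrm{kJ\,mol^{-1}\,rad^{-2}}
    1\ \mathrm{kcal} = 4.184\ \mathrm{kJ}

So a bond force constant carries units of energy per length squared and cannot be compared directly with a bare energy in kcal/mol without also handling the length (or angle) units.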

Re: [gmx-users] Force Constants and Unit Systems

2012-05-18 Thread Richard Broadbent
On Fri, 2012-05-18 at 09:40 +0100, Lara Bunte wrote: > Hi > > I have two questions left: > > > 1.) > You wrote > > > >If your term in question is an angle potential, then the force constant > >should indeed have units of energy > > Could you please explain this? Why is it here consistent

[gmx-users] Gromacs-4.6-beta3 compile warnings intel-suite 2011 and 2013

2013-01-15 Thread Richard Broadbent
Dear All, I've just installed 4.6-beta3 on my Ubuntu Linux (Intel Xeon [Sandy Bridge]) box using both intel-suite/64/2011.10/319 and intel-suite/64/2013.0/079 with MKL. Using either compiler I received several hundred warnings of types #120, #167, and #556 (see below for examples). I thought

Re: [gmx-users] Gromacs-4.6-beta3 compile warnings intel-suite 2011 and 2013

2013-01-16 Thread Richard Broadbent
13 at 9:34 AM, Richard Broadbent <richard.broadben...@imperial.ac.uk> wrote: Dear All, I've just installed 4.6-beta3 on my Ubuntu Linux (Intel Xeon [Sandy Bridge]) box using both intel-suite/64/2011.10/319 and intel-suite/64/2013.0/079 with MKL. Using either compiler I received sev

[gmx-users] compiling on different architecture than the compute nodes architecture

2013-02-06 Thread Richard Broadbent
Dear All, I would like to compile gromacs 4.6 to run with the correct acceleration on the compute nodes of our local cluster. Some of the nodes have Intel Sandy Bridge, whilst others only have SSE4.1, and some (including the login and single-core job nodes) are still stuck on SSSE3 (gmx would us
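In 4.6 the CPU acceleration is fixed at configure time, so one workable approach is a separate build per instruction set, each with its own install prefix. A sketch (the source path, install prefixes, and the particular set of builds are assumptions, not from the thread):

    # build for the SSE4.1-only nodes
    cmake ../gromacs-4.6 -DGMX_CPU_ACCELERATION=SSE4.1 -DCMAKE_INSTALL_PREFIX=$HOME/gmx-4.6-sse4.1
    make && make install

    # build for the Sandy Bridge nodes
    cmake ../gromacs-4.6 -DGMX_CPU_ACCELERATION=AVX_256 -DCMAKE_INSTALL_PREFIX=$HOME/gmx-4.6-avx256
    make && make install

    # the SSSE3-only nodes would need an SSE2 build, since they lack SSE4.1

The job script then selects the install whose acceleration matches the node it lands on.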

Re: [gmx-users] Gromacs 4.6 crashes in PBS queue system

2013-02-19 Thread Richard Broadbent
Hi Tomek, Gromacs 4.6 uses very different accelerated kernels from 4.5.5. These are hardware-specific, and you must therefore select acceleration appropriate for your hardware. Your login node will automatically select AVX-128-FMA acceleration. However, your compute nodes are considerab

Re: [gmx-users] Problem with gromacs in Cluster

2013-04-25 Thread Richard Broadbent
I generally build a tpr for the whole simulation, then submit one job using a command such as: mpirun -n ${NUM_PROCESSORS} mdrun -deffnm ${NAME} -maxh ${WALL_TIME_IN_HOURS} copy all the files back at the end of the script if necessary, then resubmit it (sending out all the files again if
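A minimal sketch of such a self-resubmitting PBS script (the job name, resource requests, and the unconditional resubmit at the end are placeholders; a real script would check whether the run has actually finished before resubmitting):

    #!/bin/bash
    #PBS -N md_chunk
    #PBS -l walltime=24:00:00
    #PBS -l nodes=4:ppn=16

    cd $PBS_O_WORKDIR

    # -cpi continues from the previous chunk's checkpoint;
    # -maxh makes mdrun write a checkpoint and stop cleanly before the wall time
    mpirun -n ${NUM_PROCESSORS} mdrun -deffnm ${NAME} -cpi ${NAME}.cpt -maxh ${WALL_TIME_IN_HOURS}

    # copy files back here if the scratch space is node-local, then resubmit
    qsub md_chunk.pbs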

Re: [gmx-users] Doubt about the Gromacs versions

2013-04-25 Thread Richard Broadbent
The 4.6.1 release is a more advanced version of gromacs with the latest kernels and features (GPU support, verlet cut-offs etc.). 4.5.7 is a maintenance release for those of us who for whatever reason wish to keep using the older 4.5.x series release. It mainly adds fixes made to the 4.6.x ser

Re: [gmx-users] Re: Using virtual site

2013-05-01 Thread Richard Broadbent
Dear Raju, You haven't added any exclusions to your topology. Therefore, the midpoint is interacting via the Coulomb potential with the carbon and oxygen atoms. If you exclude those interactions, this system will probably run. Richard On 01/05/13 14:16, 라지브간디 wrote: Dear Mark, As per y
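A sketch of what that might look like for a hypothetical C-O pair with a midpoint virtual site M as atom 3 (the atom numbering and the 0.5 placement parameter are assumptions):

    [ virtual_sites2 ]
    ; site  i   j   funct   a
      3     1   2   1       0.5    ; M constructed halfway between C (1) and O (2)

    [ exclusions ]
    ; each atom excludes non-bonded interactions with the other two
      1   2   3
      2   1   3
      3   1   2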

Re: [gmx-users] Periodic Boundary Condition in evaporation of droplets

2013-05-07 Thread Richard Broadbent
If you don't want to simulate your droplet in a perfect vacuum, then in most MD codes you have to use either PBC or walls. There are advantages and disadvantages to both. I'm not an expert, but in my opinion PBC make more physical sense than walls provided the box is sensibly chosen; *however*, I

[gmx-users] Energies in simulation and rerun using different core counts

2012-09-07 Thread Richard Broadbent
Dear All, I've been having some issues with energies with gromacs running on various core counts for a 7469 polymer-in-solvent system, constraining all bonds and running with a 2 fs time step. I used PME-shift (1.05 nm, 1.10 nm), and a shift with the same parameters for the VdW. I am using the O
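For reference, the rerun that recomputes energies from an existing trajectory can be launched along these lines (file names are placeholders):

    mpirun -n ${NUM_PROCESSORS} mdrun -s topol.tpr -rerun traj.trr -deffnm rerun
    g_energy -f rerun.edr -o rerun_energy.xvg    # extract the energy terms for comparison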

Re: [gmx-users] Holes in the solvent!

2012-10-30 Thread Richard Broadbent
Dear Arman, I have never seen holes appear in my solvent when running on 12 cores or fewer, or when running with Langevin coupling or NVE. Running at higher temperatures (700 K-1000 K) did remove the holes from my system; however, the RDFs were inconsistent across varying core counts (peaks and tr

Re: [gmx-users] improper OPLS dihedrals in gromacs

2011-10-19 Thread Richard Broadbent
Dear Jia, >I found a question in 2009 asking which format of improper does >OPLS dihedral gromacs use. I have the same question, is it periodic or >harmonic? If it is periodic, why in its ffbonded.itp file: #define >improper_O_C_X_Y 180.0 43.93200 2 what do they mean? This means that the dihedra
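Restating the periodic form those three numbers parameterise (reading phi_s, k, and the multiplicity n off the #define above):

    V(\phi) = k \left[ 1 + \cos(n\phi - \phi_s) \right], \qquad
    \phi_s = 180^\circ, \quad k = 43.932\ \mathrm{kJ\,mol^{-1}}, \quad n = 2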

Re: [gmx-users] TIP5P calculating the dummy positions

2011-10-21 Thread Richard Broadbent
Dear Pratik, > > > I am trying to create the tip6p itp file. In order to do that, since > it is an overlap of the tip4p and tip5p model (visually) > I am trying to understand the a, b, and c values for the position of > the dummy charge in the tip5p models. > > Below is the part of the script

RE: [gmx-users] TIP5P calculating the dummy positions

2011-10-27 Thread Richard Broadbent
Dear Pratik, > Dear Richard, > > First of all i really appreciate the help. I 've figured out the cross > product part of the equations, but the |A+B| part i still have figured > it out. > > I've been trying to crack this bit but I still can't do it. > > 0.07 * cos (109.47/2) / | xOH1 + xOH2
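One way to write the construction those terms come from, with r_OHi = r_Hi - r_O, theta = 109.47 deg, and an O-to-lone-pair distance of 0.07 nm (a geometric sketch of the TIP5P lone-pair placement, not the literal tip5p.itp entry):

    \mathbf{r}_{L\pm} = \mathbf{r}_O
      - \frac{0.07\cos(\theta/2)}{\left|\mathbf{r}_{OH_1} + \mathbf{r}_{OH_2}\right|}
        \left(\mathbf{r}_{OH_1} + \mathbf{r}_{OH_2}\right)
      \pm \frac{0.07\sin(\theta/2)}{\left|\mathbf{r}_{OH_1} \times \mathbf{r}_{OH_2}\right|}
        \left(\mathbf{r}_{OH_1} \times \mathbf{r}_{OH_2}\right)

so the |A+B| term is just the normalisation of the in-plane (bisector) component, and the cross product supplies the out-of-plane component.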

Re: [gmx-users] remd

2013-07-02 Thread Richard Broadbent
I'm not sure exactly what merging together means; for visualisation I generally use VMD, as it supports gromacs files directly. Your problem might be to do with your cut-off settings: using rlist, rcoulomb, and rvdw set to 0 is not the standard way to do an infinite cut-off; normally you set them to -1, as in the manual

Re: [gmx-users] remd

2013-07-02 Thread Richard Broadbent
On 02/07/13 12:10, Justin Lemkul wrote: On Tue, Jul 2, 2013 at 5:30 AM, Richard Broadbent < richard.broadben...@imperial.ac.uk> wrote: Not sure exactly what merging together means, for visualisation I generally use vmd as this supports gromacs files directly. If I understand cor

Re: [gmx-users] gpu cluster explanation

2013-07-12 Thread Richard Broadbent
On 12/07/13 13:26, Francesco wrote: Hi all, I'm working with a 200K-atom system (protein + explicit water), and after a while using a CPU cluster I had to switch to a GPU cluster. I read both the Acceleration and parallelization and the Gromacs-gpu documentation pages (http://www.gromacs.org/Documentat

Re: [gmx-users] Re: what is sigma in gromacs? the radius of a sphere or the diameter of a sphere?

2013-08-15 Thread Richard Broadbent
Dear Grita, \sigma in gromacs is the value of \sigma in a Lennard-Jones (LJ) potential defined by 4\epsilon[(\sigma/r)^12 - (\sigma/r)^6], where r is the separation between the two point particles, \epsilon is the well depth, and \sigma is a length scale which characterises the interaction bet
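Restated as a display equation, together with the two distances that usually cause the radius-or-diameter confusion:

    V_{\mathrm{LJ}}(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],
    \qquad V_{\mathrm{LJ}}(\sigma) = 0, \qquad r_{\min} = 2^{1/6}\,\sigma

i.e. \sigma is the centre-to-centre separation at which the potential crosses zero, so it behaves like a contact distance between the two particles rather than a radius of either one.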

[gmx-users] intermittent changes in energy drift following simulation restarts in v4.6.1

2013-09-09 Thread Richard Broadbent
Dear All, I've been analysing a series of long (200 ns) NVE simulations (md integrator) on ~93'000-atom systems. I ran the simulations in groups of 3 using the -multi option in gromacs v4.6.1 double precision. Simulations were run with 1 OpenMP thread per MPI process. The simulations were res
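For reference, a restart of such a -multi group looks roughly like this (the binary name, rank count, and file prefix are placeholders, not taken from the thread):

    # three replicas, double precision, one OpenMP thread per MPI rank,
    # each simulation continuing from its own checkpoint (nve0.cpt, nve1.cpt, nve2.cpt)
    mpirun -n 24 mdrun_mpi_d -multi 3 -deffnm nve -cpi -ntomp 1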

Re: [gmx-users] intermittent changes in energy drift following simulation restarts in v4.6.1

2013-09-09 Thread Richard Broadbent
h-bonds lincs_order = 6 lincs_iter = 2 cutoff-scheme = Verlet verlet-buffer-drift = -1 On Mon, Sep 9, 2013 at 4:08 PM, Richard Broadbent wrote: Dear All, I've been analysing a series of long (200 ns) NVE simulations (md integrator) on ~93'000 atom systems I ran the simulation

Re: [gmx-users] Gromacs-4.6 on two Titans GPUs

2013-11-05 Thread Richard Broadbent
Dear James, On 05/11/13 11:16, James Starlight wrote: My suggestions: 1) During compilation using -march=corei7-avx-i I obtained an error that something was not found (sorry, I didn't save the log), so I compiled gromacs without this flag. 2) I get twice the performance using just 1 gpu by means o
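For completeness, a single-node launch that actually uses both GPUs looks roughly like this (the thread counts and file prefix are placeholders):

    # two thread-MPI ranks, one per GPU; -gpu_id maps rank 0 -> GPU 0 and rank 1 -> GPU 1
    mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -deffnm md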

Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-06 Thread Richard Broadbent
Hi Dwey, On 05/11/13 22:00, Dwey Kauffman wrote: Hi Szilard, Thanks for your suggestions. I am indeed aware of this page. On an 8-core AMD with 1 GPU, I am very happy with its performance. See below. My intention is to obtain an even better one because we have multiple nodes. ### 8 core AMD