[gmx-users] How can I create the OPLS-AA topology file for the ligand

2009-06-11 Thread Ms. Aswathy S
Hi, I would like to do a ligand-receptor simulation using the OPLS-AA force field. I suppose that using the options in Gromacs I can create the itp files for the protein only. But how can I create the OPLS-AA topology file for the ligand? I think the PRODRG server only creates the Gromacs force field. Please co
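For readers wondering what such a file looks like: a minimal OPLS-AA-style ligand .itp has roughly the shape sketched below. The molecule name LIG and the two example atoms are placeholders only (opls_145 and opls_146 are the stock aromatic CA/HA types); this is not the output of any particular parameterization tool.

    [ moleculetype ]
    ; name   nrexcl
    LIG      3

    [ atoms ]
    ;  nr  type      resnr  residue  atom  cgnr  charge    mass
        1  opls_145  1      LIG      C1    1     -0.115    12.011
        2  opls_146  1      LIG      H1    1      0.115     1.008

    [ bonds ]
    ;  ai  aj  funct
        1   2  1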

Re: [gmx-users] Re: Issues regarding exclusions and Charge group distribution

2009-06-11 Thread Mark Abraham
Manik Mayur wrote: On Fri, Jun 12, 2009 at 8:46 AM, Mark Abraham wrote: Manik Mayur wrote: Hi, The last mail supposedly bounced from the server due to attachments, so please excuse the repetition if you have already received it. I a

Re: [gmx-users] Re: Issues regarding exclusions and Charge group distribution

2009-06-11 Thread Manik Mayur
On Fri, Jun 12, 2009 at 8:46 AM, Mark Abraham wrote: > Manik Mayur wrote: > >> Hi, >> >> The last mail supposedly bounced from the server due to attachments. So >> please excuse in case you find this as a repetition. >> >> I am trying to simulate a system where I want to exclude all nb >> interact

Re: [gmx-users] Re: Issues regarding exclusions and Charge group distribution

2009-06-11 Thread Mark Abraham
Manik Mayur wrote: Hi, The last mail supposedly bounced from the server due to attachments, so please excuse the repetition if you have already received it. I am trying to simulate a system where I want to exclude all nb interactions between frozen groups. So I followed the manual and defined them i

Re: [gmx-users] Re: fatal error when using Martini CG force field

2009-06-11 Thread Justin A. Lemkul
lammps lammps wrote: I use the gmx4.04 version. The detailed error information is as follows. Maybe there is a bug in the domain decomposition method, because someone met this problem before. Not all bonded interactions have been properly assigned to the domain d

Re: [gmx-users] Problem in Martini simulation with gromacs version 4.0.4 works fine with gromacs 3.3.1 error G96Angle of 2395 missing

2009-06-11 Thread lammps lammps
I met the same problem. Have you dealt with it? What is the matter? Thanks in advance. -- wende

[gmx-users] Re: fatal error when using Martini CG force field

2009-06-11 Thread lammps lammps
I use the gmx4.04 version. The detailed error information is as follows. Maybe there is a bug in the domain decomposition method, because someone met this problem before. Not all bonded interactions have been properly assigned to the domain decomposition cells A list o

[gmx-users] fatal error when using Martini CG force field

2009-06-11 Thread lammps lammps
When I use the Martini CG force field to run a simulation with the example from the Martini website, there are many warnings when grompp processes the mdp file: -- WARNING 4 [file mem16.top, line 41]: For proper thermostat integration tau_t (0.1) should be more than an order of magnitude larger than
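For context, the warning concerns the ratio of tau_t to the time step. A hedged illustration of settings that avoid it, with values typical of Martini tutorials rather than taken from the poster's mem16 system:

    integrator  = md
    dt          = 0.020     ; ps; Martini setups commonly use 20-40 fs steps
    tcoupl      = Berendsen
    tc-grps     = System
    tau_t       = 1.0       ; ps; well over an order of magnitude larger than dt
    ref_t       = 320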

[gmx-users] Form of 10-4 Wall Potential Function.

2009-06-11 Thread ttrudeau
Could someone clarify the form of the 10-4 potential function used in GROMACS walls? Is it really 10-4, and not 10-4-3? We have tried to create our own 10-4 potential using tables but it doesn't match the results of the built-in GROMACS wall type.
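For reference, the textbook 10-4 potential obtained by integrating a 12-6 Lennard-Jones interaction over a single plane of wall atoms with surface density $\rho_s$ is

$$ V(z) = 2\pi\varepsilon\rho_s\sigma^2\left[\tfrac{2}{5}\left(\frac{\sigma}{z}\right)^{10} - \left(\frac{\sigma}{z}\right)^{4}\right], $$

while a 10-4-3 (Steele-type) wall adds a further attractive $z^{-3}$ term from the semi-infinite slab below the surface layer. This is the generic form only; whether the built-in GROMACS 10-4 wall uses exactly this prefactor is precisely the question posed above.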

Re: [gmx-users] gmxtest

2009-06-11 Thread Justin A. Lemkul
Jones de Andrade wrote: Hi all. Sorry for waking up this subject again, but it raises a question for me: is it gmx 4 or gmx 3.3 that has the bug? And what exactly does the bug concern: the Buckingham interaction, 1-4 interactions, or Buckingham 1-4 interactions? http://bugzilla.gromacs.org/sho

Re: [gmx-users] gmxtest

2009-06-11 Thread Jones de Andrade
Hi all. Sorry for waking up this subject again, but it raises a question for me: is it gmx 4 or gmx 3.3 that has the bug? And what exactly does the bug concern: the Buckingham interaction, 1-4 interactions, or Buckingham 1-4 interactions? Thanks a lot in advance. Sincerely yours, Jones On Wed, May

Re: Fwd: [its-cluster-admin] Fwd: [gmx-users] gmx_blast error when attempting to run in parallel

2009-06-11 Thread Roland Schulz
On Thu, Jun 11, 2009 at 3:20 PM, Erik Marklund wrote: > > I disagree. One should always sift through the configure-/make-output, > obviously (and therefore a tee would be better than the redirection above), > but I find scripting is superior to manual compilation. It makes it easy to > see exactl

Re: Fwd: [its-cluster-admin] Fwd: [gmx-users] gmx_blast error when attempting to run in parallel

2009-06-11 Thread Erik Marklund
Justin A. Lemkul wrote: Erik Marklund wrote: Your admin compiled a different version than the one you're using. Look closely at the path. /Erik export CPPFLAGS="/usr/local/fftw-3.1.2/include" In addition, this is not correct. It should read: export CPPFLAGS="-I/usr/local/fftw-3.1.2/incl

Re: Fwd: [its-cluster-admin] Fwd: [gmx-users] gmx_blast error when attempting to run in parallel

2009-06-11 Thread Justin A. Lemkul
Erik Marklund wrote: Your admin compiled a different version than the one you're using. Look closely at the path. /Erik export CPPFLAGS="/usr/local/fftw-3.1.2/include" In addition, this is not correct. It should read: export CPPFLAGS="-I/usr/local/fftw-3.1.2/include" ./configure --prefi
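A hedged sketch of a GROMACS 4.0.x MPI build along the lines of the correction above; the FFTW location and the install prefix are illustrative, not the poster's actual cluster paths:

    export CPPFLAGS="-I/usr/local/fftw-3.1.2/include"
    export LDFLAGS="-L/usr/local/fftw-3.1.2/lib"
    ./configure --prefix=/usr/local/gromacs --enable-mpi --program-suffix=_mpi
    make
    make install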

Re: Fwd: [its-cluster-admin] Fwd: [gmx-users] gmx_blast error when attempting to run in parallel

2009-06-11 Thread Erik Marklund
Your admin compiled a different version than the one you're using. Look closely at the path. /Erik Sashank Karri wrote: Hi, Our cluster admin argues that gromacs was in fact compiled with MPI. Below is the script he used to compile gromacs. Do you see any errors in the script? Are there a

Fwd: [its-cluster-admin] Fwd: [gmx-users] gmx_blast error when attempting to run in parallel

2009-06-11 Thread Sashank Karri
Hi, Our cluster admin argues that gromacs was in fact compiled with MPI. Below is the script he used to compile gromacs. Do you see any errors in the script? Are there any other possible reasons why I'm getting this gmx_blast error? -Sashank -- Forwarded message -- From: Tula

[gmx-users] Announcement: Structure-based model webtool for Gromacs

2009-06-11 Thread Paul Whitford
Gromacs Users, The Onuchic Group at UCSD has developed a webtool and tutorial for using All-atom (and C-alpha) Structure-based Hamiltonians (aka Go Models) in Gromacs. http://sbm.ucsd.edu/ All you need to do is provide a pdb file with protein, RNA and/or DNA chains (there are also some supp

Re: [gmx-users] Alternative TRP residue

2009-06-11 Thread Lucio Ricardo Montero Valenzuela
The acpypi program can calculate the topology files for your molecule. I have done it for non-polymeric molecules, but I don't know whether extra work is needed for polymeric molecules like amino acids. On Thu, 2009-06-11 at 17:09 +0200, abelius wrote: > Dear All, > > I have AmberFF paramet

[gmx-users] Re: Alternative TRP residue

2009-06-11 Thread Gerrit Groenhof
What mdp file are you using? You haven't given us any diagnostic information. The problem could be that you're not running an MPI GROMACS (show us your configure line, your mdrun command line and the top 50 lines of your .

Re: [gmx-users] Alternative TRP residue

2009-06-11 Thread David van der Spoel
abelius wrote: Dear All, I have AmberFF parameters for a TRP excited state and I was wondering if it was possible to create an alternative TRP residue for gromacs? Things I've done so far: * Download and install amber99 for gromacs * Add a new residue [ ETRP ] with adapted charges to the amber

Re: [gmx-users] Re: gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread David van der Spoel
Thamu wrote: Hi, I am using OpenMPI; how do I build a hostfile and where do I keep that file? thamu Did you do configure --enable-mpi ???

[gmx-users] Re: Issues regarding exclusions and Charge group distribution

2009-06-11 Thread Manik Mayur
Hi, The last mail supposedly bounced from the server due to attachments, so please excuse the repetition if you have already received it. I am trying to simulate a system where I want to exclude all nb interactions between frozen groups. So I followed the manual and defined them in energygrp_excl. But upo
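For readers following along, a minimal sketch of the mdp lines being described; the group names FROZEN and SOL are placeholders, not the poster's actual index groups:

    freezegrps      = FROZEN
    freezedim       = Y Y Y
    energygrps      = FROZEN SOL
    energygrp_excl  = FROZEN FROZEN   ; drop nonbonded interactions within the frozen group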

[gmx-users] Alternative TRP residue

2009-06-11 Thread abelius
Dear All, I have AmberFF parameters for a TRP excited state and I was wondering if it was possible to create an alternative TRP residue for gromacs? Things I've done so far: * Download and install amber99 for gromacs * Add a new residue [ ETRP ] with adapted charges to the amber99.rtp file * Ad
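Schematically, a new .rtp entry has the shape sketched below; the atom lines shown are placeholders modelled on the stock TRP backbone, not the poster's excited-state charges:

    [ ETRP ]
     [ atoms ]
    ;  name   type   charge    chargegroup
        N     N      -0.4157   0     ; backbone atoms as in the standard TRP entry
        ...                          ; ring atoms carry the adapted charges
     [ bonds ]
        N     H
        N     CA
     [ impropers ]
       -C     CA     N     H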

[gmx-users] Re: gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Thamu
Hi, I am using OpenMPI; how do I build a hostfile and where do I keep that file? thamu

Re: [gmx-users] Re: gmx-users Digest, Vol 62, Issue 57

2009-06-11 Thread Justin A. Lemkul
> Initial temperature: 299.838 K > Recently I successfully installed the gromacs-4.0.5 MPI version. > I could run on 8 CPUs, but the speed is very slow. > Total number of atoms in the system is 78424. > w

[gmx-users] Re: gmx-users Digest, Vol 62, Issue 57

2009-06-11 Thread Thamu
SHAKE and RATTLE Algorithms for Rigid Water Models, J. Comp. Chem. 13 (1992) pp. 952-962 --- Thank You --- Center of mass motion removal mode is Linear We have the following groups for center of mass motion removal: 0: rest

RE: [gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread jimkress_58
Mark is correct. You should see node information at the top of the md log file if you are truly running in parallel. Apparently the default host (or machines) file (which contains the list of available nodes on your cluster) has not been / is not being populated correctly. You can build your own
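A hedged illustration of an OpenMPI hostfile and launch line; the host names, slot counts, and file location are made up for the example:

    $ cat ~/hostfile
    node01 slots=4
    node02 slots=4
    $ mpirun -np 8 --hostfile ~/hostfile ~/software/bin/mdrun_mpi -deffnm md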

Re: [gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Mark Abraham
On 06/11/09, Thamu wrote: > > Hi Mark, > > The top md.log is below. The mdrun command was "mpirun -np 8 > ~/software/bin/mdrun_mpi -deffnm md" In my experience, correctly-configured MPI gromacs running in parallel reports information about the number of nodes and the identity of the node writ

[gmx-users] gromacs-4.0.5 parallel run in 8 cpu: slow speed

2009-06-11 Thread Thamu
Hi Mark, The top md.log is below. The mdrun command was "mpirun -np 8 ~/software/bin/mdrun_mpi -deffnm md" :-) G R O M A C S (-: GROup of MAchos and Cynical Suckers :-) VERSION 4.0.5 (-: Written by Dav

Re: [gmx-users] gmx_blast error when attempting to run in parallel

2009-06-11 Thread Justin A. Lemkul
Sashank Karri wrote: Hi, I'm running gromacs-4.0.3 on a cluster. I am testing gromacs on it. We are currently getting this error when I run with 4 nodes with one dedicated to PME calculations. Back Off! I just backed up md.log to ./#md.log.5# Reading file ionsol_minim96-1.tpr, VERSION 4.0

Re: [gmx-users] gmx_blast error when attempting to run in parallel

2009-06-11 Thread David van der Spoel
Sashank Karri wrote: Hi, I'm running gromacs-4.0.3 on a cluster. I am testing gromacs on it. We are currently getting this error when I run with 4 nodes with one dedicated to PME calculations. Back Off! I just backed up md.log to ./#md.log.5# Reading file ionsol_minim96-1.tpr, VERSION 4.0.3

[gmx-users] gmx_blast error when attempting to run in parallel

2009-06-11 Thread Sashank Karri
Hi, I'm running gromacs-4.0.3 on a cluster. I am testing gromacs on it. We are currently getting this error when I run with 4 nodes with one dedicated to PME calculations. Back Off! I just backed up md.log to ./#md.log.5# Reading file ionsol_minim96-1.tpr, VERSION 4.0.3 (single precision) -

[gmx-users] Implicit walls.

2009-06-11 Thread Yves Dubief
Hi, I am using the implicit wall method available in 4.0.x and I cannot seem to find any published work on the method, aside from the manual. I think I have an overall understanding of the method, but I am not confident about some key details: - Is the force generated from the wall derived f
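For reference, the wall-related mdp options in 4.0.x look roughly like the sketch below; the wall atom type name and the density values are placeholders, not a recommendation:

    nwall            = 2
    wall_type        = 10-4        ; alternatives: 9-3, 12-6, table
    wall_atomtype    = WALL WALL   ; placeholder names; must exist as atom types in the topology
    wall_density     = 5 5         ; density of wall atoms, used by the 9-3 and 10-4 types
    wall_r_linpot    = -1
    ewald_geometry   = 3dc
    wall_ewald_zfac  = 3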