[gmx-users] Fatal Error: Residue 'DMP' not found in residue topology database

2013-09-20 Thread Santhosh Kumar Nagarajan
Hi guys, the error I'm getting is as follows:

All occupancies are one
Opening force field file /usr/local/gromacs/share/gromacs/top/oplsaa.ff/atomtypes.atp
Atomtype 1
Reading residue database... (oplsaa)
Opening force field file /usr/local/gromacs/share/gromacs/top/oplsaa.ff/aminoacids.rtp
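Output like this typically comes from pdb2gmx when the input structure contains a residue (here DMP) that the selected force field's .rtp database does not define. A typical invocation that triggers it, with placeholder file names rather than the poster's actual files, would be:

pdb2gmx -f system.pdb -o processed.gro -p topol.top -ff oplsaa -water tip4p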

[gmx-users] RE: MPI runs on a local computer

2013-09-20 Thread Xu, Jianqing
Hi, It looks like my questions may have been too detailed. I hope someone could give some suggestions. If there is a more appropriate list where I should ask these questions, I would appreciate it if anyone could let me know. Thanks again, Jianqing -Original Message- From:

Re: [gmx-users] MPI runs on a local computer

2013-09-20 Thread Carsten Kutzner
Hi Jianqing, On Sep 19, 2013, at 2:48 PM, Xu, Jianqing x...@medimmune.com wrote: Say I have a local desktop with 16 cores. If I just want to run jobs on one computer or a single node (but multiple cores), I understand that I don't have to install and use OpenMPI, as Gromacs has its own
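For a single 16-core machine with a thread-MPI build of Gromacs, a run can indeed be started without any external MPI library. A minimal sketch, with -deffnm md-run-1 standing in for whatever the actual run files are called:

mdrun -nt 16 -deffnm md-run-1

Here -nt caps the total number of threads mdrun may use; if it is omitted, mdrun picks a thread count from the available hardware automatically.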

Re: [gmx-users] Fatal Error: Residue 'DMP' not found in residue topology database

2013-09-20 Thread Justin Lemkul
On 9/20/13 2:28 AM, Santhosh Kumar Nagarajan wrote: Hi guys, the error I'm getting is as follows:

All occupancies are one
Opening force field file /usr/local/gromacs/share/gromacs/top/oplsaa.ff/atomtypes.atp
Atomtype 1
Reading residue database... (oplsaa)
Opening force field file

[gmx-users] Re: grompp for minimization: note warning

2013-09-20 Thread shahab shariati
Dear Tsjerk, Thanks for your reply. Before correcting the gro file, I knew that the gro file has a fixed format, and I did the correction very carefully. Part of the gro file before and after correction is as follows: - before:
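For reference, coordinate lines in a .gro file follow the fixed-column C format %5d%-5s%5s%5d%8.3f%8.3f%8.3f (residue number, residue name, atom name, atom number, then x/y/z in nm). A generic example line, not taken from the poster's file, would look like:

    1DPPC    C1    1   1.000   2.000   3.000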

Re: [gmx-users] MPI runs on a local computer

2013-09-20 Thread Mark Abraham
On Thu, Sep 19, 2013 at 2:48 PM, Xu, Jianqing x...@medimmune.com wrote: Dear all, I am learning about the parallelization issues from the instructions on the Gromacs website. I think I have a rough understanding of MPI, thread-MPI and OpenMP, but I hope to get some advice about a correct way to run

Re: [gmx-users] Re: grompp for minimization: note warning

2013-09-20 Thread Mark Abraham
The UNIX tool diff is your friend for comparing files. On Fri, Sep 20, 2013 at 1:53 PM, shahab shariati shahab.shari...@gmail.com wrote: Dear Tsjerk, Thanks for your reply. Before correcting the gro file, I knew that the gro file has a fixed format, and I did the correction very carefully. Part of
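A minimal sketch of that suggestion, with before.gro and after.gro standing in for the poster's actual files:

diff before.gro after.gro

Any line whose column widths were disturbed during the hand-editing will show up directly in the output.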

Re: [gmx-users] Re: Charmm 36 forcefield with verlet cut-off scheme

2013-09-20 Thread Mark Abraham
Note that the group scheme does not reproduce the (AFAIK unpublished) CHARMM switching scheme, either. Mark On Fri, Sep 20, 2013 at 4:26 AM, Justin Lemkul jalem...@vt.edu wrote: On 9/19/13 9:55 PM, akk5r wrote: Thanks Justin. I was told that vdwtype = switch was an essential component
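For context, the group-scheme mdp settings commonly quoted for CHARMM36 in Gromacs at the time look like the lines below. This is only an illustration of the switched-vdW setup under discussion, not necessarily what Justin recommends in the truncated reply above:

cutoff-scheme = group
vdwtype       = switch
rvdw-switch   = 1.0
rvdw          = 1.2
rlist         = 1.2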

[gmx-users] g_covar average.pdb calculation

2013-09-20 Thread Deniz Aydin
Dear All, I would like to get information on how g_covar calculates the average structure file (average.pdb). My aim was actually to get a covariance matrix (deltaR*deltaR), so I started off by writing my own code. I use the MDAnalysis package, so I give psf and traj files as input and I generate
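In brief, g_covar builds the covariance matrix of atomic positions, C_ij = <(x_i - <x_i>)*(x_j - <x_j>)>, where the angle brackets denote averages over the trajectory frames (after least-squares fitting of each frame, unless fitting is switched off), and the average structure it writes out is just those frame-averaged coordinates <x>. The notation here is a paraphrase of the standard definition rather than a quote from the Gromacs documentation.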

Re: [gmx-users] g_covar average.pdb calculation

2013-09-20 Thread Tsjerk Wassenaar
Hi Deniz, The option -ref/-noref is not what you think it is. You want to use -nofit. Cheers, Tsjerk On Fri, Sep 20, 2013 at 2:26 PM, Deniz Aydin denizay...@ku.edu.tr wrote: Dear All, I would like to get information on how g_covar calculates the average structure file (average.pdb). My
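A minimal sketch of a run that skips the fit, with traj.xtc and topol.tpr as placeholder names for the actual trajectory and run input files:

g_covar -f traj.xtc -s topol.tpr -av average.pdb -nofit

With -nofit the frames are used as they are, so the covariance (and the average structure written with -av) is computed in the raw trajectory frame rather than after superposition.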

[gmx-users] Significant slowdown in 4.6? (4.6.3)

2013-09-20 Thread Jonathan Saboury
I have an Intel i7-2630QM CPU @ 2.00GHz on my laptop with 4.6.3 installed, and a desktop with an i3-3220 with 4.5.5 installed. I am running the same energy minimization on each of these machines. My desktop takes a few seconds; my laptop takes hours. This doesn't make much sense, because benchmarks

[gmx-users] Re: Significant slowdown in 4.6? (4.6.3)

2013-09-20 Thread Jonathan Saboury
Figured out the problem. For some reason one core is about 90% occupied by the system. If I run it with 6 threads it runs fast. I never experienced this on Linux though, very curious. Sorry if I wasted your time. -Jonathan Saboury On Fri, Sep 20, 2013 at 7:58 AM, Jonathan Saboury
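For anyone hitting the same symptom, the thread count can be capped so that the busy core is left free; a minimal sketch, with -deffnm em as a placeholder for the actual energy-minimization run files:

mdrun -nt 6 -deffnm em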

[gmx-users] Minimum distance periodic images, protein simulation

2013-09-20 Thread Arun Sharma
Hello, I ran a 100-ns long simulation of a small protein (trp-cage) at an elevated temperature. I analysed the distance between periodic images using

g_mindist -f md-run-1-noPBC.xtc -s md-run-1.tpr -n index.ndx -od mindist.xvg -pi

The output shows that there are situations when the closest

[gmx-users] Broken lipid molecules

2013-09-20 Thread Rama
Hi, At the end of an MD run, the lipid molecules in a membrane-protein system are broken. When I load the .gro and .trr files into VMD to watch the MD simulation, the lipids appear broken at the periodic boundaries. I tried to fix it with trjconv -pbc nojump, but the output came out with only 2 frames, whereas initially there were 1500 frames.
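A commonly used alternative for visualization is to make molecules whole rather than unwrapping their trajectories; a sketch is below, with placeholder file names, and it is not necessarily what the reply that follows goes on to recommend:

trjconv -f md.trr -s md.tpr -pbc mol -o md-whole.xtc

(-pbc whole, which only rejoins broken molecules without shifting them, is another option worth trying.)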

Re: [gmx-users] Broken lipid molecules

2013-09-20 Thread Justin Lemkul
On 9/20/13 5:21 PM, Rama wrote: Hi, At the end of an MD run, the lipid molecules in a membrane-protein system are broken. When I load the .gro and .trr files into VMD to watch the MD simulation, the lipids appear broken at the periodic boundaries. I tried to fix it with trjconv -pbc nojump, but the output came out with only 2

Re: [gmx-users] Minimum distance periodic images, protein simulation

2013-09-20 Thread Justin Lemkul
On 9/20/13 4:11 PM, Arun Sharma wrote: Hello, I ran a 100-ns long simulation of a small protein (trp-cage) at an elevated temperature. I analysed the distance between periodic images using

g_mindist -f md-run-1-noPBC.xtc -s md-run-1.tpr -n index.ndx -od mindist.xvg -pi

The output shows that