Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-07 Thread Mark Abraham
First, there is no value in ascribing problems to the hardware if the simulation setup is not yet balanced, or not large enough to provide enough atoms and long enough rlist to saturate the GPUs, etc. Look at the log files and see what complaints mdrun makes about things like PME load balance, and
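For example, a quick way to spot such complaints in the run log (a sketch; the exact wording of the notes varies between GROMACS versions, and md.log is just the default log name):

    # scan the log for load-balancing notes and warnings
    grep -iA2 -E "note|warning|imbalance|load balanc" md.log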

Re: [gmx-users] nose-hoover vs v-rescale in implicit solvent

2013-11-07 Thread Mark Abraham
I think either is correct for practical purposes. Mark On Thu, Nov 7, 2013 at 8:41 AM, Gianluca Interlandi gianl...@u.washington.edu wrote: Does it make more sense to use nose-hoover or v-rescale when running in implicit solvent GBSA? I understand that this might be a matter of opinion.
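For reference, the choice is made with the tcoupl setting; a minimal sketch of the relevant .mdp lines, assuming a single temperature-coupling group:

    tcoupl   = v-rescale      ; stochastic velocity rescaling (Bussi et al.)
    tc-grps  = System
    tau_t    = 0.1
    ref_t    = 300
    ; or, alternatively:
    ; tcoupl = nose-hoover
    ; tau_t  = 1.0            ; Nose-Hoover is usually given a longer coupling time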

Re: [gmx-users] Re: single point calculation with gromacs

2013-11-07 Thread Mark Abraham
On Wed, Nov 6, 2013 at 4:07 PM, fantasticqhl fantastic...@gmail.com wrote: Dear Justin, I am sorry for the late reply. I still can't figure it out. It isn't rocket science - your two .mdp files describe totally different model physics. To compare things, change as few things as necessary to
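For reference, a single-point energy is usually obtained with a zero-step run, changing nothing else in the .mdp (a sketch; file names are hypothetical):

    # set nsteps = 0 in the .mdp, then:
    grompp -f singlepoint.mdp -c conf.gro -p topol.top -o sp.tpr
    mdrun -s sp.tpr -deffnm sp
    g_energy -f sp.edr        # inspect the potential-energy terms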

[gmx-users] Error in Umbrella sampling command

2013-11-07 Thread Arunima Shilpi
Dear Sir, Presently I am working with the example file given in the umbrella sampling tutorial. While running the following command grompp -f npt_umbrella.mdp -c conf0.gro -p topol.top -n index.ndx -o npt0.tpr I got the following error. How do I debug this error? Ignoring obsolete mdp entry

Re: [gmx-users] Error in Umbrella sampling command

2013-11-07 Thread Justin Lemkul
On 11/7/13 6:27 AM, Arunima Shilpi wrote: Dear Sir Presently I am working with the example file as given in the umbrella sampling tutorial. While running the following command grompp -f npt_umbrella.mdp -c conf0.gro -p topol.top -n index.ndx -o npt0.tpr I got the following error. How to

[gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Rajat Desikan
Dear All, Any suggestions? Thank you. -- View this message in context: http://gromacs.5086.x6.nabble.com/CHARMM-mdp-settings-for-GPU-tp5012267p5012316.html

[gmx-users] DSSP output

2013-11-07 Thread Anirban
Hi ALL, Is there any way to get the percentage of each type of secondary structure content of a protein using do_dssp if I supply a single PDB to it? And how do I plot the data of the -sc output from do_dssp? Any suggestion is welcome. Regards, Anirban

Re: [gmx-users] DSSP output

2013-11-07 Thread Justin Lemkul
On 11/7/13 8:24 AM, Anirban wrote: Hi ALL, Is there any way to get the percentage of each secondary structural content of a protein using do_dssp if I supply a single PDB to it? The output of scount.xvg has the percentages, but it's also trivial to do it for one snapshot. The contents of
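For a single structure, something like the following works (a sketch; file names and the dssp path are hypothetical, and do_dssp needs the DSSP environment variable to point at the dssp executable):

    export DSSP=/usr/local/bin/dssp
    do_dssp -f protein.pdb -s protein.pdb -o ss.xpm -sc scount.xvg
    xmgrace scount.xvg        # plot the per-structure counts written by -sc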

[gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-07 Thread ahmed.sajid
Hi, I'm having trouble compiling v 4.6.3 with GPU support using CUDA 5.5.22. The configuration runs okay and I have made sure that I have set paths correctly. I'm getting errors: $ make [ 0%] Building NVCC (Device) object

Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Mark Abraham
Hi, It's not easy to be explicit. CHARMM wasn't parameterized with PME, so the original paper's coulomb settings can be taken with a grain of salt for use with PME - others' success in practice should be a guideline here. The good news is that the default GROMACS PME settings are pretty good for

[gmx-users] installing gromacs 4.6.1 with openmpi

2013-11-07 Thread niloofar niknam
Dear gromacs users, I have installed gromacs 4.6.1 with cmake 2.8.12, fftw 3.3.3 and openmpi-1.6.4 on a single machine with 8 cores (Red Hat Enterprise Linux 6.1). During the openmpi installation (I used make -jN) and also in the gromacs installation (I used the make -j N command), everything seemed ok but

Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-07 Thread Mark Abraham
icc and CUDA is pretty painful. I'd suggest getting latest gcc. Mark On Thu, Nov 7, 2013 at 2:42 PM, ahmed.sa...@stfc.ac.uk wrote: Hi, I'm having trouble compiling v 4.6.3 with GPU support using CUDA 5.5.22. The configuration runs okay and I have made sure that I have set paths
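For reference, a typical out-of-source GPU build with gcc looks something like this (a sketch; the CUDA and install paths are hypothetical):

    cd gromacs-4.6.3 && mkdir build && cd build
    cmake .. -DGMX_GPU=ON \
             -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-5.5 \
             -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ \
             -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-4.6.3
    make -j 8 && make install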

Re: [gmx-users] installing gromacs 4.6.1 with openmpi

2013-11-07 Thread Mark Abraham
Sounds like a non-GROMACS problem. I think you should explore configuring OpenMPI correctly, and show you can run an MPI test program successfully. Mark On Thu, Nov 7, 2013 at 5:51 PM, niloofar niknam niloofae_nik...@yahoo.com wrote: Dear gromacs users I have installed gromacs 4.6.1 with
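A couple of quick sanity checks of the OpenMPI installation itself, before involving GROMACS at all (a sketch; adjust the rank count to your machine):

    which mpicc mpirun
    ompi_info | head          # confirms which OpenMPI build is on the PATH
    mpirun -np 8 hostname     # all 8 ranks should print the host name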

[gmx-users] Re: choosing force field

2013-11-07 Thread pratibha
My protein contains metal ions which are parameterized only in the GROMOS force field. Since I am a newbie to MD simulations, it would be difficult for me to parameterize them myself. Can you please guide me, as per my previous mail, which of the two simulations I should consider more

Re: [gmx-users] Re: choosing force field

2013-11-07 Thread Justin Lemkul
On 11/7/13 12:14 PM, pratibha wrote: My protein contains metal ions which are parameterized only in gromos force field. Since I am a newbie to MD simulations, it would be difficult for me to parameterize those myself. Can you please guide me as per my previous mail which out of the two

[gmx-users] Re: LIE method with PME

2013-11-07 Thread Williams Ernesto Miranda Delgado
Hello, I performed MD simulations of several protein-ligand complexes and solvated ligands using PME for long-range electrostatics. I want to calculate the binding free energy using the LIE method, but when using g_energy I only get Coul-SR. How can I deal with the ligand-environment long range

Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread rajat desikan
Thank you, Mark. I think that running it on CPUs is a safer choice at present. On Thu, Nov 7, 2013 at 9:41 PM, Mark Abraham mark.j.abra...@gmail.com wrote: Hi, It's not easy to be explicit. CHARMM wasn't parameterized with PME, so the original paper's coulomb settings can be taken with a

Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Mark Abraham
Reasonable, but CPU-only is not 100% conforming either; IIRC the CHARMM switch differs from the GROMACS switch (Justin linked a paper here with the CHARMM switch description a month or so back, but I don't have that link to hand). Mark On Thu, Nov 7, 2013 at 8:45 PM, rajat desikan

Re: [gmx-users] Re: LIE method with PME

2013-11-07 Thread Mark Abraham
If the long-range component of your electrostatics model is not decomposable by group (which it isn't), then you can't use that with LIE. See the hundreds of past threads on this topic :-) Mark On Thu, Nov 7, 2013 at 8:34 PM, Williams Ernesto Miranda Delgado wmira...@fbio.uh.cu wrote: Hello

Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-07 Thread Jones de Andrade
Did it a few days ago. Not so much of a problem here. But I compiled everything, including fftw, with it. The only error I got was that I should turn off the separable compilation, and that the user must be in the group video. My settings are (yes, I know it should go better with openmp, but

Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Gianluca Interlandi
Hi Mark! I think that this is the paper that you are referring to: dx.doi.org/10.1021/ct900549r Also for your reference, these are the settings that Justin recommended using with CHARMM in gromacs: vdwtype = switch rlist = 1.2 rlistlong = 1.4 rvdw = 1.2 rvdw-switch = 1.0 rcoulomb = 1.2 As
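For readability, those recommended settings as .mdp lines (exactly as quoted above; GROMACS 4.6 group-scheme keyword names):

    vdwtype      = switch
    rlist        = 1.2
    rlistlong    = 1.4
    rvdw         = 1.2
    rvdw-switch  = 1.0
    rcoulomb     = 1.2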

Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-07 Thread Mark Abraham
You will do much better with gcc+openmp than icc+openmp! Mark On Thu, Nov 7, 2013 at 9:17 PM, Jones de Andrade johanne...@gmail.com wrote: Did it a few days ago. Not so much of a problem here. But I compiled everything, including fftw, with it. The only error I got was that I should turn

mdrun on 8-core AMD + GTX TITAN (was: Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs)

2013-11-07 Thread Szilárd Páll
Let's not hijack James' thread as your hardware is different from his. On Tue, Nov 5, 2013 at 11:00 PM, Dwey Kauffman mpi...@gmail.com wrote: Hi Szilard, Thanks for your suggestions. I am indeed aware of this page. On an 8-core AMD with 1 GPU, I am very happy with its performance. See

Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-07 Thread Szilárd Páll
On Thu, Nov 7, 2013 at 6:34 AM, James Starlight jmsstarli...@gmail.com wrote: I've come to the conclusion that simulations with 1 or 2 GPUs simultaneously gave me the same performance: mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v -deffnm md_CaM_test, mdrun -ntmpi 2 -ntomp 6 -gpu_id 0 -v -deffnm

[gmx-users] Re: LIE method with PME

2013-11-07 Thread Williams Ernesto Miranda Delgado
Thank you Mark. What do you think about doing a rerun on the trajectories generated previously with PME, but this time using coulombtype = cut-off? Could you suggest a cut-off value? Thanks again, Williams

[gmx-users] Question about make_ndx and g_angle

2013-11-07 Thread Chang Woon Jang
Dear Users, I am using openSUSE 12.3 and trying to use make_ndx and g_angle. When I try the following command, there is an error message: ./make_ndx -f data.pdb ./make_ndx: error while loading shared libraries: libcudart.so.4: cannot open shared object file: No such file or directory Do
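The error means the dynamic loader cannot find the CUDA runtime library the binary was linked against; pointing LD_LIBRARY_PATH at it usually fixes this (a sketch; the CUDA install location is hypothetical):

    export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
    ./make_ndx -f data.pdb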

Re: [gmx-users] Re: LIE method with PME

2013-11-07 Thread Mark Abraham
I'd at least use RF! Use a cut-off consistent with the force field parameterization. And hope the LIE correlates with reality! Mark On Nov 7, 2013 10:39 PM, Williams Ernesto Miranda Delgado wmira...@fbio.uh.cu wrote: Thank you Mark What do you think about making a rerun on the trajectories
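A minimal sketch of the rerun idea being discussed (GROMACS 4.x keyword names; the cut-off value and group names are placeholders, not recommendations):

    ; rerun .mdp fragment: reaction-field electrostatics instead of PME
    coulombtype  = Reaction-Field
    rcoulomb     = 1.4            ; placeholder -- use the force field's own cut-off
    epsilon_rf   = 78             ; dielectric beyond the cut-off
    energygrps   = LIG SOL        ; hypothetical groups for ligand-environment terms

The energies would then be re-evaluated with grompp plus mdrun -rerun on the existing trajectory, and extracted with g_energy.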

Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-07 Thread Jones de Andrade
Really? And what about gcc+mpi? Should I expect any improvement? On Thu, Nov 7, 2013 at 6:51 PM, Mark Abraham mark.j.abra...@gmail.com wrote: You will do much better with gcc+openmp than icc+openmp! Mark On Thu, Nov 7, 2013 at 9:17 PM, Jones de Andrade johanne...@gmail.com wrote: Did it

[gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Rajat Desikan
Dear All, The settings that I mentioned above are from Klauda et al., for a POPE membrane system. They can be found in charmm_npt.mdp on Lipidbook (link below): http://lipidbook.bioch.ox.ac.uk/package/show/id/48.html Is there any reason not to use their .mdp parameters for a membrane-protein

[gmx-users] after using ACPYPE , GROMACS OPLS itp file generated an atom type like opls_x with mass 0.000

2013-11-07 Thread aditya sarma
Hi, I was trying to generate a topology for a p-phenylene vinylene polymer with the OPLS force field using acpype. The itp file I got has the atom type opls_x with mass 0.00. Is there any way to rectify this? After reading through how acpype works I found out this was one of the possible errors, but there

[gmx-users] mpi segmentation error in continuation of REMD simulation with gromacs 4.5.5

2013-11-07 Thread Qin Qiao
Dear all, I'm trying to continue a REMD simulation using gromacs 4.5.5 under the NPT ensemble, and I got the following errors when I tried to use 2 cores per replica: [node-ib-4.local:mpi_rank_25][error_sighandler] Caught error: Segmentation fault (signal 11)

[gmx-users] Ligand simulation

2013-11-07 Thread Kavyashree M
Dear users, Although this topic has been extensively discussed on the list previously, I am unclear about the solution to the problem. While running a ligand-in-water simulation (EM) with RF-0 I get the following message: