Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-07 Thread Mark Abraham
First, there is no value in ascribing problems to the hardware if the
simulation setup is not yet balanced, or not large enough to provide enough
atoms and long enough rlist to saturate the GPUs, etc. Look at the log
files and see what complaints mdrun makes about things like PME load
balance, and the times reported for different components of the simulation,
because these must differ between the two runs you report. diff -y -W 160
*log |less is your friend. Some (non-GPU-specific) background information
in part 5 here
http://www.gromacs.org/Documentation/Tutorials/GROMACS_USA_Workshop_and_Conference_2013/Topology_preparation%2c_%22What's_in_a_log_file%22%2c_basic_performance_improvements%3a_Mark_Abraham%2c_Session_1A
(though
I recommend the PDF version)
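For example, a quick way to put the two logs side by side and pull out the
load-balance notes is something like this (a sketch; the log file names are
placeholders for your two runs):

  diff -y -W 160 run_1gpu.log run_2gpu.log | less
  grep -A 2 "Force evaluation time GPU/CPU" run_1gpu.log run_2gpu.log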

Mark


On Thu, Nov 7, 2013 at 6:34 AM, James Starlight jmsstarli...@gmail.com wrote:

 I've come to the conclusion that simulations with 1 or 2 GPUs give me the
 same performance:
 mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test,

 mdrun -ntmpi 2 -ntomp 6 -gpu_id 0 -v  -deffnm md_CaM_test,

 Could it be due to too few CPU cores, or is additional RAM (this system has 32
 GB) needed? Or maybe some extra options are needed in the config?

 James




 2013/11/6 Richard Broadbent richard.broadben...@imperial.ac.uk

  Hi Dwey,
 
 
  On 05/11/13 22:00, Dwey Kauffman wrote:
 
  Hi Szilard,
 
  Thanks for your suggestions. I am indeed aware of this page. In an 8-core
  AMD with 1 GPU, I am very happy with its performance. See below. My
  intention is to obtain an even better one because we have multiple nodes.
 
  ### 8 core AMD with  1 GPU,
  Force evaluation time GPU/CPU: 4.006 ms/2.578 ms = 1.554
  For optimal performance this ratio should be close to 1!
 
 
  NOTE: The GPU has 20% more load than the CPU. This imbalance causes
 performance loss, consider using a shorter cut-off and a finer
 PME
  grid.
 
                 Core t (s)   Wall t (s)      (%)
         Time:   216205.510    27036.812    799.7
                               7h30:36
                   (ns/day)    (hour/ns)
  Performance:       31.956        0.751

   ### 8 core AMD with 2 GPUs

                 Core t (s)   Wall t (s)      (%)
         Time:   178961.450    22398.880    799.0
                               6h13:18
                   (ns/day)    (hour/ns)
  Performance:       38.573        0.622
  Finished mdrun on node 0 Sat Jul 13 09:24:39 2013
 
 
  I'm almost certain that Szilard meant the lines above this that give the
  breakdown of where the time is spent in the simulation.
 
  Richard
 
 
   However, in your case I suspect that the
  bottleneck is multi-threaded scaling on the AMD CPUs and you should
  probably decrease the number of threads per MPI rank and share GPUs
  between 2-4 ranks.
 
 
 
  OK, but can you give an example of an mdrun command, given an 8-core AMD
  with 2 GPUs?
  I will try to run it again.
 
 
   Regarding scaling across nodes, you can't expect much from gigabit
  ethernet - especially not from the cheaper cards/switches, in my
  experience even reaction field runs don't scale across nodes with 10G
  ethernet if you have more than 4-6 ranks per node trying to
  communicate (let alone with PME). However, on infiniband clusters we
  have seen scaling to 100 atoms/core (at peak).
 
 
   From your comments, it sounds like a cluster of AMD CPUs is difficult to
  scale across nodes in our current setup.

  Let's assume we install Infiniband (20 or 40 Gb/s) in the same system of 16
  nodes of 8-core AMD with 1 GPU only. Considering the same AMD system, what
  is a good way to obtain better performance when we run a task across nodes?
  In other words, what does the mdrun_mpi command look like?
 
  Thanks,
  Dwey
 
 
 
 
  --
  View this message in context:
  http://gromacs.5086.x6.nabble.com/Gromacs-4-6-on-two-Titans-GPUs-tp5012186p5012279.html
  Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
 

Re: [gmx-users] nose-hoover vs v-rescale in implicit solvent

2013-11-07 Thread Mark Abraham
I think either is correct for practical purposes.

Mark


On Thu, Nov 7, 2013 at 8:41 AM, Gianluca Interlandi 
gianl...@u.washington.edu wrote:

 Does it make more sense to use nose-hoover or v-rescale when running in
 implicit solvent GBSA? I understand that this might be a matter of opinion.

 Thanks,

  Gianluca

 -
 Gianluca Interlandi, PhD gianl...@u.washington.edu
 +1 (206) 685 4435
 http://artemide.bioeng.washington.edu/

 Research Scientist at the Department of Bioengineering
 at the University of Washington, Seattle WA U.S.A.
 -


Re: [gmx-users] Re: single point calculation with gromacs

2013-11-07 Thread Mark Abraham
On Wed, Nov 6, 2013 at 4:07 PM, fantasticqhl fantastic...@gmail.com wrote:

 Dear Justin,

 I am sorry for the late reply. I still can't figure it out.


It isn't rocket science - your two .mdp files describe totally different
model physics. To compare things, change as few things as necessary to
generate the comparison. So use the same input .mdp file for the MD vs EM
single-point comparison, just changing the integrator line, and maybe
unconstrained-start (I forget the details). And be aware of
http://www.gromacs.org/Documentation/How-tos/Single-Point_Energy
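As a rough sketch of the single-point route (file names here are placeholders),
reuse the MD .mdp and do a rerun of the single configuration:

  grompp -f md.mdp -c conf.gro -p topol.top -o sp.tpr
  mdrun -s sp.tpr -rerun conf.gro -deffnm sp
  g_energy -f sp.edr

For the EM vs MD comparison, only the integrator line in that .mdp then needs
to change, as noted above.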

Mark

Could you please send me the mdp file which was used for your single point
 calculations.
 I want to do some comparison and then solve the problem.
 Thanks very much!


 All the best,
 Qinghua

 --
 View this message in context:
 http://gromacs.5086.x6.nabble.com/single-point-calculation-with-gromacs-tp5012084p5012295.html
 Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


[gmx-users] Error in Umbrella sampling command

2013-11-07 Thread Arunima Shilpi
Dear Sir
Presently I am working with the example file as given in the umbrella
sampling tutorial.

While running the following command

grompp -f npt_umbrella.mdp -c conf0.gro -p topol.top -n index.ndx -o npt0.tpr

I got the following error. How do I debug it?


Ignoring obsolete mdp entry 'title'

Back Off! I just backed up mdout.mdp to ./#mdout.mdp.5#

NOTE 1 [file npt_umbrella.mdp]:
  nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, setting
  nstcomm to nstcalcenergy


NOTE 2 [file npt_umbrella.mdp]:
  leapfrog does not yet support Nose-Hoover chains, nhchainlength reset to 1


WARNING 1 [file npt_umbrella.mdp]:
  You are generating velocities so I am assuming you are equilibrating a
  system. You are using Parrinello-Rahman pressure coupling, but this can
  be unstable for equilibration. If your system crashes, try equilibrating
  first with Berendsen pressure coupling. If you are not equilibrating the
  system, you can probably ignore this warning.


ERROR 1 [file npt_umbrella.mdp]:
  Generating velocities is inconsistent with attempting to continue a
  previous run. Choose only one of gen-vel = yes and continuation = yes.

Generated 165 of the 1596 non-bonded parameter combinations
Excluding 3 bonded neighbours molecule type 'Protein_chain_A'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_B'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_C'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_D'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_E'
turning all bonds into constraints...
Excluding 2 bonded neighbours molecule type 'SOL'
turning all bonds into constraints...
Excluding 1 bonded neighbours molecule type 'NA'
turning all bonds into constraints...
Excluding 1 bonded neighbours molecule type 'CL'
turning all bonds into constraints...
Velocities were taken from a Maxwell distribution at 300 K

There were 2 notes

There was 1 warning

---
Program grompp_mpi_d, VERSION 4.6.3
Source code file:
/opt/apps/GROMACS/GROMACS-SOURCE/gromacs-4.6.3/src/kernel/grompp.c, line: 1593

Fatal error:
There was 1 error in input file(s)
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


I request you to kindly help me to debug the error

Regards

Arunima


Re: [gmx-users] Error in Umbrella sampling command

2013-11-07 Thread Justin Lemkul



On 11/7/13 6:27 AM, Arunima Shilpi wrote:

Dear Sir
Presently I am working with the example file as given in the umbrella
sampling tutorial.

While running the following command

grompp -f npt_umbrella.mdp -c conf0.gro -p topol.top -n index.ndx -o npt0.tpr

I got the following error. How to debug this error.


Ignoring obsolete mdp entry 'title'

Back Off! I just backed up mdout.mdp to ./#mdout.mdp.5#

NOTE 1 [file npt_umbrella.mdp]:
   nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, setting
   nstcomm to nstcalcenergy


NOTE 2 [file npt_umbrella.mdp]:
   leapfrog does not yet support Nose-Hoover chains, nhchainlength reset to 1


WARNING 1 [file npt_umbrella.mdp]:
   You are generating velocities so I am assuming you are equilibrating a
   system. You are using Parrinello-Rahman pressure coupling, but this can
   be unstable for equilibration. If your system crashes, try equilibrating
   first with Berendsen pressure coupling. If you are not equilibrating the
   system, you can probably ignore this warning.


ERROR 1 [file npt_umbrella.mdp]:
   Generating velocities is inconsistent with attempting to continue a
   previous run. Choose only one of gen-vel = yes and continuation = yes.



Either the run is starting for the first time (gen_vel = yes and continuation = 
no) or it is a continuation (gen_vel = no and continuation = yes).  In this 
case, set continuation = no since it is the first run.
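In .mdp terms, the two consistent combinations are, roughly:

; first run (equilibration): generate velocities, no continuation
gen_vel      = yes
continuation = no

; continuing a previous run: keep the existing velocities
gen_vel      = no
continuation = yes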


-Justin


Generated 165 of the 1596 non-bonded parameter combinations
Excluding 3 bonded neighbours molecule type 'Protein_chain_A'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_B'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_C'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_D'
turning all bonds into constraints...
Excluding 3 bonded neighbours molecule type 'Protein_chain_E'
turning all bonds into constraints...
Excluding 2 bonded neighbours molecule type 'SOL'
turning all bonds into constraints...
Excluding 1 bonded neighbours molecule type 'NA'
turning all bonds into constraints...
Excluding 1 bonded neighbours molecule type 'CL'
turning all bonds into constraints...
Velocities were taken from a Maxwell distribution at 300 K

There were 2 notes

There was 1 warning

---
Program grompp_mpi_d, VERSION 4.6.3
Source code file:
/opt/apps/GROMACS/GROMACS-SOURCE/gromacs-4.6.3/src/kernel/grompp.c, line: 1593

Fatal error:
There was 1 error in input file(s)
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


I request you to kindly help me to debug the error

Regards

Arunima



--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Rajat Desikan
Dear All,

Any suggestions? 

Thank you.

--
View this message in context: 
http://gromacs.5086.x6.nabble.com/CHARMM-mdp-settings-for-GPU-tp5012267p5012316.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


[gmx-users] DSSP output

2013-11-07 Thread Anirban
Hi ALL,

Is there any way to get the percentage of each secondary structural content
of a protein using do_dssp if I supply a single PDB to it?
And how to plot the data of the -sc output from do_dssp?
Any suggestion is welcome.

Regards,

Anirban


Re: [gmx-users] DSSP output

2013-11-07 Thread Justin Lemkul



On 11/7/13 8:24 AM, Anirban wrote:

Hi ALL,

Is there any way to get the percentage of each secondary structural content
of a protein using do_dssp if I supply a single PDB to it?


The output of scount.xvg has the percentages, but it's also trivial to do it for 
one snapshot.  The contents of scount.xvg are the number of residues present in 
each type of secondary structure, and you know the total number of residues...
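For instance, a throwaway awk line can turn the counts into percentages (a
sketch; replace 148 with your protein's total residue count; the # and @
header lines of the .xvg are skipped):

awk -v nres=148 '!/^[#@]/ { printf "%s", $1; for (i = 2; i <= NF; i++) printf "  %.1f", 100*$i/nres; print "" }' scount.xvg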



And how to plot the data of the -sc output from do_dssp?


Like any multiple data set.  xmgrace -nxy
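A minimal sketch for a single PDB (file names assumed; do_dssp needs the dssp
binary installed and pointed to by the DSSP environment variable):

do_dssp -s model.pdb -f model.pdb -o ss.xpm -sc scount.xvg
xmgrace -nxy scount.xvg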

-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-07 Thread ahmed.sajid
Hi,

I'm having trouble compiling v 4.6.3 with GPU support using CUDA 5.5.22.

The configuration runs okay and I have made sure that I have set paths 
correctly.

I'm getting errors:

$ make
[  0%] Building NVCC (Device) object 
src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
icc: command line warning #10006: ignoring unknown option '-dumpspecs'
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crt1.o: In function 
`_start':
(.text+0x20): undefined reference to `main'
CMake Error at cuda_tools_generated_pmalloc_cuda.cu.o.cmake:206 (message):
  Error generating
  
/apps/src/gromacs/gromacs-4.6.3/src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o


make[2]: *** 
[src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/./cuda_tools_generated_pmalloc_cuda.cu.o]
 Error 1
make[1]: *** [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/all] Error 2
make: *** [all] Error 2

Any help would be appreciated.

Regards,
Ahmed.

-- 
Scanned by iCritical.



Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Mark Abraham
Hi,

It's not easy to be explicit. CHARMM wasn't parameterized with PME, so the
original paper's coulomb settings can be taken with a grain of salt for use
with PME - others' success in practice should be a guideline here. The good
news is that the default GROMACS PME settings are pretty good for at least
some problems (http://pubs.acs.org/doi/abs/10.1021/ct4005068), and the GPU
auto-tuning of parameters in 4.6 is designed to preserve the right sorts of
things.

LJ is harder because it would make good sense to preserve the way CHARMM
did it, but IIRC you can't use something equivalent to the CHARMM LJ shift
with the Verlet kernels, either natively or with a table. We hope to fix
that in 5.0, but code is not written yet. I would probably use vdwtype =
cut-off, vdw-modifier = potential-shift-verlet and rcoulomb=rlist=rvdw=1.2,
but I don't run CHARMM simulations for a living ;-)
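In .mdp form that suggestion would look roughly like this (a sketch, not a
validated CHARMM setup; the Verlet scheme is what the native GPU kernels
require):

cutoff-scheme = Verlet
coulombtype   = PME
vdwtype       = cut-off
vdw-modifier  = potential-shift-verlet
rlist         = 1.2
rcoulomb      = 1.2
rvdw          = 1.2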

Mark


On Thu, Nov 7, 2013 at 1:42 PM, Rajat Desikan rajatdesi...@gmail.com wrote:

 Dear All,

 Any suggestions?

 Thank you.

 --
 View this message in context:
 http://gromacs.5086.x6.nabble.com/CHARMM-mdp-settings-for-GPU-tp5012267p5012316.html
 Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


[gmx-users] installing gromacs 4.6.1 with openmpi

2013-11-07 Thread niloofar niknam
Dear gromacs users
I have installed gromacs 4.6.1 with cmake 2.8.12, fftw 3.3.3 and openmpi-1.6.4
on a single machine with 8 cores (Red Hat Enterprise Linux 6.1). During the
openmpi installation (I used make -jN) and also in the gromacs installation (I
used the make -j N command), everything seemed OK, but when I want to use
mpirun -np N mdrun I face this error:

mpiexec failed: gethostbyname_ex failed for Bioinf2

(I can run mdrun with just one CPU.) Any suggestion would be highly appreciated.
thanks in advance,
Niloofar


Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-07 Thread Mark Abraham
icc and CUDA is pretty painful. I'd suggest getting latest gcc.
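For example (a sketch; the CUDA path is an assumption), pointing CMake at gcc
in a fresh build directory:

CC=gcc CXX=g++ cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda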

Mark


On Thu, Nov 7, 2013 at 2:42 PM, ahmed.sa...@stfc.ac.uk wrote:

 Hi,

 I'm having trouble compiling v 4.6.3 with GPU support using CUDA 5.5.22.

 The configuration runs okay and I have made sure that I have set paths
 correctly.

 I'm getting errors:

 $ make
 [  0%] Building NVCC (Device) object
 src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
 icc: command line warning #10006: ignoring unknown option '-dumpspecs'
 /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crt1.o: In
 function `_start':
 (.text+0x20): undefined reference to `main'
 CMake Error at cuda_tools_generated_pmalloc_cuda.cu.o.cmake:206 (message):
   Error generating

 /apps/src/gromacs/gromacs-4.6.3/src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o


 make[2]: ***
 [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/./cuda_tools_generated_pmalloc_cuda.cu.o]
 Error 1
 make[1]: *** [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/all] Error 2
 make: *** [all] Error 2

 Any help would be appreciated.

 Regards,
 Ahmed.

 --
 Scanned by iCritical.



Re: [gmx-users] installing gromacs 4.6.1 with openmpi

2013-11-07 Thread Mark Abraham
Sounds like a non-GROMACS problem. I think you should explore configuring
OpenMPI correctly, and show you can run an MPI test program successfully.
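For example (a sketch), check that a trivial MPI launch works at all before
involving mdrun:

mpirun -np 8 hostname

If that fails with the same gethostbyname error, the machine's host name
(Bioinf2) is probably not resolvable, which is an MPI/host configuration issue
rather than a GROMACS one.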

Mark


On Thu, Nov 7, 2013 at 5:51 PM, niloofar niknam
niloofae_nik...@yahoo.comwrote:

 Dear gromacs users
 I have installed gromacs 4.6.1 with cmake 2.8.12, fftw3.3.3 and
 openmpi-1.6.4 on a single machine with 8 cores(Red Hat Enterprise linux
 6.1) . During openmpi installation ( I used make -jN) and also in gromacs
 installation ( I used make -j N command), everything seemed ok but when I
 want to use mpirun -np N mdrun I face this error:

 mpiexec failed: gethostbyname_ex failed for Bioinf2
 (I can run mdrun with just one cpu).Any suggestion would be highly
 appreciated.
 thanks in advance,
 Niloofar


[gmx-users] Re: choosing force field

2013-11-07 Thread pratibha
My protein contains metal ions which are parameterized only in the GROMOS
force field. Since I am a newbie to MD simulations, it would be difficult for
me to parameterize those myself.
Can you please guide me, as per my previous mail, on which of the two
simulations I should consider more reliable - 43a1 or 53a7?
Thanks in advance.

--
View this message in context: 
http://gromacs.5086.x6.nabble.com/choosing-force-field-tp5012242p5012322.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


Re: [gmx-users] Re: choosing force field

2013-11-07 Thread Justin Lemkul



On 11/7/13 12:14 PM, pratibha wrote:

My protein contains metal ions which are parameterized only in gromos force
field. Since I am a newbie to MD simulations, it would be difficult for me
to parameterize those myself.
Can you please guide me as per my previous mail  which out of the two
simulations should I consider  more reliable-43a1 or 53a7?


AFAIK, there is no such thing as 53A7, and your original message was full of 
similar typos, making it nearly impossible to figure out what you were actually 
doing.  Can you indicate the actual force field(s) that you have been using in 
case someone has any ideas?  The difference between 53A6 and 54A7 should be 
quite pronounced, in my experience, thus any guesses as to what 53A7 should be 
doing are not productive because I don't know what that is.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] Re: LIE method with PME

2013-11-07 Thread Williams Ernesto Miranda Delgado
Hello
I performed MD simulations of several protein-ligand complexes and
solvated ligands using PME for long-range electrostatics. I want to
calculate the binding free energy using the LIE method, but when using
g_energy I only get Coul-SR. How can I deal with the ligand-environment
long-range electrostatic interaction using gromacs? I have seen other
discussion lists but I couldn't arrive at a solution. Could you please
help me?
Thank you
Williams




Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread rajat desikan
Thank you, Mark. I think that running it on CPUs is a safer choice at
present.


On Thu, Nov 7, 2013 at 9:41 PM, Mark Abraham mark.j.abra...@gmail.com wrote:

 Hi,

 It's not easy to be explicit. CHARMM wasn't parameterized with PME, so the
 original paper's coulomb settings can be taken with a grain of salt for use
 with PME - others' success in practice should be a guideline here. The good
 news is that the default GROMACS PME settings are pretty good for at least
 some problems (http://pubs.acs.org/doi/abs/10.1021/ct4005068), and the GPU
 auto-tuning of parameters in 4.6 is designed to preserve the right sorts of
 things.

 LJ is harder because it would make good sense to preserve the way CHARMM
 did it, but IIRC you can't use something equivalent to the CHARMM LJ shift
 with the Verlet kernels, either natively or with a table. We hope to fix
 that in 5.0, but code is not written yet. I would probably use vdwtype =
 cut-off, vdw-modifier = potential-shift-verlet and rcoulomb=rlist=rvdw=1.2,
 but I don't run CHARMM simulations for a living ;-)

 Mark


 On Thu, Nov 7, 2013 at 1:42 PM, Rajat Desikan rajatdesi...@gmail.com
 wrote:

  Dear All,
 
  Any suggestions?
 
  Thank you.
 
  --
  View this message in context:
 
 http://gromacs.5086.x6.nabble.com/CHARMM-mdp-settings-for-GPU-tp5012267p5012316.html
  Sent from the GROMACS Users Forum mailing list archive at Nabble.com.




-- 
Rajat Desikan (Ph.D Scholar)
Prof. K. Ganapathy Ayappa's Lab (no 13),
Dept. of Chemical Engineering,
Indian Institute of Science, Bangalore


Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Mark Abraham
Reasonable, but CPU-only is not 100% conforming either; IIRC the CHARMM
switch differs from the GROMACS switch (Justin linked a paper here with the
CHARMM switch description a month or so back, but I don't have that link to
hand).

Mark


On Thu, Nov 7, 2013 at 8:45 PM, rajat desikan rajatdesi...@gmail.com wrote:

 Thank you, Mark. I think that running it on CPUs is a safer choice at
 present.


 On Thu, Nov 7, 2013 at 9:41 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  Hi,
 
  It's not easy to be explicit. CHARMM wasn't parameterized with PME, so
 the
  original paper's coulomb settings can be taken with a grain of salt for
 use
  with PME - others' success in practice should be a guideline here. The
 good
  news is that the default GROMACS PME settings are pretty good for at
 least
  some problems (http://pubs.acs.org/doi/abs/10.1021/ct4005068), and the
 GPU
  auto-tuning of parameters in 4.6 is designed to preserve the right sorts
 of
  things.
 
  LJ is harder because it would make good sense to preserve the way CHARMM
  did it, but IIRC you can't use something equivalent to the CHARMM LJ
 shift
  with the Verlet kernels, either natively or with a table. We hope to fix
  that in 5.0, but code is not written yet. I would probably use vdwtype =
  cut-off, vdw-modifier = potential-shift-verlet and
 rcoulomb=rlist=rvdw=1.2,
  but I don't run CHARMM simulations for a living ;-)
 
  Mark
 
 
  On Thu, Nov 7, 2013 at 1:42 PM, Rajat Desikan rajatdesi...@gmail.com
  wrote:
 
   Dear All,
  
   Any suggestions?
  
   Thank you.
  
   --
   View this message in context:
  
 
 http://gromacs.5086.x6.nabble.com/CHARMM-mdp-settings-for-GPU-tp5012267p5012316.html
   Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
 



 --
 Rajat Desikan (Ph.D Scholar)
 Prof. K. Ganapathy Ayappa's Lab (no 13),
 Dept. of Chemical Engineering,
 Indian Institute of Science, Bangalore


Re: [gmx-users] Re: LIE method with PME

2013-11-07 Thread Mark Abraham
If the long-range component of your electrostatics model is not
decomposable by group (which it isn't), then you can't use that with LIE.
See the hundreds of past threads on this topic :-)

Mark


On Thu, Nov 7, 2013 at 8:34 PM, Williams Ernesto Miranda Delgado 
wmira...@fbio.uh.cu wrote:

 Hello
 I performed MD simulations of several Protein-ligand complexes and
 solvated Ligands using PME for log range electrostatics. I want to
 calculate the binding free energy using the LIE method, but when using
 g_energy I only get Coul-SR. How can I deal with Ligand-environment long
 range electrostatic interaction using gromacs? I have seen other
 discussion lists but I couldn't arrive to a solution. Could you please
 help me?
 Thank you
 Williams




Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-07 Thread Jones de Andrade
Did it a few days ago. Not so much of a problem here.

But I compiled everything, including fftw, with it. The only error I got
was that I should turn off the separable compilation, and that the user
must be in the group video.

My settings are (yes, I know it should go better with OpenMP, but OpenMP
goes horribly on our cluster, I don't know why):

setenv CC  /opt/intel/bin/icc
setenv CXX /opt/intel/bin/icpc
setenv F77 /opt/intel/bin/ifort
setenv CMAKE_PREFIX_PATH /storage/home/johannes/lib/fftw/vanilla/
mkdir build
cd build
cmake .. -DGMX_GPU=ON -DCUDA_SEPARABLE_COMPILATION=OFF
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_OPENMP=OFF -DGMX_MPI=ON
-DGMX_THREAD_MPI=OFF -DMPIEXEC_MAX_NUMPROCS=1024 -DBUILD_SHARED_LIBS=OFF
-DGMX_PREFER_STATIC_LIBS=ON
-DCMAKE_INSTALL_PREFIX=/storage/home/johannes/bin/gromacs/vanilla/
make
make install
cd ..
rm -rf build


On Thu, Nov 7, 2013 at 3:02 PM, Mark Abraham mark.j.abra...@gmail.com wrote:

 icc and CUDA is pretty painful. I'd suggest getting latest gcc.

 Mark




Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Gianluca Interlandi

Hi Mark!

I think that this is the paper that you are referring to:

dx.doi.org/10.1021/ct900549r

Also for your reference, these are the settings that Justin recommended 
using with CHARMM in gromacs:


vdwtype = switch
rlist = 1.2
rlistlong = 1.4
rvdw = 1.2
rvdw-switch = 1.0
rcoulomb = 1.2

As you mention the switch function in gromacs is different than in CHARMM 
but it appears that the difference is very small.


Gianluca

On Thu, 7 Nov 2013, Mark Abraham wrote:


Reasonable, but CPU-only is not 100% conforming either; IIRC the CHARMM
switch differs from the GROMACS switch (Justin linked a paper here with the
CHARMM switch description a month or so back, but I don't have that link to
hand).

Mark





-
Gianluca Interlandi, PhD gianl...@u.washington.edu
+1 (206) 685 4435
http://artemide.bioeng.washington.edu/

Research Scientist at the Department of Bioengineering
at the University of Washington, Seattle WA U.S.A.
-


Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-07 Thread Mark Abraham
You will do much better with gcc+openmp than icc-openmp!

Mark


On Thu, Nov 7, 2013 at 9:17 PM, Jones de Andrade johanne...@gmail.com wrote:

 Did it a few days ago. Not so much of a problem here.

 But I compiled everything, including fftw, with it. The only error I got
 was that I should turn off the separable compilation, and that the user
 must be in the group video.

 My settings are (yes, I know it should go better with openmp, but openmp
 goes horrobly in our cluster, I don't know why):

 setenv CC  /opt/intel/bin/icc
 setenv CXX /opt/intel/bin/icpc
 setenv F77 /opt/intel/bin/ifort
 setenv CMAKE_PREFIX_PATH /storage/home/johannes/lib/fftw/vanilla/
 mkdir build
 cd build
 cmake .. -DGMX_GPU=ON -DCUDA_SEPARABLE_COMPILATION=OFF
 -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_OPENMP=OFF -DGMX_MPI=ON
 -DGMX_THREAD_MPI=OFF -DMPIEXEC_MAX_NUMPROCS=1024 -DBUILD_SHARED_LIBS=OFF
 -DGMX_PREFER_STATIC_LIBS=ON
 -DCMAKE_INSTALL_PREFIX=/storage/home/johannes/bin/gromacs/vanilla/
 make
 make install
 cd ..
 rm -rf build


 On Thu, Nov 7, 2013 at 3:02 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  icc and CUDA is pretty painful. I'd suggest getting latest gcc.
 
  Mark
 
 


mdrun on 8-core AMD + GTX TITAN (was: Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs)

2013-11-07 Thread Szilárd Páll
Let's not hijack James' thread as your hardware is different from his.

On Tue, Nov 5, 2013 at 11:00 PM, Dwey Kauffman mpi...@gmail.com wrote:
 Hi Szilard,

Thanks for your suggestions. I am  indeed aware of this page. In a 8-core
 AMD with 1GPU, I am very happy about its performance. See below. My

Actually, I was jumping to conclusions too early: as you mentioned an AMD
cluster, I assumed you must have 12-16-core Opteron CPUs. If you
have an 8-core (desktop?) AMD CPU, then you may not need to run more
than one rank per GPU.

 intention is to obtain a even better one because we have multiple nodes.

Btw, I'm not sure it's an economically viable solution to install
Infiniband network - especially if you have desktop-class machines.
Such a network will end up costing $500 per machine just for a single
network card, let alone cabling and switches.


 ### 8 core AMD with  1 GPU,
 Force evaluation time GPU/CPU: 4.006 ms/2.578 ms = 1.554
 For optimal performance this ratio should be close to 1!


 NOTE: The GPU has 20% more load than the CPU. This imbalance causes
   performance loss, consider using a shorter cut-off and a finer PME
 grid.

                Core t (s)   Wall t (s)      (%)
        Time:   216205.510    27036.812    799.7
                              7h30:36
                  (ns/day)    (hour/ns)
 Performance:       31.956        0.751

 ### 8 core AMD with 2 GPUs

                Core t (s)   Wall t (s)      (%)
        Time:   178961.450    22398.880    799.0
                              6h13:18
                  (ns/day)    (hour/ns)
 Performance:       38.573        0.622
 Finished mdrun on node 0 Sat Jul 13 09:24:39 2013


Indeed, as Richard pointed out, I was asking for *full* logs; these
summaries can't tell much. The table above the summary, entitled "R E A
L   C Y C L E   A N D   T I M E   A C C O U N T I N G", as well as
other information reported across the log file, is what I need to make
an assessment of your simulations' performance.
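For reference, a quick way to pull that table out of a log is something like
this (the file name is a placeholder):

grep -A 30 "R E A L   C Y C L E" md.log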

However, in your case I suspect that the
bottleneck is multi-threaded scaling on the AMD CPUs and you should
probably decrease the number of threads per MPI rank and share GPUs
between 2-4 ranks.


 OK but can you give a example of mdrun command ? given a 8 core AMD with 2
 GPUs.
 I will try to run it again.

You could try running
mpirun -np 4 mdrun -ntomp 2 -gpu_id 0011
but I suspect this won't help because of your scaling issue.
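For reference, the -gpu_id string lists one GPU id per PP rank, so "0011"
puts ranks 0 and 1 on GPU 0 and ranks 2 and 3 on GPU 1. Two equivalent
sketches for a single 8-core, 2-GPU node (binary names may differ on your
system):

mdrun -ntmpi 4 -ntomp 2 -gpu_id 0011
mpirun -np 4 mdrun -ntomp 2 -gpu_id 0011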



Regarding scaling across nodes, you can't expect much from gigabit
ethernet - especially not from the cheaper cards/switches, in my
experience even reaction field runs don't scale across nodes with 10G
ethernet if you have more than 4-6 ranks per node trying to
communicate (let alone with PME). However, on infiniband clusters we
have seen scaling to 100 atoms/core (at peak).

 From your comments, it sounds like a cluster of AMD CPUs is difficult to
  scale across nodes in our current setup.

  Let's assume we install Infiniband (20 or 40 Gb/s) in the same system of 16
  nodes of 8-core AMD with 1 GPU only. Considering the same AMD system, what
  is a good way to obtain better performance when we run a task across nodes?
  In other words, what does the mdrun_mpi command look like?

 Thanks,
 Dwey




 --
 View this message in context: 
 http://gromacs.5086.x6.nabble.com/Gromacs-4-6-on-two-Titans-GPUs-tp5012186p5012279.html
 Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-07 Thread Szilárd Páll
On Thu, Nov 7, 2013 at 6:34 AM, James Starlight jmsstarli...@gmail.com wrote:
 I've come to the conclusion that simulations with 1 or 2 GPUs give me the
 same performance:
 mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test,

 mdrun -ntmpi 2 -ntomp 6 -gpu_id 0 -v  -deffnm md_CaM_test,

 Could it be due to too few CPU cores, or is additional RAM (this system has 32
 GB) needed? Or maybe some extra options are needed in the config?

GROMACS does not really need much (or fast) RAM, and it's most probably
not configuration settings that are causing the lack of scaling.

Given your setup, my guess is that your hardware is simply too imbalanced
to be used efficiently in GROMACS runs.

Please post *full* log files (FYI use e.g. http://pastebin.com), that
will help explain what is going on.


 James






[gmx-users] Re: LIE method with PME

2013-11-07 Thread Williams Ernesto Miranda Delgado
Thank you Mark
What do you think about doing a rerun on the trajectories generated
previously with PME, but this time using coulombtype = cut-off? Could you
suggest a cut-off value?
Thanks again
Williams



[gmx-users] Question about make_ndx and g_angle

2013-11-07 Thread Chang Woon Jang
Dear Users,

 I am using openSUSE 12.3 and trying to use make_ndx and g_angle. When I run
the following command, I get an error message:

 ./make_ndx -f data.pdb

./make_ndx: error while loading shared libraries: libcudart.so.4: cannot open
shared object file: No such file or directory

Do I need the CUDA library in order to use make_ndx and g_angle?

Thanks. 

Best regards,
Changwoon Jang
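
(A possible fix, assuming the CUDA toolkit is installed under /usr/local/cuda
-- adjust the path to your system. The analysis tools themselves do not use
the GPU, but a GPU-enabled GROMACS build typically links all tools against
the CUDA runtime, so the loader needs to be able to find libcudart:

  export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
  ./make_ndx -f data.pdb

Alternatively, a separate CPU-only build can be used for the analysis tools.)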


Re: [gmx-users] Re: LIE method with PME

2013-11-07 Thread Mark Abraham
I'd at least use RF! Use a cut-off consistent with the force field
parameterization. And hope the LIE correlates with reality!
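
For illustration, a minimal rerun setup might look like this (a sketch only:
file and group names are placeholders, and the 1.0 nm value is just an example
of "whatever cut-off the force field was parameterized with", not a
recommendation):

  ; rerun.mdp (fragment)
  coulombtype  = Reaction-Field
  epsilon_rf   = 78        ; solvent dielectric; 0 gives RF-zero
  rcoulomb     = 1.0       ; match the force-field cut-off
  rvdw         = 1.0
  energygrps   = LIG SOL   ; ligand/environment groups for the LIE terms

  grompp -f rerun.mdp -c conf.gro -p topol.top -o rerun.tpr
  mdrun -s rerun.tpr -rerun traj_pme.xtc -deffnm lie_rerun

The ligand-environment nonbonded energies can then be extracted from the
resulting .edr file with g_energy.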

Mark
On Nov 7, 2013 10:39 PM, Williams Ernesto Miranda Delgado 
wmira...@fbio.uh.cu wrote:

 Thank you Mark
 What do you think about making a rerun on the trajectories generated
 previously with PME but this time using coulombtype: cut-off? Could you
 suggest a cut off value?
 Thanks again
 Williams



Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-07 Thread Jones de Andrade
Really? And what about gcc+MPI? Should I expect any improvement?
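
For reference, a sketch of how the configure step quoted below might look with
gcc instead of icc (compiler paths and the install prefix are placeholders;
-DCUDA_HOST_COMPILER points nvcc at gcc as its host compiler):

  setenv CC  /usr/bin/gcc
  setenv CXX /usr/bin/g++
  setenv CMAKE_PREFIX_PATH /storage/home/johannes/lib/fftw/vanilla/
  mkdir build
  cd build
  cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
           -DCUDA_HOST_COMPILER=/usr/bin/gcc \
           -DGMX_OPENMP=ON -DGMX_MPI=ON -DGMX_THREAD_MPI=OFF \
           -DCMAKE_INSTALL_PREFIX=/storage/home/johannes/bin/gromacs/gcc/
  make
  make install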


 On Thu, Nov 7, 2013 at 6:51 PM, Mark Abraham mark.j.abra...@gmail.com wrote:

 You will do much better with gcc+openmp than icc-openmp!

 Mark


 On Thu, Nov 7, 2013 at 9:17 PM, Jones de Andrade johanne...@gmail.com
 wrote:

  Did it a few days ago. Not so much of a problem here.
 
  But I compiled everything, including fftw, with it. The only error I got
  was that I should turn off the separable compilation, and that the user
  must be in the group video.
 
   My settings are (yes, I know it should go better with OpenMP, but OpenMP
   performs horribly on our cluster, I don't know why):
 
  setenv CC  /opt/intel/bin/icc
  setenv CXX /opt/intel/bin/icpc
  setenv F77 /opt/intel/bin/ifort
  setenv CMAKE_PREFIX_PATH /storage/home/johannes/lib/fftw/vanilla/
  mkdir build
  cd build
  cmake .. -DGMX_GPU=ON -DCUDA_SEPARABLE_COMPILATION=OFF
  -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_OPENMP=OFF -DGMX_MPI=ON
  -DGMX_THREAD_MPI=OFF -DMPIEXEC_MAX_NUMPROCS=1024 -DBUILD_SHARED_LIBS=OFF
  -DGMX_PREFER_STATIC_LIBS=ON
  -DCMAKE_INSTALL_PREFIX=/storage/home/johannes/bin/gromacs/vanilla/
  make
  make install
  cd ..
  rm -rf build
 
 
  On Thu, Nov 7, 2013 at 3:02 PM, Mark Abraham mark.j.abra...@gmail.com
  wrote:
 
   icc and CUDA is pretty painful. I'd suggest getting latest gcc.
  
   Mark
  
  
   On Thu, Nov 7, 2013 at 2:42 PM, ahmed.sa...@stfc.ac.uk wrote:
  
Hi,
   
I'm having trouble compiling v 4.6.3 with GPU support using CUDA
  5.5.22.
   
The configuration runs okay and I have made sure that I have set
 paths
correctly.
   
I'm getting errors:
   
$ make
[  0%] Building NVCC (Device) object
   
  
 
 src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
icc: command line warning #10006: ignoring unknown option
 '-dumpspecs'
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crt1.o: In
function `_start':
(.text+0x20): undefined reference to `main'
CMake Error at cuda_tools_generated_pmalloc_cuda.cu.o.cmake:206
   (message):
  Error generating
   
   
  
 
 /apps/src/gromacs/gromacs-4.6.3/src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
   
   
make[2]: ***
   
  
 
 [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/./cuda_tools_generated_pmalloc_cuda.cu.o]
Error 1
make[1]: *** [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/all]
  Error
   2
make: *** [all] Error 2
   
Any help would be appreciated.
   
Regards,
Ahmed.
   


[gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Rajat Desikan
Dear All,
The settings that I mentioned above are from Klauda et al. for a POPE
membrane system. They can be found in charmm_npt.mdp on Lipidbook (link
below):
http://lipidbook.bioch.ox.ac.uk/package/show/id/48.html

Is there any reason not to use their .mdp parameters for a membrane-protein
system? Justin's recommendation is highly valued since I am using his
force field. Justin, could you please comment?

To summarize:
Klauda et al. suggest:
rlist           = 1.0
rlistlong       = 1.4
rvdw_switch     = 0.8
vdwtype         = Switch
coulombtype     = pme
DispCorr        = EnerPres  ; only useful with reaction-field and pme or pppm
rcoulomb        = 1.0
rcoulomb_switch = 0.0
rvdw            = 1.2

Justin's recommendation (per mail above)
vdwtype = switch
rlist = 1.2
rlistlong = 1.4
rvdw = 1.2
rvdw-switch = 1.0
rcoulomb = 1.2
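
For a GPU run (Verlet cut-off scheme), Mark's suggestion further down this
thread would translate into roughly the following fragment (a sketch assembled
from his message, not a validated CHARMM membrane setup; in particular the LJ
treatment is not equivalent to the CHARMM switching scheme):

cutoff-scheme  = Verlet
vdwtype        = cut-off
vdw-modifier   = Potential-shift-Verlet
coulombtype    = PME
rlist          = 1.2
rvdw           = 1.2
rcoulomb       = 1.2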


On Fri, Nov 8, 2013 at 2:20 AM, Gianluca Interlandi [via GROMACS] 
ml-node+s5086n5012329...@n6.nabble.com wrote:

 Hi Mark!

 I think that this is the paper that you are referring to:

 dx.doi.org/10.1021/ct900549r

 Also for your reference, these are the settings that Justin recommended
 using with CHARMM in gromacs:

 vdwtype = switch
 rlist = 1.2
 rlistlong = 1.4
 rvdw = 1.2
 rvdw-switch = 1.0
 rcoulomb = 1.2

 As you mention the switch function in gromacs is different than in CHARMM
 but it appears that the difference is very small.

 Gianluca

 On Thu, 7 Nov 2013, Mark Abraham wrote:

  Reasonable, but CPU-only is not 100% conforming either; IIRC the CHARMM
  switch differs from the GROMACS switch (Justin linked a paper here with
 the
  CHARMM switch description a month or so back, but I don't have that link
 to
  hand).
 
  Mark
 
 
  On Thu, Nov 7, 2013 at 8:45 PM, rajat desikan wrote:

 
  Thank you, Mark. I think that running it on CPUs is a safer choice at
  present.
 
 
  On Thu, Nov 7, 2013 at 9:41 PM, Mark Abraham wrote:
 
  Hi,

  It's not easy to be explicit. CHARMM wasn't parameterized with PME, so the
  original paper's coulomb settings can be taken with a grain of salt for use
  with PME - others' success in practice should be a guideline here. The good
  news is that the default GROMACS PME settings are pretty good for at least
  some problems (http://pubs.acs.org/doi/abs/10.1021/ct4005068), and the GPU
  auto-tuning of parameters in 4.6 is designed to preserve the right sorts of
  things.

  LJ is harder because it would make good sense to preserve the way CHARMM
  did it, but IIRC you can't use something equivalent to the CHARMM LJ shift
  with the Verlet kernels, either natively or with a table. We hope to fix
  that in 5.0, but code is not written yet. I would probably use vdwtype =
  cut-off, vdw-modifier = potential-shift-verlet and rcoulomb=rlist=rvdw=1.2,
  but I don't run CHARMM simulations for a living ;-)

  Mark
 
 
  On Thu, Nov 7, 2013 at 1:42 PM, Rajat Desikan wrote:
 
  Dear All,
 
  Any suggestions?
 
  Thank you.
 
 
 
 
 
 
  --
  Rajat Desikan (Ph.D Scholar)
  Prof. K. Ganapathy Ayappa's Lab (no 13),
  Dept. of Chemical Engineering,
  Indian Institute of Science, Bangalore

[gmx-users] after using ACPYPE , GROMACS OPLS itp file generated an atom type like opls_x with mass 0.000

2013-11-07 Thread aditya sarma
Hi,
I was trying to generate a topology for a poly(p-phenylene vinylene) polymer
for the OPLS force field using acpype. The .itp file I got has the atom type
opls_x with mass 0.000. Is there any way to rectify this?

After reading through how acpype works, I found that this is one of the known
possible errors, but no solution was given.

This is part of the generated .itp file:

 [ atoms ]
;   nr      type  resi  res  atom  cgnr     charge      mass  ; qtot   bond_type
     1  opls_145     1  LIG     C     1  -0.117500  12.01100  ; qtot -0.118  CA
     2  opls_145     1  LIG    C1     2  -0.055800  12.01100  ; qtot -0.173  CA
     3  opls_145     1  LIG    C2     3  -0.117500  12.01100  ; qtot -0.291  CA
     4  opls_145     1  LIG    C3     4  -0.131000  12.01100  ; qtot -0.422  CA
     5  opls_145     1  LIG    C4     5  -0.125000  12.01100  ; qtot -0.547  CA
     6  opls_145     1  LIG    C5     6  -0.131000  12.01100  ; qtot -0.678  CA
     7  opls_x       1  LIG    C6     7  -0.099200   0.0      ; qtot -0.777  x
     8  opls_x       1  LIG    C7     8  -0.105200   0.0      ; qtot -0.882  x
     9  opls_145     1  LIG    C8     9  -0.048800  12.01100  ; qtot -0.931  CA
    10  opls_145     1  LIG    C9    10  -0.119500  12.01100  ; qtot -1.051  CA
    11  opls_145     1  LIG   C10    11  -0.118500  12.01100  ; qtot -1.169  CA
    12  opls_145     1  LIG   C11    12  -0.051800  12.01100  ; qtot -1.221  CA
    13  opls_145     1  LIG   C12    13  -0.118500  12.01100  ; qtot -1.339  CA
    14  opls_145     1  LIG   C13    14  -0.119500  12.01100  ; qtot -1.459  CA
    15  opls_x       1  LIG   C14    15  -0.101200   0.0      ; qtot -1.560  x
    16  opls_x       1  LIG   C15    16  -0.103200   0.0      ; qtot -1.663  x
    17  opls_145     1  LIG   C16    17  -0.049800  12.01100  ; qtot -1.713  CA
    18  opls_145     1  LIG   C17    18  -0.119500  12.01100  ; qtot -1.833  CA
    19  opls_145     1  LIG   C18    19  -0.119000  12.01100  ; qtot -1.952  CA
    20  opls_145     1  LIG   C19    20  -0.050800  12.01100  ; qtot -2.002  CA
    21  opls_145     1  LIG   C20    21  -0.119000  12.01100  ; qtot -2.121  CA
    22  opls_145     1  LIG   C21    22  -0.119500  12.01100  ; qtot -2.241  CA
    23  opls_x       1  LIG   C22    23  -0.102200   0.0      ; qtot -2.343  x
    24  opls_x       1  LIG   C23    24  -0.102200   0.0      ; qtot -2.445  x
    25  opls_145     1  LIG   C24    25  -0.050800  12.01100  ; qtot -2.496  CA
    26  opls_145     1  LIG   C25    26  -0.119000  12.01100  ; qtot -2.615  CA
    27  opls_145     1  LIG   C26    27  -0.119000  12.01100  ; qtot -2.734  CA
    28  opls_145     1  LIG   C27    28  -0.050800  12.01100  ; qtot -2.785  CA
    29  opls_145     1  LIG   C28    29  -0.119000  12.01100  ; qtot -2.904  CA
    30  opls_145     1  LIG   C29    30  -0.119000  12.01100  ; qtot -3.023  CA
    31  opls_x       1  LIG   C30    31  -0.102200   0.0      ; qtot -3.125  x
    32  opls_x       1  LIG   C31    32  -0.102200   0.0      ; qtot -3.227  x
    33  opls_145     1  LIG   C32    33  -0.050800  12.01100  ; qtot -3.278  CA
    34  opls_145     1  LIG   C33    34  -0.119000  12.01100  ; qtot -3.397  CA
    35  opls_145     1  LIG   C34    35  -0.119000  12.01100  ; qtot -3.516  CA
    36  opls_145     1  LIG   C35    36  -0.050800  12.01100  ; qtot -3.567  CA
    37  opls_145     1  LIG   C36    37  -0.119000  12.01100  ; qtot -3.686  CA
    38  opls_145     1  LIG   C37    38  -0.119000  12.01100  ; qtot -3.805  CA
    39  opls_x       1  LIG   C38    39  -0.102200   0.0      ; qtot -3.907  x
    40  opls_x       1  LIG   C39    40  -0.102200   0.0      ; qtot -4.009  x
    41  opls_145     1  LIG   C40    41  -0.050800  12.01100  ; qtot -4.060  CA
    42  opls_145     1  LIG   C41    42  -0.119000  12.01100  ; qtot -4.179  CA
    43  opls_145     1  LIG   C42    43  -0.119500  12.01100  ; qtot -4.299  CA
    44  opls_145     1  LIG   C43    44  -0.049800  12.01100  ; qtot -4.348  CA
    45  opls_145     1  LIG   C44    45  -0.119500  12.01100  ; qtot -4.468  CA
    46  opls_145     1  LIG   C45    46  -0.119000  12.01100  ; qtot -4.587  CA
    47  opls_x       1  LIG   C46    47  -0.103200   0.0      ; qtot -4.690  x
    48  opls_x       1  LIG   C47    48  -0.101200   0.0      ; qtot -4.791  x
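
(For what it's worth, the opls_x entries are the vinylene C=C carbons that
acpype could not map onto an OPLS atom type. One possible hand fix -- my
assumption, not something acpype or this thread provides -- is to replace
opls_x with the appropriate OPLS-AA sp2/alkene carbon type from
oplsaa.ff/atomtypes.atp and restore the carbon mass, e.g. for atom 7:

    ; before (acpype output)
     7  opls_x    1  LIG   C6     7  -0.099200   0.0
    ; after (hand-edited sketch; opls_142, "alkene C (RH-C=)", is only my guess
    ; for the CH= linkage -- check atomtypes.atp/ffnonbonded.itp before using it)
     7  opls_142  1  LIG   C6     7  -0.099200  12.01100

The corresponding bonded types and the charges should of course also be
checked against the force field.)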


[gmx-users] mpi segmentation error in continuation of REMD simulation with gromacs 4.5.5

2013-11-07 Thread Qin Qiao
Dear all,

I'm trying to continue a REMD simulation using GROMACS 4.5.5 in the NPT
ensemble, and I got the following errors when I tried to use 2 cores per
replica:

[node-ib-4.local:mpi_rank_25][error_sighandler] Caught error: Segmentation
fault (signal 11)
[node-ib-13.local:mpi_rank_63][error_sighandler] Caught error: Segmentation
fault (signal 11)
...


Surprisingly, it worked fine when I tried to use only 1 core per replica.
I have no idea what caused the problem. Could you give me some advice?

P.S. The command I used is
srun .../gromacs-4.5.5-mpi-slurm/bin/mdrun_infiniband -s remd_.tpr -multi
48 -replex 1000 -deffnm remd_ -cpi remd_.cpt -append

Best
Qin


[gmx-users] Ligand simulation

2013-11-07 Thread Kavyashree M
Dear users,

Although this topic has been extensively discussed
on the list previously, I am unclear about the solution
to the problem.

While running a ligand-in-water simulation (EM) with RF-0
I get the following messages:

--
Number of degrees of freedom in T-Coupling group rest is 24531.00
Largest charge group radii for Van der Waals: 0.458, 0.356 nm
Largest charge group radii for Coulomb:   0.458, 0.356 nm

NOTE 1 [file ../em.mdp]:
  The sum of the two largest charge group radii (0.814337) is larger than
  rlist (1.60) - rvdw (1.00)

NOTE 2 [file ../em.mdp]:
  The sum of the two largest charge group radii (0.814337) is larger than
  rlist (1.60) - rcoulomb (1.40)
--
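
(For what it's worth, the numbers in these notes are just the check grompp
performs: 0.458 + 0.356 = 0.814 nm, which exceeds rlist - rvdw = 1.60 - 1.00
= 0.60 nm and rlist - rcoulomb = 1.60 - 1.40 = 0.20 nm. My reading -- not
stated in this thread -- is that with charge groups this large, atom pairs
inside the cut-off can fall outside the neighbour list between list updates,
which is why grompp warns.)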

But I continued with NVT and NPT, where I got the
same notes.
NVT -
--
Largest charge group radii for Van der Waals: 0.509, 0.487 nm
Largest charge group radii for Coulomb:   0.509, 0.487 nm

NOTE 1 [file ../nvt.mdp]:
  The sum of the two largest charge group radii (0.996343) 

NOTE 2 [file ../nvt.mdp]:
  The sum of the two largest charge group radii (0.996343) .

--
for NPT -

Number of degrees of freedom in T-Coupling group System is 16357.00
Largest charge group radii for Van der Waals: 0.787, 0.684 nm
Largest charge group radii for Coulomb:   0.787, 0.684 nm

NOTE 1 [file ../npt.mdp]:
  The sum of the two largest charge group radii (1.470764) .

NOTE 2 [file ../npt.mdp]:
  The sum of the two largest charge group radii (1.470764) .
--

For MD -
--
Largest charge group radii for Van der Waals: 0.671, 0.605 nm
Largest charge group radii for Coulomb:   0.671, 0.605 nm

NOTE 1 [file md.mdp]:
  The sum of the two largest charge group radii (1.276104) ..

NOTE 2 [file md.mdp]:
  The sum of the two largest charge group radii (1.276104) .
--


The ligand is not broken and all of it is inside the water at the
beginning of the simulation; the topology is OK, because a protein-ligand
simulation with PME ran fine.

Any suggestions are welcome.

Thank you
Regards
kavya