Re: [gmx-users] help about ibi

2013-11-13 Thread Mark Abraham
Hi,

Something went wrong earlier in your workflow. Check your log files, etc.
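For example, you can locate the bad coordinates directly (a minimal sketch;
the step_001 path is taken from your description):

  grep -n nan step_001/confout.gro | head

A 'nan' coordinate means the system became unstable earlier in the run; the
missing-'.' error is just the parser choking on those nan fields.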

Mark
On Nov 13, 2013 3:57 AM, guozhicheng222 guozhicheng...@126.com wrote:

 Hi:

 When I am running the ibi procedure, I get the following error message:



  A coordinate in file conf.gro does
 not contain a '.'

 Additionally, I checked the coordinate file confout.gro in step_001. It
 showed 'nan' values in confout.gro.

 What is wrong with this? How can I fix it? I would very much appreciate
 anyone's help.

 Best Wishes!

 Zhicheng Guo


Re: [gmx-users] Recompile Gromacs 4.6.3

2013-11-13 Thread Mark Abraham
If you have just modified one file, running make again is sufficient. I would
recommend not installing the modified version (since you can run the
build/src/kernel/mdrun binary directly), or, if you must install, using the
suffixing options available in the ccmake advanced mode.
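For example (a minimal sketch, assuming an out-of-source build directory
named build; the _mod suffix is just an illustration):

  cd build
  make                           # recompiles only what depends on minimize.c
  ./src/kernel/mdrun -version    # run the rebuilt binary in place

  # or, if you must install, give the modified build its own suffix:
  cmake .. -DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_mod -DGMX_LIBS_SUFFIX=_mod
  make install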

Mark
On Nov 13, 2013 2:48 PM, Jheng Wei Li lijheng...@gmail.com wrote:

 Hello, all
 I intend to make some modification on minimize.c in mdlib.
 Do I need to do cmake make make install all over again?
 Or is there a quick way for recompiling?

 Thanks for any tips.

 JhengWei Li
 Institute of Atomic and Molecular Sciences,
 Academia Sinica, Taipei 106, Taiwan


Re: [gmx-users] error while running pdb2gmx

2013-11-13 Thread Mark Abraham
Probably the default behaviour of pdb2gmx for termini is not appropriate
for your input. Use pdb2gmx -ter and choose wisely.
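For example (a minimal sketch; the input filename is hypothetical):

  pdb2gmx -f protein.pdb -ter

and pick terminus types that match your construct when prompted; a standard
C-terminal ALA carrying an OXT atom needs a carboxyl terminus, not None.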

Mark
On Nov 13, 2013 12:03 PM, hasthi durgs7kr...@gmail.com wrote:

 Hello GROMACS users,
   I have a phosphorylated serine residue in my
 protein of interest (140 residues). Now when I run pdb2gmx I get the
 following error:

 Atom OXT in residue ALA 140 was not found in rtp entry ALA with 6 atoms
 while sorting atoms.

 I checked aminoacid.rtp; there is no separate entry for OXT there. When I
 did the simulation for the same protein prior to phosphorylation I did not
 get this error. What is the reason for this, and how should I rectify it?

 Please help me with this regard


 Regards,
 Hasthi


[gmx-users] GROMACS 4.6.4 is released

2013-11-13 Thread Mark Abraham
Hi GROMACS users,

GROMACS 4.6.4 is officially released. It contains numerous bug fixes, and
some noteworthy simulation performance enhancements (particularly with
GPUs!). We encourage all users to upgrade their installations from earlier
4.6-era releases.

You can find the code, manual, release notes, installation instructions and
test suite at the links below. Note that some tests have been added, and the
manual has changed only in chapter 7 and appendix D.

ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.4.tar.gz
ftp://ftp.gromacs.org/pub/manual/manual-4.6.4.pdf
http://www.gromacs.org/About_Gromacs/Release_Notes/Versions_4.6.x#Release_notes_for_4.6.4
http://www.gromacs.org/Documentation/Installation_Instructions
http://gromacs.googlecode.com/files/regressiontests-4.6.4.tar.gz
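
For a typical build from the tarball, something like this should work (a
minimal sketch per the installation instructions linked above; adjust the
job count and install prefix to taste):

  tar xzf gromacs-4.6.4.tar.gz
  cd gromacs-4.6.4
  mkdir build && cd build
  cmake .. -DGMX_BUILD_OWN_FFTW=ON
  make -j 4
  sudo make install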

Happy simulating!

The GROMACS team


Re: [gmx-users] About Compiler Compatibility for Gromacs 4.6.2 Compilation

2013-11-10 Thread Mark Abraham
On Nov 10, 2013 10:04 AM, Mark Abraham mark.j.abra...@gmail.com wrote:

 Yes (unless you are using AMD CPUs), per the installation instructions,
 although you will probably do slightly better with GCC 4.7, and should not
 do a new install of 4.6.2 after 4.6.3 is released. In particular, 4.6.2 has
 an affinity-related performance regression when using external MPI
 libraries.

 Mark
 On Nov 10, 2013 7:55 AM, vidhya sankar scvsankar_...@yahoo.com wrote:



 Dear Justin and Mark Thank you for your Previous reply
 Can I use the following Intel compiler for GROMACS 4.6.2
 on CentOS Linux?

  Intel® C++ Composer XE 2013 for Linux

 It includes Intel® C++ Compiler, Intel® Integrated Performance Primitives
 7.1, Intel® Math Kernel Library 11.0,
 Intel Cilk™ Plus, and the Intel® Threading Building Blocks (Intel® TBB).



Re: mdrun on 8-core AMD + GTX TITAN (was: Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs)

2013-11-10 Thread Mark Abraham
On Sun, Nov 10, 2013 at 5:28 AM, Dwey Kauffman mpi...@gmail.com wrote:

 Hi Szilard,

  Thank you very much for your suggestions.

 Actually, I was jumping to conclusions too early, as you mentioned AMD
 cluster, I assumed you must have 12-16-core Opteron CPUs. If you
 have an 8-core (desktop?) AMD CPU, then you may not need to run more
 than one rank per GPU.

 Yes, we have independent clusters of AMD, AMD Opteron, and Intel Core i7.
 All nodes of the three clusters have (at least) 1 GPU card installed. I
 have run the same test on these three clusters.

 Let's focus on a basic scaling issue: one GPU vs. two GPUs within the same
 node with an 8-core AMD CPU.
 Using 1 GPU, we get a performance of ~32 ns/day. Using two GPUs, we gain
 not much more (~38.5 ns/day), about 20% more performance. However, this is
 not consistent: in some tests I saw only 2-5% more, which really surprised
 me.


Neither run had a PP-PME work distribution suitable for the hardware it was
running on (and fixing that for each run requires opposite changes). Adding
a GPU and hoping to see scaling requires that there be proportionately more
GPU work available to do, *and* enough absolute work to do. mdrun tries to
do this, and reports early in the log file, which is one of the reasons
Szilard asked to see whole log files - please use a file sharing service to
do that.

As you can see, this test was made on the same node, so networking is not
 involved. Can the performance be improved by, say, 50% when 2 GPUs are
 used on a general task? If yes, how?

 Indeed, as Richard pointed out, I was asking for *full* logs; these
 summaries can't tell much. The table above the summary entitled
 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G, as well as
 other reported information across the log file, is what I need to make
 an assessment of your simulations' performance.

 Please see below.

 However, in your case I suspect that the
 bottleneck is multi-threaded scaling on the AMD CPUs and you should
 probably decrease the number of threads per MPI rank and share GPUs
 between 2-4 ranks.

 After testing all three clusters, I found it may NOT be an issue of AMD
 CPUs: Intel CPUs have the SAME scaling issue.

 However, I am curious as to how you justify the setup of 2-4 ranks sharing
 GPUs. Can you please explain it a bit more?


NUMA effects on multi-socket AMD processors are particularly severe; the
way GROMACS uses OpenMP is not well suited to them. Using a rank (or two)
per socket will greatly reduce those effects, but introduces different
algorithmic overhead from the need to do DD and explicitly communicate
between ranks. (You can see the latter in your .log file snippets below.)
Also, that means the parcel of PP work available from a rank to give to the
GPU is smaller, which is the opposite of what you'd like for GPU
performance and/or scaling. We are working on a general solution for this
and lots of related issues in the post-5.0 space, but there is a very hard
limitation imposed by the need to amortize the cost of CPU-GPU transfer by
having lots of PP work available to do.
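
As a hedged sketch of that rank-per-socket idea, on a node with two 4-core
sockets and one GPU (counts depend on your actual hardware):

  mpirun -np 2 mdrun_mpi -ntomp 4 -gpu_id 00 -pin on

i.e. two ranks sharing GPU 0, each with four OpenMP threads, with mdrun
pinning threads so each rank ideally stays on its own socket.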

You could try running
 mpirun -np 4 mdrun -ntomp 2 -gpu_id 0011
 but I suspect this won't help because your scaling issue

 Your guess is correct, but why is that? It is worse: the more nodes are
 involved in a task, the worse the performance.


  in my
 experience even reaction field runs don't scale across nodes with 10G
 ethernet if you have more than 4-6 ranks per node trying to
 communicate (let alone with PME).

 What does it mean, 'let alone with PME'? How does one do that? With mdrun?
 I do know that mdrun -npme specifies the number of PME processes.


If using PME (rather than RF), network demands are more severe.
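
On the -npme point: with enough ranks mdrun picks a number of dedicated PME
ranks itself, and g_tune_pme can find a good split empirically. A hedged
sketch (the -npme 4 value is only illustrative):

  g_tune_pme -np 16 -s topol.tpr                  # benchmarks several -npme settings
  mpirun -np 16 mdrun_mpi -npme 4 -s topol.tpr    # then use the winner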


 Thank you.

 Dwey



 ### One GPU ###

  R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

  Computing:          Nodes  Th.  Count   Wall t (s)     G-Cycles      %
 -----------------------------------------------------------------------
  Neighbor search     1      8      11       431.817    13863.390    1.6
  Launch GPU ops.     1      8     501       472.906    15182.556    1.7
  Force               1      8     501      1328.611    42654.785    4.9
  PME mesh            1      8     501     11561.327   371174.090   42.8
  Wait GPU local      1      8     501      6888.008   221138.111   25.5
  NB X/F buffer ops.  1      8     991      1216.499    39055.455    4.5
  Write traj.         1      8    1030        12.741      409.039    0.0
  Update              1      8     501      1696.358    54461.226    6.3
  Constraints         1      8     501      1969.726    63237.647    7.3
  Rest                1                     1458.820    46835.133    5.4
 -----------------------------------------------------------------------
  Total               1                    27036.812   868011.431  100.0

Re: [gmx-users] problem in running mdrun command

2013-11-10 Thread Mark Abraham
Hi,

There's nothing GROMACS-specific here - something about your MPI
installation, configuration or use is pretty wrong, but we can't help work
out what.
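
A quick sanity check is to run something trivial under the same mpirun (a
minimal sketch):

  mpirun -np 2 hostname

If that already emits MCA warnings like the ones below, it suggests your
OpenMPI installation mixes components from different versions (the MCA
v1.0.0 vs v2.0.0 messages), independent of GROMACS.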

Mark


On Sun, Nov 10, 2013 at 12:31 PM, S.Chandra Shekar 
chandrashe...@iisertvm.ac.in wrote:

 Dear all

 I encountered a problem while running the command mdrun_mpi -v -deffnm em
 in GROMACS.

 I am new to GROMACS. I just ran a test calculation, *simulation of
 lysozyme in water*. I am able to generate the .gro and .tpr files, but in
 the final step I got the following error.

  Thanks in advance.


 [localhost.localdomain:23122] mca: base: component_find: paffinity
 mca_paffinity_linux uses an MCA interface that is not recognized
 (component MCA v1.0.0 != supported MCA v2.0.0) -- ignored
 [localhost.localdomain:23123] mca: base: component_find: paffinity
 mca_paffinity_linux uses an MCA interface that is not recognized
 (component MCA v1.0.0 != supported MCA v2.0.0) -- ignored
 [localhost.localdomain:23123] mca: base: component_find: ras
 mca_ras_dash_host uses an MCA interface that is not recognized (component
 MCA v1.0.0 != supported MCA v2.0.0) -- ignored
 [localhost.localdomain:23123] mca: base: component_find: ras
 mca_ras_gridengine uses an MCA interface that is not recognized
 (component MCA v1.0.0 != supported MCA v2.0.0) -- ignored
 [localhost.localdomain:23123] mca: base: component_find: ras
 mca_ras_localhost uses an MCA interface that is not recognized (component
 MCA v1.0.0 != supported MCA v2.0.0) -- ignored
 [localhost.localdomain:23123] mca: base: component_find: errmgr
 mca_errmgr_hnp uses an MCA interface that is not recognized (component
 MCA v1.0.0 != supported MCA v2.0.0) -- ignored
 [localhost.localdomain:23123] mca: base: component_find: errmgr
 mca_errmgr_orted uses an MCA interface that is not recognized (component
 MCA v1.0.0 != supported MCA v2.0.0) -- ignored
 [localhost.localdomain:23123] mca: base: component_find: errmgr
 mca_errmgr_proxy uses an MCA interface that is not recognized (component
 MCA v1.0.0 != supported MCA v2.0.0) -- ignored
 [localhost.localdomain:23123] mca: base: component_find: iof
 mca_iof_proxy uses an MCA interface that is not recognized (component MCA
 v1.0.0 != supported MCA v2.0.0) -- ignored
 [localhost.localdomain:23123] mca: base: component_find: iof mca_iof_svc
 uses an MCA interface that is not recognized (component MCA v1.0.0 !=
 supported MCA v2.0.0) -- ignored
 [localhost.localdomain:23122] mca: base: component_find: rcache
 mca_rcache_rb uses an MCA interface that is not recognized (component MCA
 v1.0.0 != supported MCA v2.0.0) -- ignored
 [localhost:23122] *** Process received signal ***
 [localhost:23122] Signal: Segmentation fault (11)
 [localhost:23122] Signal code: Address not mapped (1)
 [localhost:23122] Failing at address: 0x4498
 [localhost:23122] [ 0] /lib64/libpthread.so.0() [0x3fcee0f500]
 [localhost:23122] [ 1] /usr/local/lib/libmpi.so.1(PMPI_Comm_size+0x4e)
 [0x2acc6d93727e]
 [localhost:23122] [ 2]
 /usr/local/gromacs/bin/../lib/libgmx_mpi.so.8(gmx_setup+0x32)
 [0x2acc6d195e02]
 [localhost:23122] [ 3]
 /usr/local/gromacs/bin/../lib/libgmx_mpi.so.8(init_par+0x51)
 [0x2acc6d234251]
 [localhost:23122] [ 4] mdrun_mpi(cmain+0x11f9) [0x435799]
 [localhost:23122] [ 5] /lib64/libc.so.6(__libc_start_main+0xfd)
 [0x3fcea1ecdd]
 [localhost:23122] [ 6] mdrun_mpi() [0x406ee9]
 [localhost:23122] *** End of error message ***
 Segmentation fault (core dumped)


Re: [gmx-users] Reproducing results with independent runs

2013-11-09 Thread Mark Abraham
On Sat, Nov 9, 2013 at 9:36 AM, alex.bjorling alex.bjorl...@gmail.com wrote:

 Dear users,

 I am investigating protein crystal packing artifacts by doing equilibrium
 simulations starting from a crystal structure. I would like to know if the
 relaxations I see are reproducible, in the sense that many simulations with
 independent velocities give the same general result.

 My plan is to do only one set of (first NVT then NPT) equilibrations with
 position restraints. Then, I thought I'd do a shorter NPT run with position
 restraints, with more frequent output and using the trr snapshots as
 starting points for production runs.

 The only question then is how far apart these snapshots need to be to
 guarantee independent velocities. Attached is the velocity autocorrelation
 for the Protein group. It seems to me that using snapshots 1ps apart would
 do it, since the autocorrelation has decayed by then.

 Is this a valid approach?


That the observations are uncorrelated (because the autocorrelation time
has elapsed) does not imply that trajectories started from successive
snapshots would be independent - in the absence of floating-point or
load-balancing artefacts leading to numerical divergence, the separate
simulations would nearly reproduce each other! Since you have to wait for a
period of divergence anyway, you might as well generate new velocities
after an initial stage of equilibration, equilibrate further, and have no
independence question to answer.
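
A minimal sketch of the relevant .mdp lines for each production start (the
values are illustrative):

  gen_vel      = yes
  gen_temp     = 300
  gen_seed     = -1    ; or an explicit, distinct seed per replicate
  continuation = no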

Mark


Re: [gmx-users] mpi segmentation error in continuation of REMD simulation with gromacs 4.5.5

2013-11-08 Thread Mark Abraham
Hi,

That shouldn't happen if your MPI library is working (have you tested it
with other programs?) and configured properly. It's possible this is a
known bug, so please let us know if you can reproduce it in the latest
releases.

Mark


On Fri, Nov 8, 2013 at 6:55 AM, Qin Qiao qiaoqi...@gmail.com wrote:

 Dear all,

 I'm trying to continue a REMD simulation using GROMACS 4.5.5 under the NPT
 ensemble, and I got the following errors when I tried to use 2 cores per
 replica:

 [node-ib-4.local:mpi_rank_25][error_sighandler] Caught error: Segmentation
 fault (signal 11)
 [node-ib-13.local:mpi_rank_63][error_sighandler] Caught error: Segmentation
 fault (signal 11)
 ...
 

 Surprisingly, it worked fine when I tried to use only 1 core per replica...
 I have no idea what caused the problem. Could you give me some advice?

 ps. the command I used is
 srun .../gromacs-4.5.5-mpi-slurm/bin/mdrun_infiniband -s remd_.tpr -multi
 48 -replex 1000 -deffnm remd_ -cpi remd_.cpt -append

 Best
 Qin


Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-08 Thread Mark Abraham
On Fri, Nov 8, 2013 at 12:02 AM, Jones de Andrade johanne...@gmail.com wrote:

 Really?


Of course. With OpenMP, mdrun gets to use all your cores for PME+bondeds+stuff
while the GPU does PP. Any version without OpenMP gets to use one core per
domain, which is bad.


 And what about gcc+MPI? Should I expect any improvement?


Run how and compared with what? Using an external MPI library within a
single node is a complete waste of time compared with the alternatives
(thread-MPI, OpenMP, or both).
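
For example, on one node these are the kinds of alternatives meant (a hedged
sketch; thread and GPU counts depend on your machine):

  mdrun -ntmpi 2 -ntomp 4 -gpu_id 01    # two thread-MPI ranks + OpenMP, two GPUs
  mdrun -ntomp 8 -gpu_id 0              # one rank using all cores via OpenMP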

Mark




 On Thu, Nov 7, 2013 at 6:51 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  You will do much better with gcc+openmp than icc-openmp!
 
  Mark
 
 
  On Thu, Nov 7, 2013 at 9:17 PM, Jones de Andrade johanne...@gmail.com
  wrote:
 
   Did it a few days ago. Not so much of a problem here.
  
   But I compiled everything, including fftw, with it. The only error I
 got
   was that I should turn off the separable compilation, and that the user
   must be in the group video.
  
   My settings are (yes, I know it should go better with OpenMP, but OpenMP
   goes horribly on our cluster, I don't know why):
  
   setenv CC  /opt/intel/bin/icc
   setenv CXX /opt/intel/bin/icpc
   setenv F77 /opt/intel/bin/ifort
   setenv CMAKE_PREFIX_PATH /storage/home/johannes/lib/fftw/vanilla/
   mkdir build
   cd build
   cmake .. -DGMX_GPU=ON -DCUDA_SEPARABLE_COMPILATION=OFF
   -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_OPENMP=OFF -DGMX_MPI=ON
   -DGMX_THREAD_MPI=OFF -DMPIEXEC_MAX_NUMPROCS=1024
 -DBUILD_SHARED_LIBS=OFF
   -DGMX_PREFER_STATIC_LIBS=ON
   -DCMAKE_INSTALL_PREFIX=/storage/home/johannes/bin/gromacs/vanilla/
   make
   make install
   cd ..
   rm -rf build
  
  
   On Thu, Nov 7, 2013 at 3:02 PM, Mark Abraham mark.j.abra...@gmail.com
   wrote:
  
icc and CUDA is pretty painful. I'd suggest getting latest gcc.
   
Mark
   
   
On Thu, Nov 7, 2013 at 2:42 PM, ahmed.sa...@stfc.ac.uk wrote:
   
 Hi,

 I'm having trouble compiling v 4.6.3 with GPU support using CUDA
   5.5.22.

 The configuration runs okay and I have made sure that I have set
  paths
 correctly.

 I'm getting errors:

 $ make
 [  0%] Building NVCC (Device) object

   
  
 
 src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
 icc: command line warning #10006: ignoring unknown option
  '-dumpspecs'
 /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crt1.o: In
 function `_start':
 (.text+0x20): undefined reference to `main'
 CMake Error at cuda_tools_generated_pmalloc_cuda.cu.o.cmake:206
(message):
   Error generating


   
  
 
 /apps/src/gromacs/gromacs-4.6.3/src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o


 make[2]: ***

   
  
 
 [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/./cuda_tools_generated_pmalloc_cuda.cu.o]
 Error 1
 make[1]: *** [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/all]
   Error
2
 make: *** [all] Error 2

 Any help would be appreciated.

 Regards,
 Ahmed.


Re: [gmx-users] mpi segmentation error in continuation of REMD simulation with gromacs 4.5.5

2013-11-08 Thread Mark Abraham
OK, thanks.

Please open a new issue at redmine.gromacs.org, describe your observations
as above, and upload a tarball of your input files.

Mark


On Fri, Nov 8, 2013 at 2:14 PM, Qin Qiao qiaoqi...@gmail.com wrote:

 On Fri, Nov 8, 2013 at 7:18 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  Hi,
 
  That shouldn't happen if your MPI library is working (have you tested it
  with other programs?) and configured properly. It's possible this is a
  known bug, so please let us know if you can reproduce it in the latest
  releases.
 
  Mark
 
 
  Hi,

 I installed different versions of GROMACS with the same MPI library.
 Surprisingly, the problem doesn't occur in gromacs-4.5.1, but persists in
 gromacs-4.6.3. The MPI version is MVAPICH2-1.9a for InfiniBand.

 Best,

 Qin

 On Fri, Nov 8, 2013 at 6:55 AM, Qin Qiao qiaoqi...@gmail.com wrote:
 
   Dear all,
  
   I'm trying to continue a REMD simulation using gromacs4.5.5 under NPT
   ensemble, and I got the following errors when I tried to use 2 cores
 per
   replica:
  
   [node-ib-4.local:mpi_rank_25][error_sighandler] Caught error:
  Segmentation
   fault (signal 11)
   [node-ib-13.local:mpi_rank_63][error_sighandler] Caught error:
  Segmentation
   fault (signal 11)
   ...
   
  
   Surprisingly, it worked fine when I tried to use only 1 core per
  replica..
   I have no idea what caused the problem.. Could you give me some advice?
  
   ps. the command I used is
   srun .../gromacs-4.5.5-mpi-slurm/bin/mdrun_infiniband -s remd_.tpr
  -multi
   48 -replex 1000 -deffnm remd_ -cpi remd_.cpt -append
  
   Best
   Qin


Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-07 Thread Mark Abraham
First, there is no value in ascribing problems to the hardware if the
simulation setup is not yet balanced, or not large enough to provide enough
atoms and long enough rlist to saturate the GPUs, etc. Look at the log
files and see what complaints mdrun makes about things like PME load
balance, and the times reported for different components of the simulation,
because these must differ between the two runs you report. diff -y -W 160
*log |less is your friend. Some (non-GPU-specific) background information
in part 5 here
http://www.gromacs.org/Documentation/Tutorials/GROMACS_USA_Workshop_and_Conference_2013/Topology_preparation%2c_%22What's_in_a_log_file%22%2c_basic_performance_improvements%3a_Mark_Abraham%2c_Session_1A
(though
I recommend the PDF version)

Mark


On Thu, Nov 7, 2013 at 6:34 AM, James Starlight jmsstarli...@gmail.com wrote:

 I've come to the conclusion that simulations with 1 or 2 GPUs give me the
 same performance:

 mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v -deffnm md_CaM_test
 mdrun -ntmpi 2 -ntomp 6 -gpu_id 0 -v -deffnm md_CaM_test

 Could it be due to too few CPU cores, or is additional RAM (this system has
 32 GB) needed? Or maybe some extra options are needed in the config?

 James




 2013/11/6 Richard Broadbent richard.broadben...@imperial.ac.uk

  Hi Dwey,
 
 
  On 05/11/13 22:00, Dwey Kauffman wrote:
 
  Hi Szilard,
 
  Thanks for your suggestions. I am indeed aware of this page. On an 8-core
  AMD with 1 GPU, I am very happy with its performance. See below. My
  intention is to obtain an even better one because we have multiple nodes.
 
  ### 8-core AMD with 1 GPU ###
  Force evaluation time GPU/CPU: 4.006 ms/2.578 ms = 1.554
  For optimal performance this ratio should be close to 1!

  NOTE: The GPU has 20% more load than the CPU. This imbalance causes
        performance loss; consider using a shorter cut-off and a finer
        PME grid.

                Core t (s)   Wall t (s)      (%)
  Time:         216205.510    27036.812    799.7
                              7h30:36
                  (ns/day)    (hour/ns)
  Performance:      31.956        0.751

  ### 8-core AMD with 2 GPUs ###

                Core t (s)   Wall t (s)      (%)
  Time:         178961.450    22398.880    799.0
                              6h13:18
                  (ns/day)    (hour/ns)
  Performance:      38.573        0.622
  Finished mdrun on node 0 Sat Jul 13 09:24:39 2013
 
 
  I'm almost certain that Szilard meant the lines above this that give the
  breakdown of where the time is spent in the simulation.
 
  Richard
 
 
   However, in your case I suspect that the
  bottleneck is multi-threaded scaling on the AMD CPUs and you should
  probably decrease the number of threads per MPI rank and share GPUs
  between 2-4 ranks.
 
 
 
  OK, but can you give an example of the mdrun command for an 8-core AMD
  with 2 GPUs?
  I will try to run it again.
  I will try to run it again.
 
 
   Regarding scaling across nodes, you can't expect much from gigabit
  ethernet - especially not from the cheaper cards/switches, in my
  experience even reaction field runs don't scale across nodes with 10G
  ethernet if you have more than 4-6 ranks per node trying to
  communicate (let alone with PME). However, on infiniband clusters we
  have seen scaling to 100 atoms/core (at peak).
 
 
   From your comments, it sounds like a cluster of AMD CPUs is difficult to
   scale across nodes in our current setup.
  
   Let's assume we install InfiniBand (20 or 40 Gb/s) in the same system of
   16 nodes of 8-core AMD with 1 GPU each. Considering the same AMD system,
   what is a good way to obtain better performance when we run a task across
   nodes? In other words, what does the mdrun_mpi command look like?
 
  Thanks,
  Dwey
 
 
 
 
  --
   View this message in context:
   http://gromacs.5086.x6.nabble.com/Gromacs-4-6-on-two-Titans-GPUs-tp5012186p5012279.html
   Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
 

Re: [gmx-users] nose-hoover vs v-rescale in implicit solvent

2013-11-07 Thread Mark Abraham
I think either is correct for practical purposes.

Mark


On Thu, Nov 7, 2013 at 8:41 AM, Gianluca Interlandi 
gianl...@u.washington.edu wrote:

 Does it make more sense to use nose-hoover or v-rescale when running in
 implicit solvent GBSA? I understand that this might be a matter of opinion.

 Thanks,

  Gianluca

 -
 Gianluca Interlandi, PhD gianl...@u.washington.edu
 +1 (206) 685 4435
 http://artemide.bioeng.washington.edu/

 Research Scientist at the Department of Bioengineering
 at the University of Washington, Seattle WA U.S.A.
 -


Re: [gmx-users] Re: single point calculation with gromacs

2013-11-07 Thread Mark Abraham
On Wed, Nov 6, 2013 at 4:07 PM, fantasticqhl fantastic...@gmail.com wrote:

 Dear Justin,

 I am sorry for the late reply. I still can't figure it out.


It isn't rocket science - your two .mdp files describe totally different
model physics. To compare things, change as few things as necessary to
generate the comparison. So use the same input .mdp file for the MD vs EM
single-point comparison, just changing the integrator line, and maybe
unconstrained-start (I forget the details). And be aware of
http://www.gromacs.org/Documentation/How-tos/Single-Point_Energy
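
A minimal sketch of a single-point evaluation per that How-to (filenames
hypothetical; the .mdp should have nsteps = 0):

  grompp -f md.mdp -c conf.gro -p topol.top -o sp.tpr
  mdrun -s sp.tpr -rerun conf.gro
  g_energy -f ener.edr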

Mark

Could you please send me the .mdp file that was used for your single-point
 calculations?
 I want to do some comparison and then solve the problem.
 Thanks very much!


 All the best,
 Qinghua

 --
 View this message in context:
 http://gromacs.5086.x6.nabble.com/single-point-calculation-with-gromacs-tp5012084p5012295.html
 Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Mark Abraham
Hi,

It's not easy to be explicit. CHARMM wasn't parameterized with PME, so the
original paper's coulomb settings can be taken with a grain of salt for use
with PME - others' success in practice should be a guideline here. The good
news is that the default GROMACS PME settings are pretty good for at least
some problems (http://pubs.acs.org/doi/abs/10.1021/ct4005068), and the GPU
auto-tuning of parameters in 4.6 is designed to preserve the right sorts of
things.

LJ is harder because it would make good sense to preserve the way CHARMM
did it, but IIRC you can't use something equivalent to the CHARMM LJ shift
with the Verlet kernels, either natively or with a table. We hope to fix
that in 5.0, but code is not written yet. I would probably use vdwtype =
cut-off, vdw-modifier = potential-shift-verlet and rcoulomb=rlist=rvdw=1.2,
but I don't run CHARMM simulations for a living ;-)
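
In .mdp terms that suggestion would look something like this (a hedged
sketch, not validated for CHARMM by me):

  cutoff-scheme = Verlet
  coulombtype   = PME
  rcoulomb      = 1.2
  rlist         = 1.2
  vdwtype       = cut-off
  vdw-modifier  = potential-shift-verlet
  rvdw          = 1.2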

Mark


On Thu, Nov 7, 2013 at 1:42 PM, Rajat Desikan rajatdesi...@gmail.com wrote:

 Dear All,

 Any suggestions?

 Thank you.

 --
 View this message in context:
 http://gromacs.5086.x6.nabble.com/CHARMM-mdp-settings-for-GPU-tp5012267p5012316.html
 Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-07 Thread Mark Abraham
icc and CUDA is pretty painful. I'd suggest getting latest gcc.

Mark


On Thu, Nov 7, 2013 at 2:42 PM, ahmed.sa...@stfc.ac.uk wrote:

 Hi,

 I'm having trouble compiling v 4.6.3 with GPU support using CUDA 5.5.22.

 The configuration runs okay and I have made sure that I have set paths
 correctly.

 I'm getting errors:

 $ make
 [  0%] Building NVCC (Device) object
 src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
 icc: command line warning #10006: ignoring unknown option '-dumpspecs'
 /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crt1.o: In
 function `_start':
 (.text+0x20): undefined reference to `main'
 CMake Error at cuda_tools_generated_pmalloc_cuda.cu.o.cmake:206 (message):
   Error generating

 /apps/src/gromacs/gromacs-4.6.3/src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o


 make[2]: ***
 [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/./cuda_tools_generated_pmalloc_cuda.cu.o]
 Error 1
 make[1]: *** [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/all] Error 2
 make: *** [all] Error 2

 Any help would be appreciated.

 Regards,
 Ahmed.



Re: [gmx-users] installing gromacs 4.6.1 with openmpi

2013-11-07 Thread Mark Abraham
Sounds like a non-GROMACS problem. I think you should explore configuring
OpenMPI correctly, and show you can run an MPI test program successfully.
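
For instance (a minimal sketch):

  mpirun -np 8 hostname

If that fails with the same gethostbyname error, the problem is hostname
resolution on the machine (e.g. Bioinf2 missing from /etc/hosts), not
GROMACS.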

Mark


On Thu, Nov 7, 2013 at 5:51 PM, niloofar niknam
niloofae_nik...@yahoo.com wrote:

 Dear gromacs users
 I have installed GROMACS 4.6.1 with CMake 2.8.12, FFTW 3.3.3 and
 openmpi-1.6.4 on a single machine with 8 cores (Red Hat Enterprise Linux
 6.1). During the OpenMPI installation (I used make -jN) and also in the
 GROMACS installation (I used make -jN), everything seemed OK, but when I
 try to use mpirun -np N mdrun I face this error:

 mpiexec failed: gethostbyname_ex failed for Bioinf2

 (I can run mdrun with just one CPU.) Any suggestion would be highly
 appreciated.
 thanks in advance,
 Niloofar


Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-07 Thread Mark Abraham
Reasonable, but CPU-only is not 100% conforming either; IIRC the CHARMM
switch differs from the GROMACS switch (Justin linked a paper here with the
CHARMM switch description a month or so back, but I don't have that link to
hand).

Mark


On Thu, Nov 7, 2013 at 8:45 PM, rajat desikan rajatdesi...@gmail.com wrote:

 Thank you, Mark. I think that running it on CPUs is a safer choice at
 present.


 On Thu, Nov 7, 2013 at 9:41 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  Hi,
 
  It's not easy to be explicit. CHARMM wasn't parameterized with PME, so
 the
  original paper's coulomb settings can be taken with a grain of salt for
 use
  with PME - others' success in practice should be a guideline here. The
 good
  news is that the default GROMACS PME settings are pretty good for at
 least
  some problems (http://pubs.acs.org/doi/abs/10.1021/ct4005068), and the
 GPU
  auto-tuning of parameters in 4.6 is designed to preserve the right sorts
 of
  things.
 
  LJ is harder because it would make good sense to preserve the way CHARMM
  did it, but IIRC you can't use something equivalent to the CHARMM LJ
 shift
  with the Verlet kernels, either natively or with a table. We hope to fix
  that in 5.0, but code is not written yet. I would probably use vdwtype =
  cut-off, vdw-modifier = potential-shift-verlet and
 rcoulomb=rlist=rvdw=1.2,
  but I don't run CHARMM simulations for a living ;-)
 
  Mark
 
 
  On Thu, Nov 7, 2013 at 1:42 PM, Rajat Desikan rajatdesi...@gmail.com
  wrote:
 
   Dear All,
  
   Any suggestions?
  
   Thank you.
  
    --
    View this message in context:
    http://gromacs.5086.x6.nabble.com/CHARMM-mdp-settings-for-GPU-tp5012267p5012316.html
    Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
 



 --
 Rajat Desikan (Ph.D Scholar)
 Prof. K. Ganapathy Ayappa's Lab (no 13),
 Dept. of Chemical Engineering,
 Indian Institute of Science, Bangalore


Re: [gmx-users] Re: LIE method with PME

2013-11-07 Thread Mark Abraham
If the long-range component of your electrostatics model is not
decomposable by group (which it isn't), then you can't use that with LIE.
See the hundreds of past threads on this topic :-)

Mark


On Thu, Nov 7, 2013 at 8:34 PM, Williams Ernesto Miranda Delgado 
wmira...@fbio.uh.cu wrote:

 Hello
 I performed MD simulations of several protein-ligand complexes and
 solvated ligands using PME for long-range electrostatics. I want to
 calculate the binding free energy using the LIE method, but when using
 g_energy I only get Coul-SR. How can I deal with the ligand-environment
 long-range electrostatic interaction using GROMACS? I have seen other
 discussion lists but I couldn't arrive at a solution. Could you please
 help me?
 Thank you
 Williams




Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-07 Thread Mark Abraham
You will do much better with gcc+openmp than icc-openmp!

Mark


On Thu, Nov 7, 2013 at 9:17 PM, Jones de Andrade johanne...@gmail.com wrote:

 Did it a few days ago. Not so much of a problem here.

 But I compiled everything, including fftw, with it. The only error I got
 was that I should turn off the separable compilation, and that the user
 must be in the group video.

 My settings are (yes, I know it should go better with OpenMP, but OpenMP
 goes horribly on our cluster, I don't know why):

 setenv CC  /opt/intel/bin/icc
 setenv CXX /opt/intel/bin/icpc
 setenv F77 /opt/intel/bin/ifort
 setenv CMAKE_PREFIX_PATH /storage/home/johannes/lib/fftw/vanilla/
 mkdir build
 cd build
 cmake .. -DGMX_GPU=ON -DCUDA_SEPARABLE_COMPILATION=OFF
 -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_OPENMP=OFF -DGMX_MPI=ON
 -DGMX_THREAD_MPI=OFF -DMPIEXEC_MAX_NUMPROCS=1024 -DBUILD_SHARED_LIBS=OFF
 -DGMX_PREFER_STATIC_LIBS=ON
 -DCMAKE_INSTALL_PREFIX=/storage/home/johannes/bin/gromacs/vanilla/
 make
 make install
 cd ..
 rm -rf build


 On Thu, Nov 7, 2013 at 3:02 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  icc and CUDA is pretty painful. I'd suggest getting latest gcc.
 
  Mark
 
 
  On Thu, Nov 7, 2013 at 2:42 PM, ahmed.sa...@stfc.ac.uk wrote:
 
   Hi,
  
   I'm having trouble compiling v 4.6.3 with GPU support using CUDA
 5.5.22.
  
   The configuration runs okay and I have made sure that I have set paths
   correctly.
  
   I'm getting errors:
  
   $ make
   [  0%] Building NVCC (Device) object
  
 
 src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
   icc: command line warning #10006: ignoring unknown option '-dumpspecs'
   /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crt1.o: In
   function `_start':
   (.text+0x20): undefined reference to `main'
   CMake Error at cuda_tools_generated_pmalloc_cuda.cu.o.cmake:206
  (message):
 Error generating
  
  
 
 /apps/src/gromacs/gromacs-4.6.3/src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
  
  
   make[2]: ***
  
 
 [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/./cuda_tools_generated_pmalloc_cuda.cu.o]
   Error 1
   make[1]: *** [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/all]
 Error
  2
   make: *** [all] Error 2
  
   Any help would be appreciated.
  
   Regards,
   Ahmed.
  


Re: [gmx-users] Re: LIE method with PME

2013-11-07 Thread Mark Abraham
I'd at least use RF! Use a cut-off consistent with the force field
parameterization. And hope the LIE correlates with reality!
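
A hedged sketch of such a rerun (filenames hypothetical; pick rcoulomb and
epsilon_rf to match the force field):

  ; rerun.mdp fragment
  coulombtype = Reaction-Field
  rcoulomb    = 1.4
  epsilon_rf  = 78
  energygrps  = Ligand Protein SOL

  grompp -f rerun.mdp -c conf.gro -p topol.top -o rerun.tpr
  mdrun -s rerun.tpr -rerun traj.xtc -e rerun.edr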

Mark
On Nov 7, 2013 10:39 PM, Williams Ernesto Miranda Delgado 
wmira...@fbio.uh.cu wrote:

 Thank you Mark
 What do you think about making a rerun on the trajectories generated
 previously with PME but this time using coulombtype: cut-off? Could you
 suggest a cut off value?
 Thanks again
 Williams



Re: [gmx-users] Number of water molecules around any methyl carbon

2013-11-06 Thread Mark Abraham
Count the number of O observed near each C singly and compare the four
numbers.
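
One way to do that (a hedged sketch; the group names are hypothetical and
made interactively in make_ndx):

  make_ndx -f conf.gro -o index.ndx    # one group per methyl carbon (C1, C2, C3)
  trjorder -f traj.xtc -s topol.tpr -n index.ndx -r 0.5 -nshell shell_C1.xvg

Run trjorder once per single-carbon group and compare those counts with the
count from your combined three-carbon group.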

Mark
On Nov 6, 2013 4:57 PM, rankinb rank...@purdue.edu wrote:

 Hi all,

 I would like to calculate the number of water molecules around any of the
 methyl carbon atoms of tert-butyl alcohol.  Currently, I have defined an
 index group containing all three of the methyl carbon atoms and used
 trjorder -nshell to calculate the number of oxygen atoms within a specified
 cutoff distance of this index group.  What I am trying to figure out is
 whether this method results in the number of oxygen atoms around any single
 methyl carbon or all methyl carbon atoms.  Does anyone have any insights
 regarding this problem?  If the described method does not calculate the
 number of oxygen atoms around all of the methyl carbon atoms, is there a
 way
 to do so, without overcounting?

 Thanks,
 Blake

 PhD Candidate
 Purdue University
 Ben-Amotz Lab

 --
 View this message in context:
 http://gromacs.5086.x6.nabble.com/Number-of-water-molecules-around-any-methyl-carbon-tp5012297.html
 Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


Re: [gmx-users] Analysis tools and triclinic boxes

2013-11-06 Thread Mark Abraham
Hi,

They ought to, and we hope they do, but historically quality control of
analysis tools was threadbare, there is no testing of that kind of thing
now, and certainly no implied warranty. Especially at the existing price
point! ;-)

That comment could easily refer to (or be) an archaic code version, I'm
afraid. If you have doubts, please try to verify with a simple system the
behaviour you expect.
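
For example (a hedged sketch): build the same small system in cubic and
triclinic boxes, run each briefly, and check that the tool of interest gives
statistically indistinguishable answers.

  editconf -f conf.gro -bt triclinic -d 0.7 -o tri.gro
  # ...solvate, grompp, mdrun as usual, then e.g.:
  g_rdf -f traj.xtc -s topol.tpr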

Mark


On Mon, Nov 4, 2013 at 7:29 PM, Stephanie Teich-McGoldrick 
stephani...@gmail.com wrote:

 Dear all,

 I am using GROMACS 4.6.3 with a triclinic box. Based on the manual and
 mailing list, it is my understanding that the default box shape in GROMACS
 is a triclinic box. Can I assume that all the analysis tools also work for
 a triclinic box?

 Cheers,
 Stephanie


Re: [gmx-users] stopped simulation

2013-11-06 Thread Mark Abraham
On Wed, Nov 6, 2013 at 8:22 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 11/6/13 2:14 PM, Ehsan Sadeghi wrote:

  Many thanks Justin. What is an appropriate cut-off value? My box size is
  d = 0.5 nm; based on the definition of the cut-off radius, its value
  should be smaller than d/2; therefore 0.24 is an appropriate cut-off
  value. Am I right?


 No.  The cutoff value is not a function of box size; it is a fixed
 property of the force field.  No wonder the simulation is crashing.  If
 your box is only 0.5 nm, then a cutoff of 1.5 nm is triple-counting
 interactions across PBC!


Triple counting is not possible, per minimum-image convention. I think
Ehsan's report of a 0.5nm box size is probably wrong, e.g. per its
documentation, editconf -d 0.5 does not produce a 0.5nm box.

Mark

Refer to the primary literature for the Gromos parameter set you are using
 for proper settings.  You haven't said which one you're using, and there
 may be slight differences between them.  If the value you're using isn't
 taken directly from a paper, it's not credible.

 -Justin


  Cheers, Ehsan

 - Original Message - From: Justin Lemkul jalem...@vt.edu To:
 Discussion list for GROMACS users gmx-users@gromacs.org Sent:
 Wednesday,
 November 6, 2013 10:54:42 AM Subject: Re: [gmx-users] stopped simulation



 On 11/6/13 12:53 PM, Ehsan Sadeghi wrote:

 Hi gmx users,

  I have simulated an ionomer in water solution using the gromos force
  field. But in the middle of the simulation (after 2 ns) the simulation
  stopped and I received these messages:


  WARNING: Listed nonbonded interaction between particles 174 and 188 at
  distance %.3f which is larger than the table limit %.3f nm.

 This is likely either a 1,4 interaction, or a listed interaction inside a
 smaller molecule you are decoupling during a free energy calculation.
 Since
 interactions at distances beyond the table cannot be computed, they are
 skipped until they are inside the table limit again. You will only see
 this
 message once, even if it occurs for several interactions.

 IMPORTANT: This should not happen in a stable simulation, so there is
 probably something wrong with your system. Only change the
 table-extension
 distance in the mdp file if you are really sure that is the reason.

 Fatal error: 1 particles communicated to PME node 5 are more than 2/3
 times
 the cut-off out of the domain decomposition cell of their charge group in
 dimension y. This usually means that your system is not well
 equilibrated.


   I used simulated annealing for equilibrating the system in NVT and NPT
  conditions. The mdp files are:

  ----- NVT -----

 define = -DPOSRES
 integrator = md
 dt = 0.002 ; time step (in ps)
 nsteps = 25000 ; Maximum number of steps to perform

 ; OUTPUT CONTROL OPTIONS
 nstxout = 500
 nstvout = 500
 nstenergy = 500
 nstlog = 500
 energygrps = Non-Water Water

 ; NEIGHBORSEARCHING PARAMETERS
 nstlist = 1
 ns_type = grid
 rlist = 1.5
 pbc = xyz

 ; OPTIONS FOR ELECTROSTATICS AND VDW
 coulombtype = PME
 pme_order = 4
 fourierspacing = 0.16
 rcoulomb = 1.5
 vdw-type = Cut-off
 rvdw = 1.5

 ; Temperature coupling
 tcoupl = v-rescale
 tc-grps = Non-Water Water
 tau_t = 0.1 0.1
 ref_t = 300 300

 ; Dispersion correction
 DispCorr = EnerPres

 ; Pressure coupling is off
 pcoupl = no

 ; Annealing
 annealing = single single
 annealing-npoints = 5 5
 annealing-time = 0 10 20 30 40 0 10 20 30 40
 annealing-temp = 300 320 340 360 380 300 320 340 360 380

 ; GENERATE VELOCITIES FOR STARTUP RUN
 gen_vel = yes
 gen_temp = 300
 gen_seed = -1

 ; OPTIONS FOR BONDS
 constraints = ; all-bonds
 continuation = no
 constraint_algorithm = lincs
 lincs_iter = 1
 lincs_order = 4

 ----- NPT -----

 define = -DPOSRES
 integrator = md
 dt = 0.002
 nsteps = 25000

 ; OUTPUT CONTROL OPTIONS
 nstxout = 500
 nstvout = 500
 nstfout = 500
 nstenergy = 500
 nstlog = 500
 energygrps = Non-Water Water

 ; NEIGHBORSEARCHING PARAMETERS
 nstlist = 5
 ns_type = grid
 rlist = 1.5
 pbc = xyz

 ; OPTIONS FOR ELECTROSTATICS AND VDW
 coulombtype = PME
 pme_order = 4
 fourierspacing = 0.16
 rcoulomb = 1.5
 vdw-type = Cut-off
 rvdw = 1.5

 ; Temperature coupling
 tcoupl = v-rescale
 tc-grps = Non-Water Water
 tau_t = 0.1 0.1
 ref_t = 300 300

 ; Dispersion correction
 DispCorr = EnerPres

 ; Pressure coupling
 pcoupl = Parrinello-Rahman
 pcoupltype = Isotropic
 tau_p = 2.0
 compressibility = 4.5e-5
 ref_p = 1.0
 refcoord_scaling = com

 ; Annealing
 annealing = single single
 annealing-npoints = 5 5
 annealing-time = 0 10 20 30 40 0 10 20 30 40
 annealing-temp = 380 360 340 320 300 380 360 340 320 300

 ; GENERATE VELOCITIES FOR STARTUP RUN
 gen_vel = no

 ; OPTIONS FOR BONDS
 constraints = ; all-bonds
 continuation = yes ; continuation from NVT
 constraint_algorithm = lincs
 lincs_iter = 1
 lincs_order = 4
 -----

 Is the equilibration time long 

Re: [gmx-users] Gromacs-4.6 on two Titans GPUs

2013-11-05 Thread Mark Abraham
On Tue, Nov 5, 2013 at 12:55 PM, James Starlight jmsstarli...@gmail.com wrote:

 Dear Richard,


 1) mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v -deffnm md_CaM_test
 gave me performance of about 25 ns/day for the explicitly solvated system
 consisting of 68k atoms (charmm ff, 1.0 cutoffs)

 2) gave slightly worse performance in comparison to 1)


Richard suggested

mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test,

which looks correct to me. -ntomp 6 is probably superfluous

Mark


 finally

 3) mdrun -deffnm md_CaM_test
 ran in the same regime as 2), so it also gave me 22 ns/day for
 the same system.

 How could the efficiency of using dual GPUs be increased?

 James


 2013/11/5 Richard Broadbent richard.broadben...@imperial.ac.uk

  Dear James,
 
 
  On 05/11/13 11:16, James Starlight wrote:
 
  My suggestions:
 
  1) During compilation using -march=corei7-avx-i I obtained an error
  that something was not found (sorry, I didn't save the log), so I compiled
  gromacs without this flag
 
  2) I have twice as better performance using just 1 gpu by means of
 
  mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v  -deffnm md_CaM_test
 
  than using of both gpus
 
  mdrun -ntmpi 2 -ntomp 12 -gpu_id 01 -v  -deffnm md_CaM_test
 
  in the last case I have obtained warning
 
  WARNING: Oversubscribing the available 12 logical CPU cores with 24
  threads.
This will cause considerable performance loss!
 
   Here you are requesting 2 thread-MPI processes, each with 12 OpenMP
  threads, hence a total of 24 threads; however, even with hyper-threading
  enabled there are only 12 hardware threads on your machine. Therefore, only
  allocate 12. Try
 
  mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test
 
  or even
 
  mdrun -v  -deffnm md_CaM_test
 
  I believe it should autodetect the GPUs and run accordingly. For details
  of how to use gromacs with MPI/thread-MPI, OpenMP and GPUs, see
 
  http://www.gromacs.org/Documentation/Acceleration_and_parallelization
 
  Which describes how to use these systems
 
  Richard
 
 
   How it could be fixed?
  All gpu are recognized correctly
 
 
  2 GPUs detected:
 #0: NVIDIA GeForce GTX TITAN, compute cap.: 3.5, ECC:  no, stat:
  compatible
 #1: NVIDIA GeForce GTX TITAN, compute cap.: 3.5, ECC:  no, stat:
  compatible
 
 
  James
 
 
  2013/11/4 Szilárd Páll pall.szil...@gmail.com
 
   You can use the -march=native flag with gcc to optimize for the CPU
  your are building on or e.g. -march=corei7-avx-i for Intel Ivy Bridge
  CPUs.
  --
  Szilárd Páll
 
 
  On Mon, Nov 4, 2013 at 12:37 PM, James Starlight 
 jmsstarli...@gmail.com
  
  wrote:
 
  Szilárd, thanks for suggestion!
 
  What kind of CPU optimisation should I take into account, assuming that
  I'm using a dual-GPU Nvidia TITAN workstation with a 6-core i7 (recognized
  as 12 logical CPUs in Debian)?
 
  James
 
 
  2013/11/4 Szilárd Páll pall.szil...@gmail.com
 
   That should be enough. You may want to use the -march (or equivalent)
  compiler flag for CPU optimization.
 
  Cheers,
  --
  Szilárd Páll
 
 
  On Sun, Nov 3, 2013 at 10:01 AM, James Starlight 
 
  jmsstarli...@gmail.com
 
  wrote:
 
  Dear Gromacs Users!
 
  I'd like to compile lattest 4.6 Gromacs with native GPU supporting
 on
 
  my
 
  i7
 
  cpu with dual GeForces Titans gpu mounted. With this config I'd like
 
  to
 
  perform simulations using cpu as well as both gpus simultaneously.
 
  What flags besides
 
  cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-5.5
 
 
  should I define to CMAKE for compiling optimized gromacs on such
 
  workstation?
 
 
 
  Thanks for help
 
  James

Re: [gmx-users] Re: Using gromacs on Rocks cluster

2013-11-05 Thread Mark Abraham
You need to configure your MPI environment to do so (so read its docs).
GROMACS can only do whatever that makes available.

Mark
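
For example, with Open MPI and an MPI-enabled build, a launch might look like
this (a sketch; the mdrun_mpi binary name is an assumption about how the MPI
build was installed):

  # ask the MPI launcher for 32 ranks; GROMACS uses what it is given
  mpirun -np 32 mdrun_mpi -deffnm nvt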


On Tue, Nov 5, 2013 at 2:16 AM, bharat gupta bharat.85.m...@gmail.com wrote:

 Hi,

 I have installed Gromacs 4.5.6 on a Rocks cluster 6.0 and my system has 32
 processors (CPUs). But while running the nvt equilibration step, it uses
 only 1 cpu and the others remain idle. I have compiled Gromacs using the
 enable-mpi option. How can I make mdrun use all 32 processors?

 --
 Bharat



Re: [gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Mark Abraham
Yes, that has been true for GROMACS for a few years. Low-latency
communication is essential if you want a whole MD step to happen in around
1ms wall time.

Mark
On Nov 5, 2013 11:24 PM, Dwey Kauffman mpi...@gmail.com wrote:

 Hi Szilard,

  Thanks.

 From Timo's benchmark,
 1  node142 ns/day
 2  nodes FDR14 218 ns/day
 4  nodes FDR14 257 ns/day
 8  nodes FDR14 326 ns/day


 It looks like an Infiniband network is required in order to scale up when
 running a task across nodes. Is that correct?


 Dwey





Re: [gmx-users] Re: Installation Gromacs 4.5.7 on rocluster cluster with centos 6.0

2013-11-04 Thread Mark Abraham
On Mon, Nov 4, 2013 at 12:01 PM, bharat gupta bharat.85.m...@gmail.com wrote:

 Hi,

 I am trying to install gromacs 4.5.7 on a Rocks cluster (6.0), and it works
 fine up to the ./configure command, but I am getting an error at the make
 command:

 Error:
 
 [root@cluster gromacs-4.5.7]# make


There is no need to run make as root - doing so guarantees you have almost
no knowledge of the final state of your entire machine.


 /bin/sh ./config.status --recheck
 running CONFIG_SHELL=/bin/sh /bin/sh ./configure  --enable-mpi
 LDFLAGS=-L/opt/rocks/lib CPPFLAGS=-I/opt/rocks/include  --no-create
 --no-recursion
 checking build system type... x86_64-unknown-linux-gnu
 checking host system type... x86_64-unknown-linux-gnu
 ./configure: line 2050: syntax error near unexpected token `tar-ustar'
 ./configure: line 2050: `AM_INIT_AUTOMAKE(tar-ustar)'
 make: *** [config.status] Error 2


Looks like the system has an archaic autotools setup. Probably you can
comment out the line with tar-ustar from the original configure script, or
remove tar-ustar. Or use the CMake build.
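
A CMake-based build of 4.5.7 with MPI might look like this (a sketch; the
install prefix is an assumption):

  tar xzf gromacs-4.5.7.tar.gz
  cd gromacs-4.5.7
  mkdir build
  cd build
  cmake .. -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-4.5.7
  make
  make install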




 I have another query regarding the gromacs that comes with the Rocks
 cluster distribution. The mdrun of that gromacs has been compiled without
 the mpi option. How can I recompile it with the mpi option? I need the
 configure file, which is not there in the installed gromacs folder of the
 rocks cluster ...


The 4.5-era GROMACS installation instructions are up on the website.
Whatever's distributed with Rocks is more-or-less irrelevant.

Mark




 Thanks in advance for help




 Regards
 
 Bharat



Re: [gmx-users] Help to simulate gas mixture

2013-11-03 Thread Mark Abraham
The principle is the same as at
http://www.gromacs.org/Documentation/How-tos/Mixed_Solvents
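
In practice that amounts to inserting each species in turn with genbox, e.g.
(a sketch; the single-molecule .gro files and the counts are assumptions):

  # build a box of O2, then add N2 molecules to the same box
  genbox -ci o2.gro -nmol 100 -box 5 5 5 -o box_o2.gro
  genbox -cp box_o2.gro -ci n2.gro -nmol 400 -o box_o2_n2.gro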
On Nov 3, 2013 6:55 PM, ali.nazari ali.nazari.a...@gmail.com wrote:

 Dear Friends,

 I am just a beginner with GROMACS-4.6.3 and I want to simulate a gas
 mixture, such as a mixture of O2 and N2. Any help (such as a reference
 other than the GROMACS manual, because there is no explanation of gas
 mixtures there) is appreciated.

 Kind Regards,
 Ali




Re: [gmx-users] probability distribution of bond distance/length

2013-11-01 Thread Mark Abraham
On Fri, Nov 1, 2013 at 4:04 AM, Xu Dong Huang xudonghm...@gmail.com wrote:

 Dear all,

 I would like to assess the probability distribution of particle bond
 distance/length over the entire run; specifically, I want to collect a
 histogram representation or even a regular plot. Would using g_bond be the
 correct way to obtain the probability distribution?


g_bond -h and/or manual chapter 8 are the best places to start.


 Or is there another function that gets probability distribution
 specifically. Also, if using g_bond, it will give me an average (I
 suppose), so how can I get a histogram/data series representation? (I do
 not want to visualize this result using xmgrace)


g_analyze, or your favourite maths/stats software package.
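
For instance (a sketch; the index and file names are assumptions):

  # bond-length distribution directly from g_bond
  g_bond -f traj.xtc -s topol.tpr -n bonds.ndx -o bond_dist.xvg
  # or histogram an existing time series with g_analyze
  g_analyze -f bond_vs_time.xvg -dist bond_hist.xvg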

P.S. I believe someone earlier suggested a link to the data collection
 reporting procedure. I tried it and changed the .xvg to a .csv, but the
 data reported in excel format all belongs to a single column, which won’t
 let me make a plot.


A plot of two columns, no, but it should let you plot a histogram!

Mark


Re: [gmx-users] GMX manually generate topology for residues

2013-11-01 Thread Mark Abraham
They're http://en.wikipedia.org/wiki/C_preprocessor symbols that are
#defined elsewhere in the directory that contains that .rtp file. The
names/symbols probably map to the original force field literature. grep is
your friend.

Mark
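
For example, to see where a GROMOS bond-type macro is defined (a sketch; the
installation path and force-field directory are assumptions):

  # gb_ macros live in the force field's bonded .itp files
  grep -r "define gb_" /usr/local/gromacs/share/gromacs/top/gromos43a1.ff/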


On Fri, Nov 1, 2013 at 6:45 AM, charles char...@mails.bicpu.edu.in wrote:

 I am a newbie to gromacs, trying to generate a new rtp entry for my
 residue. The bonds and dihedrals in the .rtp file have gb_XX entries.
 What are these gb_XX numbers? How can I get information about them?
 How do I define those values for my residues?


Re: [gmx-users] mdrun cpt

2013-10-29 Thread Mark Abraham
On Oct 29, 2013 1:26 AM, Pavan Ghatty pavan.grom...@gmail.com wrote:

 Now /afterok/ might not work since technically the job is killed due to
 walltime limits - making it not ok.

Hence use -maxh!

Mark

 So I suppose /afterany/ is a better
 option. But I do appreciate your warning about spamming the queue and yes
I
 will re-read PBS docs.


 On Mon, Oct 28, 2013 at 5:11 PM, Mark Abraham mark.j.abra...@gmail.com
wrote:

  On Mon, Oct 28, 2013 at 7:53 PM, Pavan Ghatty pavan.grom...@gmail.com
  wrote:
 
   Mark,
  
   The problem with one .tpr file set for 100ns is that when job number
  (say)
   4 hits the wall limit, it crashes and never gets a chance to submit
the
   next job. So it's not really automated.
  
 
  That's why I suggested -maxh, so you can have an orderly shutdown.
(Though
  if a job can get suspended, that won't always help, because mdrun can't
  find out about the suspension...)
 
  Now I could initiate job 5 before /mdrun/ in job 4's script and hold
job 5
   till job 4 ends.
 
 
  Sure - read your PBS docs and find the environment variable to read so
that
  job 4 knows its ID so it can submit job 5 with an afterok hold on job 4
on
  it. But don't tell your sysadmins where I live. ;-) Seriously, if you
live
  on this edge, you could spam infinite jobs, which tends to get your
account
  cut off. That's why you want the afterok hold - you only want the next
job
  to start if the exit code from the first script correctly indicates that
  mdrun exited correctly. Test carefully!
 
  Mark
 
  But the PBS queuing system is sometime weird and takes a
   bit of time to recognize a job and give back its jobID. So I could
submit
   job 5 but be unable to change its status to /hold/ because PBS does
not
   return its ID. Another problem is that if resources are available,
job 5
   could start before I ever get a chance to /hold/ it.
  
  
  
  
   On Mon, Oct 28, 2013 at 11:47 AM, Mark Abraham 
mark.j.abra...@gmail.com
   wrote:
  
On Mon, Oct 28, 2013 at 4:27 PM, Pavan Ghatty 
pavan.grom...@gmail.com
wrote:
   
 I have need to collect 100ns but I can collect only ~1ns
(1000steps)
   per
 run. Since I dont have .trr files, I rely on .cpt files for
restarts.
   For
 example,

 grompp -f md.mdp  -c md_14.gro -t md_14.cpt -p system.top -o md_15

 This runs into a problem when the run gets killed due to walltime
limits. I
 now have a .xtc file which has run (say) 700 steps and a .cpt file
   which
 was last written at 600th step.

   
You seem to have no need to use grompp, because you don't need to
use a
workflow that generates multiple .tpr files. Do the equivalent of
what
   the
restart page advises: mdrun -s topol.tpr -cpi state.cpt. Thus, make
a
   .tpr
for the whole 100ns run, and then keep doing
   
mdrun -s whole-run -cpi whateverwaslast -deffnm
  whateversuitsyouthistime
   
with or without -append, perhaps with -maxh, keeping whatever manual
backups you feel necessary. Then perhaps concatenate your final
   trajectory
files, according to your earlier choices.
   
- To set up the next run I use the .cpt file from 600th step.
 - Now during analysis if I want to center the protein and such,
   /trjconv/
 needs an .xtc and .tpr file but not a .cpt file. So how does
  /trjconv/
know
 to stop at 600th step?
   
   
trjconv just operates on the contents of the trajectory file, as
  modified
by things like -b -e and -dt. The .tpr just gives it context, such
as
   atom
names. You could give it a .tpr from any point during the run.
   
Mark
   
If this has to be put in manually, it becomes
 cumbersome.

 Thoughts?





 On Sun, Oct 27, 2013 at 11:38 AM, Justin Lemkul jalem...@vt.edu
   wrote:

 
 
  On 10/27/13 9:37 AM, Pavan Ghatty wrote:
 
  Hello All,
 
  Is there a way to make mdrun put out .cpt file with the same
   frequency
 as
  a
  .xtc or .trr file. From here
  http://www.gromacs.org/Documentation/How-tos/Doing_Restarts I see that we
  can choose how often (time in mins) the .cpt file is written.
But
 clearly
  if the frequency of output of .cpt (frequency in mins) and .xtc
 (frequency
  in simulation steps) do not match, it can create problems
during
 analysis;
  especially in the event of frequent crashes. Also, I am not
  storing
.trr
  file since I dont need that precision.
  I am using Gromacs 4.6.1.
 
 
  What problems are you experiencing?  There is no need for .cpt
frequency
  to be the same as .xtc frequency, because any duplicate frames
  should
be
  handled elegantly when appending.
 
  -Justin
 
  --
  ==**
 
  Justin A. Lemkul, Ph.D.
  Postdoctoral Fellow

Re: [gmx-users] pbc problem

2013-10-29 Thread Mark Abraham
On Tue, Oct 29, 2013 at 5:02 PM, shahab shariati
shahab.shari...@gmail.com wrote:

 Dear Mark

 Many thanks for your reply

  To make this clear, center the trajectory on the water and watch the
  time evolution in some visualization program.

 I followed your suggestion (centering the trajectory on the water). Again,
 the drug molecule is in region (1) in some frames and in region (4) in
 other frames.


With pbc = xyz, you do not have two chunks of water. You have one chunk of
water. Where you put the box for visualization is irrelevant to the
simulation. You could align one of the box sides with one of the membrane
surfaces, and now you will see only one chunk of membrane, and one chunk of
water. In that chunk of water the drug goes wherever diffusion takes it,
just like it did inside the membrane.

Mark



 --
 Dear Justin

 Many thanks for your attention

  As has already been stated several times, there is no problem at all.
  The outcome is completely normal, and there are not discrete
  regions (1) and (4).
  It is a continuous block of water via PBC.  The molecule can freely
  diffuse throughout it.

 If the outcome is completely normal, can I use this structure for a pmf
 calculation? I want to calculate the potential of mean force, delta G, as a
 function of the distance between the center of mass of the drug and the
 center of mass of the bilayer.

 Best wishes for you.



Re: [gmx-users] mdrun cpt

2013-10-28 Thread Mark Abraham
On Mon, Oct 28, 2013 at 4:27 PM, Pavan Ghatty pavan.grom...@gmail.com wrote:

 I need to collect 100 ns but I can collect only ~1 ns (1000 steps) per
 run. Since I don't have .trr files, I rely on .cpt files for restarts. For
 example,

 grompp -f md.mdp  -c md_14.gro -t md_14.cpt -p system.top -o md_15

 This runs into a problem when the run gets killed due to walltime limits. I
 now have an .xtc file which has run (say) 700 steps and a .cpt file which
 was last written at the 600th step.


You seem to have no need to use grompp, because you don't need to use a
workflow that generates multiple .tpr files. Do the equivalent of what the
restart page advises: mdrun -s topol.tpr -cpi state.cpt. Thus, make a .tpr
for the whole 100ns run, and then keep doing

mdrun -s whole-run -cpi whateverwaslast -deffnm whateversuitsyouthistime

with or without -append, perhaps with -maxh, keeping whatever manual
backups you feel necessary. Then perhaps concatenate your final trajectory
files, according to your earlier choices.
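
Concretely, that workflow might look like this (a sketch; the file names and
-maxh value are assumptions):

  # one .tpr for the full 100 ns
  grompp -f md.mdp -c start.gro -p system.top -o whole-run.tpr
  # every job continues from the last checkpoint; -maxh makes mdrun
  # write a checkpoint and exit cleanly before the walltime limit
  mdrun -s whole-run.tpr -cpi state.cpt -append -maxh 0.95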

- To set up the next run I use the .cpt file from the 600th step.
 - Now during analysis, if I want to center the protein and such, /trjconv/
 needs an .xtc and .tpr file but not a .cpt file. So how does /trjconv/ know
 to stop at the 600th step?


trjconv just operates on the contents of the trajectory file, as modified
by things like -b -e and -dt. The .tpr just gives it context, such as atom
names. You could give it a .tpr from any point during the run.

Mark

If this has to be put in manually, it becomes
 cumbersome.

 Thoughts?





 On Sun, Oct 27, 2013 at 11:38 AM, Justin Lemkul jalem...@vt.edu wrote:

 
 
  On 10/27/13 9:37 AM, Pavan Ghatty wrote:
 
  Hello All,
 
  Is there a way to make mdrun put out .cpt file with the same frequency
 as
  a
  .xtc or .trr file. From here
  http://www.gromacs.org/Documentation/How-tos/Doing_Restarts I see that we
  can choose how often (time in mins) the .cpt file is written. But
 clearly
  if the frequency of output of .cpt (frequency in mins) and .xtc
 (frequency
  in simulation steps) do not match, it can create problems during
 analysis;
  especially in the event of frequent crashes. Also, I am not storing .trr
  file since I dont need that precision.
  I am using Gromacs 4.6.1.
 
 
  What problems are you experiencing?  There is no need for .cpt frequency
  to be the same as .xtc frequency, because any duplicate frames should be
  handled elegantly when appending.
 
  -Justin
 
  --
  ==================================

  Justin A. Lemkul, Ph.D.
  Postdoctoral Fellow

  Department of Pharmaceutical Sciences
  School of Pharmacy
  Health Sciences Facility II, Room 601
  University of Maryland, Baltimore
  20 Penn St.
  Baltimore, MD 21201

  jalem...@outerbanks.umaryland.edu | (410) 706-7441

  ==================================



Re: [gmx-users] Failure in MD run without any error

2013-10-28 Thread Mark Abraham
Hi,

Hard to know. LAM was discontinued over 4 years ago. You could have a flaky
file system. Unless you're trying to run a job over both machines over a
network like Infiniband, you don't even want to use an external MPI library
- single-node performance with built-in thread-MPI will give much better
value.

Mark


On Mon, Oct 28, 2013 at 9:12 PM, niloofar niknam
niloofae_nik...@yahoo.com wrote:



  Dear Gromacs users,
 I have encountered something strange. I recently installed Red Hat
 Enterprise Linux 6.1 and 6.2 on two machines, and then lam 7.1.4,
 fftw 3.3.2 and Gromacs 4.5.5.
 The Linux installation went well; I didn't get any complaints or errors,
 and the same for the lam, fftw and Gromacs installations. But when I run
 an MD job on both of these machines, at first everything seems normal,
 but after some steps (usually several thousand), the job stops
 proceeding. The log file shows no further changes, and there is no error.
 The job has evidently stopped, while the terminal shows all the
 processors are 100% busy.
 I have also reinstalled Linux and the mentioned programs, but that did
 not solve the problem. I have no idea what the problem is. Any comment or
 suggestion would be highly appreciated.
 Thanks in advance,
 Niloofar




Re: [gmx-users] mdrun cpt

2013-10-28 Thread Mark Abraham
On Mon, Oct 28, 2013 at 7:53 PM, Pavan Ghatty pavan.grom...@gmail.com wrote:

 Mark,

 The problem with one .tpr file set for 100ns is that when job number (say)
 4 hits the wall limit, it crashes and never gets a chance to submit the
 next job. So it's not really automated.


That's why I suggested -maxh, so you can have an orderly shutdown. (Though
if a job can get suspended, that won't always help, because mdrun can't
find out about the suspension...)

Now I could initiate job 5 before /mdrun/ in job 4's script and hold job 5
 till job 4 ends.


Sure - read your PBS docs and find the environment variable to read so that
job 4 knows its ID so it can submit job 5 with an afterok hold on job 4 on
it. But don't tell your sysadmins where I live. ;-) Seriously, if you live
on this edge, you could spam infinite jobs, which tends to get your account
cut off. That's why you want the afterok hold - you only want the next job
to start if the exit code from the first script correctly indicates that
mdrun exited correctly. Test carefully!

Mark
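
A self-resubmitting PBS script along these lines might look like this (a
sketch only; the script name and walltime are assumptions, and you would want
a termination condition so it cannot resubmit forever):

  #!/bin/bash
  #PBS -l walltime=01:00:00
  cd $PBS_O_WORKDIR
  # queue the next job, held until this one exits with status 0
  qsub -W depend=afterok:$PBS_JOBID run_md.pbs
  # -maxh makes mdrun checkpoint and exit before the walltime limit
  mdrun -s whole-run.tpr -cpi state.cpt -append -maxh 0.95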

But the PBS queuing system is sometimes weird and takes a
 bit of time to recognize a job and give back its jobID. So I could submit
 job 5 but be unable to change its status to /hold/ because PBS does not
 return its ID. Another problem is that if resources are available, job 5
 could start before I ever get a chance to /hold/ it.




 On Mon, Oct 28, 2013 at 11:47 AM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  On Mon, Oct 28, 2013 at 4:27 PM, Pavan Ghatty pavan.grom...@gmail.com
  wrote:
 
   I have need to collect 100ns but I can collect only ~1ns (1000steps)
 per
   run. Since I dont have .trr files, I rely on .cpt files for restarts.
 For
   example,
  
   grompp -f md.mdp  -c md_14.gro -t md_14.cpt -p system.top -o md_15
  
   This runs into a problem when the run gets killed due to walltime
  limits. I
   now have a .xtc file which has run (say) 700 steps and a .cpt file
 which
   was last written at 600th step.
  
 
  You seem to have no need to use grompp, because you don't need to use a
  workflow that generates multiple .tpr files. Do the equivalent of what
 the
  restart page advises: mdrun -s topol.tpr -cpi state.cpt. Thus, make a
 .tpr
  for the whole 100ns run, and then keep doing
 
  mdrun -s whole-run -cpi whateverwaslast -deffnm whateversuitsyouthistime
 
  with or without -append, perhaps with -maxh, keeping whatever manual
  backups you feel necessary. Then perhaps concatenate your final
 trajectory
  files, according to your earlier choices.
 
  - To set up the next run I use the .cpt file from 600th step.
   - Now during analysis if I want to center the protein and such,
 /trjconv/
   needs an .xtc and .tpr file but not a .cpt file. So how does /trjconv/
  know
   to stop at 600th step?
 
 
  trjconv just operates on the contents of the trajectory file, as modified
  by things like -b -e and -dt. The .tpr just gives it context, such as
 atom
  names. You could give it a .tpr from any point during the run.
 
  Mark
 
  If this has to be put in manually, it becomes
   cumbersome.
  
   Thoughts?
  
  
  
  
  
   On Sun, Oct 27, 2013 at 11:38 AM, Justin Lemkul jalem...@vt.edu
 wrote:
  
   
   
On 10/27/13 9:37 AM, Pavan Ghatty wrote:
   
Hello All,
   
Is there a way to make mdrun put out .cpt file with the same
 frequency
   as
a
.xtc or .trr file. From here
http://www.gromacs.org/Documentation/How-tos/Doing_Restarts I see that we
can choose how often (time in mins) the .cpt file is written. But
   clearly
if the frequency of output of .cpt (frequency in mins) and .xtc
   (frequency
in simulation steps) do not match, it can create problems during
   analysis;
especially in the event of frequent crashes. Also, I am not storing
  .trr
file since I dont need that precision.
I am using Gromacs 4.6.1.
   
   
What problems are you experiencing?  There is no need for .cpt
  frequency
to be the same as .xtc frequency, because any duplicate frames should
  be
handled elegantly when appending.
   
-Justin
   
   --
   ==================================

   Justin A. Lemkul, Ph.D.
   Postdoctoral Fellow

   Department of Pharmaceutical Sciences
   School of Pharmacy
   Health Sciences Facility II, Room 601
   University of Maryland, Baltimore
   20 Penn St.
   Baltimore, MD 21201

   jalem...@outerbanks.umaryland.edu | (410) 706-7441

   ==================================

Re: [gmx-users] Re: gmx-users Digest, Vol 114, Issue 15

2013-10-28 Thread Mark Abraham
On Mon, Oct 28, 2013 at 8:04 PM, Hari Pandey hariche...@yahoo.com wrote:

 Dear Gromacs Users,

 First, I would like to thank Dr. Lemkul for his reply.

 My problem description is as follows:
 I am using the CHARMM36 force field to equilibrate AOT. When I add the
 masses of all atoms from the topology, I get 444.5, which is correct, but
 when I run the command


Justin asked you about your atom names, but somehow you have forgotten to
answer him :-)



 editconf -c  -f A.gro -o A.gro   -density 1000  -bt cubic -box  5 -d 0.1 .


 It displays an incorrect value for the mass of the input. The mass of the
 input should be 444.5. The outcome of the above command is:

 Volume: 125 nm^3, corresponds to roughly 56200 electrons
 No velocities found
 system size :  0.215  0.234  0.157 (nm)
 diameter:  0.287   (nm)
 center  :  2.500  2.500  2.500 (nm)
 box vectors :  5.000  5.000  5.000 (nm)
 box angles  :  90.00  90.00  90.00 (degrees)
 box volume  : 125.00   (nm^3)

 WARNING: masses and atomic (Van der Waals) radii will be determined
  based on residue and atom names. These numbers can deviate
  from the correct mass and radius of the atom type.


editconf only has a .gro file, so it does not know about any atom types, or
bonds, so it is not worth trying to write code to guess correctly whether
HG1 is the first hydrogen on the gamma carbon, or the first mercury, etc.
We do write a warning message, but sometimes people don't read them.

Mark
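
The name-based guesses come from a lookup table shipped with GROMACS, which
can be inspected directly (a sketch; the install path is an assumption):

  # see what mass editconf will guess for a given atom name
  grep -i "HG1" /usr/local/gromacs/share/gromacs/top/atommass.dat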



 Volume  of input 125 (nm^3)
 Mass    of input 967.25 (a.m.u.)
 Density of input 12.8493 (g/l)
 Scaling all box vectors by 0.234221
 new system size :  0.050  0.055  0.037
 shift   :  1.914  1.914  1.914 (nm)
 new center  :  2.500  2.500  2.500 (nm)
 new box vectors :  5.000  5.000  5.000 (nm)
 new box angles  :  90.00  90.00  90.00 (degrees)
 new box volume  : 125.00(nm^3)
 Here we can see that the mass of the input is 967.25, which is far from
 reality. This will cause errors in the density and all other mass-dependent
 parameters.

 Please help me work out how to get past this error.
 Thank you so much for your kind help

 Hari



Re: [gmx-users] Problem with reading AMBER trajectories

2013-10-26 Thread Mark Abraham
Hi,

Seems plausible, and it's good to know you have the plugins working for at
least one format! The question of whether the plugins are out of step with
the main VMD distribution would be best raised on the VMD mailing list (but
search first!). If you do, you might also suggest that the links in the
plugin docs be updated to http://ambermd.org/formats.html

Cheers,

Mark
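
As a workaround, one can also convert inside VMD itself and hand GROMACS a
format it reads natively (a sketch driven from the shell; the file names are
assumptions):

  # load the AMBER topology and trajectory, then write a .trr
  vmd -dispdev text -eofexit << 'EOF'
  mol new system.prmtop type parm7
  mol addfile md.crd type crdbox waitfor all
  animate write trr md.trr
  EOF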
On Oct 26, 2013 9:20 AM, anu chandra anu80...@gmail.com wrote:

 Hi,

 FYI, when I feed in the coordinates in '.binpos' format, which I generated
 after loading the same '.crd' file into VMD, it is able to do the job. What
 I infer from this is that the VMD molfile plugin for reading AMBER '.crd'
 trajectories was made for AMBER 7 '.crd' formatted trajectories and cannot
 read the latest ones.


 On Sat, Oct 26, 2013 at 12:21 PM, anu chandra anu80...@gmail.com wrote:

  Hi,
 
  Sorry for the late reply. I have tried all the possibilities with the
  filename extension as mentioned in the VMD molfile details. As said, VMD
  uses the .crd or .crdbox filename extensions for reading Amber
  trajectories. I have tried both options (i.e. with .crd and .crdbox
  extensions), but unfortunately both attempts failed with the same error,
  as shown below
 
  *
  Note: the fit and analysis group are identical,
while the fit is mass weighted and the analysis is not.
Making the fit non mass weighted.
 
 
  WARNING: If there are molecules in the input trajectory file
   that are broken across periodic boundaries, they
   cannot be made whole (or treated as whole) without
   you providing a run input file.
 
  Calculating the average structure ...
  The file format of eqc.crdbox is not a known trajectory format to
 GROMACS.
  Please make sure that the file is a trajectory!
 
  GROMACS will now assume it to be a trajectory and will try to open it
  using the VMD plug-ins.
  This will only work in case the VMD plugins are found and it is a
  trajectory format supported by VMD.
 
  Using VMD plugin: crdbox (AMBER Coordinates with Periodic Box)
 
  Format of file eqc.crdbox does not record number of atoms.
 
 
  ---
  Program g_covar, VERSION 4.6.1
  Source code file: /usr/local/gromacs-4.6.1/src/gmxlib/trxio.c, line: 1035
 
  Fatal error:
  Not supported in read_first_frame: md1.crdbox
 
  For more information and tips for troubleshooting, please check the
 GROMACS
  website at http://www.gromacs.org/Documentation/Errors
  ---
 
  Hang On to Your Ego (F. Black)
 
 
 
 
 
  Can anyone please help me to figure out what is going wrong here?
 
  Many thanks
  Anu
 
 
 
  On Fri, Oct 18, 2013 at 6:21 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:
 
  OK. All GROMACS does is feed your filename extension to the VMD library
  and
  let it choose how to read the file based on that. If that doesn't make
  sense (and it seems it doesn't, because GROMACS wasn't told about the
  number of atoms, and it needs to know), then the ball is back to you to
  choose the filename extension in the way the plugin needs. I suggest you
  check out http://www.ks.uiuc.edu/Research/vmd/plugins/molfile/ and try
  some
  alternatives.
 
  Mark
 
 
  On Fri, Oct 18, 2013 at 2:10 PM, anu chandra anu80...@gmail.com
 wrote:
 
   Hi Mark,
  
   Yes. I do can able to load the trajectories successfully in VMD with
 the
   file format option of ' AMBER coordinate with periodic box'. I am
 using
  VMD
   1.9 version.
  
   Regards
   Anu
  
  
  
  
   On Fri, Oct 18, 2013 at 1:05 PM, Mark Abraham 
 mark.j.abra...@gmail.com
   wrote:
  
Can this file be opened in VMD itself?
   
Mark
On Oct 18, 2013 6:21 AM, anu chandra anu80...@gmail.com wrote:
   
 Dear Gromacs users,

 I am trying to use Gromacs to read AMBER trajectories (mdcrd) for
  doing
few
 analysis. Unfortunately I ended-up with the following error.

 
 GROMACS will now assume it to be a trajectory and will try to open
  it
using
 the VMD plug-ins.
 This will only work in case the VMD plugins are found and it is a
 trajectory format supported by VMD.

 Using VMD plugin: crd (AMBER Coordinates)

 Format of file md.crd does not record number of atoms.

 ---
 Program g_covar, VERSION 4.6.1
 Source code file: /usr/local/gromacs-4.6.1/src/gmxlib/trxio.c,
 line:
   1035

 Fatal error:
 Not supported in read_first_frame: md.crd
 For more information and tips for troubleshooting, please check
 the
GROMACS
 website at http://www.gromacs.org/Documentation/Errors

Re: [gmx-users] Re: gmx-users Digest, Vol 114, Issue 64

2013-10-26 Thread Mark Abraham
On Sat, Oct 26, 2013 at 2:07 PM, Santu Biswas santu.biswa...@gmail.com wrote:

 
 
 
  Not working is too vague a symptom for anyone to guess what the problem
  is, sorry.
 
  Mark
  On Oct 24, 2013 9:39 AM, Santu Biswas santu.biswa...@gmail.com
 wrote:
 
   dear users,
  
 I am performing 500ps mdrun in vacuum for
  polypeptide(formed
   by 10-residues leucine) using gromacs_4.5.5(double-precision) using
   opls-aa/L force field.Input file for 500ps mdrun is given below
  
  
   title= peptide in vaccum
   cpp= /lib/cpp
  
   ; RUN CONTROL
   integrator = md
   comm_mode= ANGULAR
   nsteps = 50
   dt= 0.001
   ; NEIGHBOR SEARCHING
   nstlist  = 0
   ns_type   = simple
   pbc = no
   rlist = 0
   ; OUTPUT CONTROL
   nstxout  = 1000
   nstvout  = 1000
   nstxtcout   = 0
   nstlog= 1000
   constraints = none
   nstenergy   = 1000
   ; OPTION FOR ELECTROSTATIC AND VDW
   rcoulomb = 0
   ; Method for doing Van der Waals
   rvdw= 0
   ; OPTIONS FOR WEAK COUPLING ALGORITHMS
   tcoupl  = V-rescale
   tc_grps= Protein
   tau_t= 0.1
   ref_t = 300
   gen_vel= yes
   gen_temp = 300
  
   Using the 500ps trajectory if i run g_hbond_d for calculating the
 number
  of
   hydrogen bonds as a function of time using index file(where atom O and
  atom
   N H is used) it is not working.
   Also if i used g_rdf_d with pbc=no using the 500ps trajectory it is
 also
   not working.
   I do not know why this is happening.
  
   --
   santu
   --
  Thanks Mark for your reply.
 

Using the 500ps trajectory I want to calculate the number of hydrogen
 bonds as a function of time in vacuum. For this calculation I have used
  g_hbond_d -f traj_0-500ps.trr -s 500ps.tpr -n index.ndx -num
  hbond-num.xvg -dist dist.xvg -ang angle.xvg


With what groups? Can there be any hydrogen bonds between those groups?

Is there a bug fixed in a version of g_hbond that isn't 2 years old? Did a
shorter trajectory work because it took less time? Does doing only one of
three analyses help things to work? You'd be much closer to a solution if
you'd tried some simplifications and done some detective work already ;-)


  The program was running. After 1 hour it was still running but there was
  no output.
 

 If I calculate the number of hydrogen bonds as a function of time in
 water (not vacuum) using the same command line, then there is no problem.

Same problem when I used g_rdf in vacuum. The command line I used was
g_rdf_4.5.5 -f traj.trr -s 500ps.tpr -n index.ndx -o rdf.xvg, and I also
 checked with -nopbc on the same command line.



RDF of what, in vacuum? What groups did you use?


The program is running but nothing is written to the output file.
If I use g_rdf in water with the same command line there is no
 problem.


OK - but does your analysis make sense in vacuum?

Mark





Re: [gmx-users] Optimizing performance mac osx

2013-10-25 Thread Mark Abraham
On Fri, Oct 25, 2013 at 11:19 AM, Tiago Gomes tiagogome...@gmail.com wrote:

 Hi,

 I am relatively new to the gromacs environment and would like to optimize
 performance on my mac pro (osx 10.6.8) with 8 cores (16 with
 hyper-threading). I've read that one can use g_tune_pme, I guess with
 np = 16. I don't know whether using mpirun or gromacs compiled with MPI
 would be faster.


Won't be. GROMACS's default built-in thread-MPI is designed for this case.
Do check out
http://www.gromacs.org/Documentation/Acceleration_and_parallelization


 I guess since gromacs 4.5 mpirun is deprecated, and mdrun
 automatically distributes the workload across all the cores, I think.
 We also have a 40-core Condor cluster; would setting it up there increase
 performance?


Using more than one node for the same simulation is not useful unless you
have a high-speed network, e.g. Infiniband. A network over which one would
use Condor would generally not be suitable.


 I think also that the scaling depends on the number of atoms.
 Any info on this?


Depends very much on the hardware, compiler, model physics and simulation
composition, also. You should aim for something at least around 500-800
atoms per physical core, though.

Mark
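
On a single 8-core machine the thread-MPI default is usually already
sensible; an explicit equivalent might be (a sketch; the thread count is
something to benchmark, not a recommendation):

  # one thread per physical core on an 8-core Mac Pro
  mdrun -nt 8 -deffnm md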


Re: [gmx-users] pbc problem

2013-10-24 Thread Mark Abraham
On Oct 24, 2013 8:10 AM, shahab shariati shahab.shari...@gmail.com
wrote:

 Dear jkrieger

 I used 2 times trjconv tool:

 1) trjconv -f npt.xtc -s npt.tpr -n index.ndx -o 2npt.xtc -pbc nojump

 2) trjconv -f 2npt.xtc -s npt.tpr -n index.ndx -o 3npt.xtc -pbc mol
-center


 Dear Mark

 I selected all lipid atoms for centering.

 With my approach, the pbc problem was solved just for the lipids and not
 for the drug molecule, which is placed among the water molecules above the
 top leaflet. This pbc problem causes the drug molecule to appear at both
 the top and bottom leaflets, while I want to study translocation of the
 drug molecule from water into the lipid bilayer.
 I want to solve this problem for the drug molecule.

There is only one water region, so upper and lower don't mean much. If
you just want to see the drug and bilayer in the same PBC cell, then center
on something that is central.

Mark
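
For example (a sketch; the centering group depends on your index file):

  # center on the bilayer and keep molecules whole
  trjconv -f traj.xtc -s topol.tpr -n index.ndx -o centered.xtc \
    -pbc mol -center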

 If my approach is wrong, please tell me the right way.

 Best wishes.


Re: [gmx-users] g_hbond and g_rdf in vacuum

2013-10-24 Thread Mark Abraham
Not working is too vague a symptom for anyone to guess what the problem
is, sorry.

Mark
On Oct 24, 2013 9:39 AM, Santu Biswas santu.biswa...@gmail.com wrote:

 Dear users,

   I am performing a 500ps mdrun in vacuum for a polypeptide (formed
 by 10 leucine residues) using gromacs_4.5.5 (double precision) with the
 opls-aa/L force field. The input file for the 500ps mdrun is given below


 title= peptide in vaccum
 cpp= /lib/cpp

 ; RUN CONTROL
 integrator = md
 comm_mode= ANGULAR
 nsteps = 50
 dt= 0.001
 ; NEIGHBOR SEARCHING
 nstlist  = 0
 ns_type   = simple
 pbc = no
 rlist = 0
 ; OUTPUT CONTROL
 nstxout  = 1000
 nstvout  = 1000
 nstxtcout   = 0
 nstlog= 1000
 constraints = none
 nstenergy   = 1000
 ; OPTION FOR ELECTROSTATIC AND VDW
 rcoulomb = 0
 ; Method for doing Van der Waals
 rvdw= 0
 ; OPTIONS FOR WEAK COUPLING ALGORITHMS
 tcoupl  = V-rescale
 tc_grps= Protein
 tau_t= 0.1
 ref_t = 300
 gen_vel= yes
 gen_temp = 300

 Using the 500ps trajectory, if I run g_hbond_d to calculate the number of
 hydrogen bonds as a function of time using an index file (where the O, N
 and H atoms are used), it is not working.
 Also, if I use g_rdf_d with pbc=no on the 500ps trajectory, it is also not
 working.
 I do not know why this is happening.

 --
 santu



Re: [gmx-users] Output pinning for mdrun

2013-10-24 Thread Mark Abraham
Hi,

No. mdrun reports the stride with which it moves over the logical cores
reported by the OS, setting the affinity of GROMACS threads to logical
cores, and warnings are written for various wrong-looking cases, but we
haven't taken the time to write a sane report of how GROMACS logical
threads and ranks are actually mapped to CPU cores. Where supported by the
processor, the CPUID information is available and used in
gmx_thread_affinity.c. It's just not much fun to try to report that in a
way that will make sense on all possible hardware that supports CPUID - and
then people will ask why it doesn't map to what their mpirun reports, get
confused by hyper-threading, etc.

What question were you seeking to answer?

Mark



On Thu, Oct 24, 2013 at 11:44 AM, Carsten Kutzner ckut...@gwdg.de wrote:

 Hi,

 can one output how mdrun threads are pinned to CPU cores?

 Thanks,
   Carsten



Re: [gmx-users] Output pinning for mdrun

2013-10-24 Thread Mark Abraham
On Thu, Oct 24, 2013 at 4:52 PM, Carsten Kutzner ckut...@gwdg.de wrote:

 On Oct 24, 2013, at 4:25 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  Hi,
 
  No. mdrun reports the stride with which it moves over the logical cores
  reported by the OS, setting the affinity of GROMACS threads to logical
  cores, and warnings are written for various wrong-looking cases, but we
  haven't taken the time to write a sane report of how GROMACS logical
  threads and ranks are actually mapped to CPU cores. Where supported by
 the
  processor, the CPUID information is available and used in
  gmx_thread_affinity.c. It's just not much fun to try to report that in a
  way that will make sense on all possible hardware that supports CPUID -
 and
  then people will ask why it doesn't map to what their mpirun reports, get
  confused by hyper-threading, etc.
 Yes, I see.
 
  What question were you seeking to answer?
 Well, I just wanted to check whether my process placement is correct and
 that
 I am not getting decreased performance due to a suboptimal placement. In
 many cases the performance is really bad (like 50% of the expected values)
 if the pinning is wrong or does not work, but you never know.


GROMACS does report if its attempt to set affinities fails (and the reason),
which covers some of the problem cases. Keeping MPI ranks closely
associated with the hardware granularity (nodes, sockets, GPUs) will be
important, but that's something to configure at the mpirun level.
(Thread-MPI, being a single-node solution, has more assumptions it can make
safely.) Keeping the OpenMP threads within each MPI rank pinned within
hardware regions is also important, but severity and solutions vary a lot with
the hardware and software context (e.g. you might as well get out an
abacus as run GROMACS with OpenMP spread over a whole AMD processor, though
with a single GPU that can be the best you can do at the moment).

Key to interpreting performance results is to measure the (pinned)
single-core performance, so that there is a minimum-overhead reference
number for comparison.
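For example (a minimal sketch, assuming a Linux node and the 4.6 mdrun
pinning options; file names are placeholders):

mdrun -ntmpi 1 -ntomp 1 -pin on -deffnm ref    # pinned single-core reference
mdrun -ntmpi 4 -ntomp 2 -pin on -deffnm bench  # then scale up and compare ns/day
taskset -cp <mdrun PID>   # ask the OS what affinity was actually applied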

Mark

On some clusters there are of course tools that check and output the process
 placement for a dummy parallel job, or environment variables like
 MP_INFOLEVEL for
 loadleveler.

 Thanks!
   Carsten


  Mark
 
 
 
  On Thu, Oct 24, 2013 at 11:44 AM, Carsten Kutzner ckut...@gwdg.de
 wrote:
 
  Hi,
 
  can one output how mdrun threads are pinned to CPU cores?
 
  Thanks,
   Carsten


Re: [gmx-users] nstcalclr bug?

2013-10-24 Thread Mark Abraham
Ja. No twin-range = no long-range :-)

Mark


On Thu, Oct 24, 2013 at 5:50 PM, jkrie...@mrc-lmb.cam.ac.uk wrote:

 I think nstcalclr would only do something if you have longer range
 interactions to calculate (lr means longer than rlist). Therefore
 something has to be longer than rlist for this to happen.
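 A twin-range setup would thus look something like this (a sketch with the
 group scheme, mirroring the finding below that rvdw must exceed rlist for
 nstcalclr to survive into the tpr):

 cutoff-scheme = Group
 rlist         = 1.0
 rcoulomb      = 1.0
 rvdw          = 1.4
 nstcalclr     = 10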

  Hi there,
 
  I am using gromacs-4.6.1 with this mdp file:
 
  integrator= md; leap-frog integrator
  nsteps= 300   ; 6.0 ns
  dt= 0.002 ; 2 fs
  nstxout   = 0 ; save coordinates every 10 ps
  nstvout   = 0 ; save velocities every 10 ps
  nstenergy = 5000  ; save energies every 10 ps
  nstlog= 5000  ; update log file every 5 ps
  nstcalcenergy   = 100  ;
  nstxtcout   = 5000  ; xtc every 10 ps
  xtc_precision = 100
  continuation  = yes   ; Restarting
  constraint_algorithm = lincs  ; holonomic constraints
  constraints   = all-bonds ; all bonds (even heavy atom-H bonds)
 constrained
  lincs_iter= 1 ; accuracy of LINCS
  lincs_order   = 4 ; also related to accuracy
  ns_type   = grid  ; search neighboring grid cells
  nstlist   = 20; 10 fs
  rlist = 1.0   ; short-range neighborlist cutoff (in nm)
  rcoulomb  = 1.0   ; short-range electrostatic cutoff (in nm)
  rvdw  = 1.0   ; short-range van der Waals cutoff (in nm)
  nstcalclr= 10
  cutoff-scheme   = Group
  vdwtype = Cut-off
  vdw-modifier = Potential-shift
  coulombtype   = PME   ; Particle Mesh Ewald for long-range
 electrostatics
  pme_order = 4 ; cubic interpolation
  fourierspacing= 0.16  ; grid spacing for FFT
  coulomb-modifier = Potential-shift
  tcoupl= V-rescale ; modified Berendsen thermostat
  tc-grps   = System; two coupling groups - more
 accurate
  tau_t = 0.1   ; time constant, in ps
  ref_t = 300   ; reference temperature, one for each
 group, in K
  energygrps  = complex Water; group(s) to write to energy file
  pcoupl= Parrinello-Rahman ; Pressure coupling on in
 NPT
  pcoupltype= isotropic ; uniform scaling of box vectors
  tau_p = 2.0   ; time constant, in ps
  ref_p = 1.0   ; reference pressure, in bar
  compressibility = 4.5e-5  ; isothermal compressibility of water,
 bar^-1
  refcoord_scaling = com
  pbc   = xyz   ; 3-D PBC
  DispCorr  = EnerPres  ; account for cut-off vdW scheme
  gen_vel   = no; Velocity generation is off
  gen-seed= 128742
  ; number of steps for center of mass motion removal
  nstcomm  = 1000
 
  the mdout.mdp file says nstcalclr = 10, but gmxdump of the tpr file says
  nstcalclr = 0. If I set rvdw = 1.4 (> rlist), gmxdump of the tpr file
  now correctly shows nstcalclr = 10.
  I have double-checked the manual but I couldn't find the reason for this
  behaviour.

  is this a bug or am I doing something wrong somewhere??
 
  thanks for any help
 
 
  and
 
 
 
 
  Andrea Spitaleri PhD
  D3 - Drug Discovery  Development
  Istituto Italiano di Tecnologia
  Via Morego, 30 16163 Genova
  cell: +39 3485188790


Re: [gmx-users] pbc problem

2013-10-24 Thread Mark Abraham
As Justin said, there is no actual division between regions 1 and 4.
Apparently you got the free diffusion you asked for! :-)
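If the aim is just a visually tidy trajectory, something like this usually
does it (a sketch; the centering group is an assumption, e.g. the bilayer):

trjconv -s npt.tpr -f npt.xtc -o npt_centered.xtc -pbc mol -center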

Mark


On Thu, Oct 24, 2013 at 4:57 PM, shahab shariati
shahab.shari...@gmail.com wrote:

 Dear Mark

 Thank for your reply.

 If I show my system as 4 regions, my system before equilibration is as
 follows:

 region (1): water + drug
 region (2): top leaflet of bilayer
 region (3): bottom leaflet of bilayer
 region (4): water

 After equilibration, the drug molecule exits region (1) and enters region (4).

 Please tell me how to fix it? Which options of trjconv are appropriate
 for this problem?

 Best wishes


Re: [gmx-users] Continuing runs from 4.5.4 in 4.6.3

2013-10-23 Thread Mark Abraham
On Oct 23, 2013 7:24 AM, rajat desikan rajatdesi...@gmail.com wrote:

 Hi,

 We recently had a software upgrade in our cluster from gromacs 4.5.4 to
 gromacs 4.6.3. I need to continue an earlier simulation that had been run
 in 4.5.4, using the .cpt, .tpr and .mdp.

 Are there any issues with continuing these runs in 4.6.3? Can I
 concatenate these trajectories for later analysis?

This is not recommended. Even if it works, the trajectory is discontinuous,
and the years of accumulated bug fixes and the complete re-implementation of
the kernels in 4.6.3 are likely to make the discontinuity observable.
Upgrading within a minor release (4.5.4 -> 4.5.7, 4.6 -> 4.6.3) is intended
to work (modulo relevant bug fixes), but would still tend to make your
reviewer nervous.
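If you do continue anyway, the mechanics are the usual checkpoint restart
and concatenation (a sketch; file names assumed):

mdrun -s topol.tpr -cpi state.cpt -deffnm md   # 4.6 appends to existing output by default
trjcat -f part1.xtc part2.xtc -o all.xtc       # join the pieces for analysis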

Mark

 I notice that I cannot use a 4.6.3 .cpt and .tpr in 4.5.4.

 Any input will be appreciated. Thanks.

 --
 Rajat Desikan (Ph.D Scholar)
 Prof. K. Ganapathy Ayappa's Lab (no 13),
 Dept. of Chemical Engineering,
 Indian Institute of Science, Bangalore


Re: [gmx-users] Box size increases in NPT

2013-10-23 Thread Mark Abraham
On Oct 23, 2013 5:34 AM, Nilesh Dhumal ndhu...@andrew.cmu.edu wrote:

 Hello,

 I am running an NPT simulation for cyclopropylchloride (1) in
 50% water (100) + 50% ethanol (100) using OPLS force field parameters.

 After equilibration box size increases from 20 A to 70 A.

Really? Seems wildly unlikely to have occurred without crashing. Over what
time span? How did you observe before and after? What densities do you
measure?
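(A quick check, as a sketch assuming the default energy file name: g_energy
-f ener.edr -o out.xvg, then select Density and the Box-X/Y/Z terms at the
prompt.)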

Mark

 I used the following mdp file.

 ; RUN CONTROL PARAMETERS =
 integrator   = sd
 ; start time and timestep in ps =
 tinit= 0
 dt   = 0.001
 nsteps   = 5
 ; number of steps for center of mass motion removal =
 nstcomm  = 100
 ; OUTPUT CONTROL OPTIONS =
 ; Output frequency for coords (x), velocities (v) and forces (f) =
 nstxout  = 0
 nstvout  = 0
 nstfout  = 0
 ; Output frequency for energies to log file and energy file =
 nstlog   = 500
 nstenergy= 100
 ; Output frequency and precision for xtc file =
 nstxtcout= 5000
 xtc-precision= 1000
 ; NEIGHBORSEARCHING PARAMETERS =
 ; nblist update frequency =
 nstlist  = 10
 ; ns algorithm (simple or grid) =
 ns_type  = grid
 ;OPTIONS FOR TEMPERATURE COUPLING
 tc_grps  = system
 tau_t= 0.1
 ref_t= 290;350
 ;OPTIONS FOR PRESSURE COUPLING
 Pcoupl   = berendsen
 tau_p= 0.5
 compressibility  = 4.5e-05
 ref_p= 1.0
 ; OPTIONS FOR BONDS =
 constraints  = hbonds
 ; Type of constraint algorithm =
 constraint-algorithm = Lincs
 ; Do not constrain the start configuration =
 unconstrained-start  = no
 ; Relative tolerance of shake =
 shake-tol= 0.0001
 ; Highest order in the expansion of the constraint coupling matrix =
 lincs-order  = 12
 ; Lincs will write a warning to the stderr if in one step a bond =
 ; rotates over more degrees than =
 lincs-warnangle  = 30

 ; Periodic boundary conditions: xyz or none =
 pbc  = xyz
 ; nblist cut-off =
 rlist= 0.9
 domain-decomposition = no
 ; OPTIONS FOR ELECTROSTATICS AND VDW =
 ; Method for doing electrostatics =
 coulombtype  = pme
 ;rcoulomb-switch  = 0
 rcoulomb = 0.9
 ; Dielectric constant (DC) for cut-off or DC of reaction field =
 epsilon-r= 1
 ; Method for doing Van der Waals =
 vdw-type = switch
 ; cut-off lengths=
 rvdw-switch  = 0.8
 rvdw = 0.9
 ; Apply long range dispersion corrections for Energy and Pressure =
 DispCorr  = EnerPres
 ; Spacing for the PME/PPPM FFT grid =
 fourierspacing   = 0.1
 ; FFT grid size, when a value is 0 fourierspacing will be used =
 fourier_nx   = 0
 fourier_ny   = 0
 fourier_nz   = 0
 ; EWALD/PME/PPPM parameters =
 pme_order= 6
 ewald_rtol   = 1e-06
 epsilon_surface  = 0
 optimize_fft = no
 ; Free energy control stuff
 free_energy  = no


 Nilesh





Re: [gmx-users] Box size increases in NPT

2013-10-23 Thread Mark Abraham
By "crash" I meant "explode", not "DD is impossible". Explosions don't
happen because of parallelism; they happen because the steps are too large
for the size of the forces. The forces required to stably expand a box from
20 A to 70 A seem likely to be so large that I am very skeptical that you
could design such a simulation to do this with a 1 fs time step.

Mark


On Wed, Oct 23, 2013 at 11:39 AM, Dr. Vitaly Chaban vvcha...@gmail.com wrote:

 If the job is not very parallel, it will not crash.

 It is better to preequilibrate in NVT beforehand. Cyclopropylchloride
 is probably a liquid at 290K, if the model is parametrized reasonably.
 So it should not phase-separate.

 Vitaly


 On Wed, Oct 23, 2013 at 11:29 AM, Mark Abraham mark.j.abra...@gmail.com
 wrote:
  On Oct 23, 2013 5:34 AM, Nilesh Dhumal ndhu...@andrew.cmu.edu wrote:
 
  Hello,
 
  I am running a NPT simulation for cyclopropylchloride(1) in
  50%water(100)+50%ethanol(100) using opls force field parameter .
 
  After equilibration box size increases from 20 A to 70 A.
 
  Really? Seems wildly unlikely to have occurred without crashing. Over
 what
  time span? How did you observe before and after? What densities do you
  measure?
 
  Mark
 

Re: [gmx-users] a new GROMACS simulation tool

2013-10-23 Thread Mark Abraham
Hi,

Sounds very interesting. Can I have a test account, please?

The Lindahl group has some related work going on at
http://copernicus-computing.org/, automating large-scale simulation
workflows. I'm not sure yet whether we have any synergies! :-)

Cheers,

Mark


On Tue, Oct 22, 2013 at 4:34 PM, Kevin Chen fch6...@gmail.com wrote:

 Hi Everyone,

 I'm writing to let you guys know that we have developed a web-based MD
 simulation tool for GROMACS.  It is a software package primarily developed
 for biological MD and offers a huge number of possible options and settings
 for tailoring simulations. Seamlessly integrated with newly developed GUI
 interfaces, the tool provides comprehensive setup, simulation, analysis
 and job submission tools. Most importantly, unlike other GROMACS GUI
 applications, users can actually run real simulations using the dedicated
 HPC resources. That being said, there is no proposal or installation
 required.  This tool could be a great fit for both teaching and research
 projects. Users inexperienced in MD can work along prepared workflows, while
 experts may enjoy a significant relief from the tedium of typing and
 scripting. For now, we'd like to invite people to participate in user
 testing of this newly developed tool. Let me know if you'd like to try it
 out. We will set up an account for you.

 Best Regards,

 Kevin Chen, Ph.D.
 Information Technology at Purdue (ITaP)
 West Lafayette, IN 47907-2108



Re: [gmx-users] pbc problem

2013-10-23 Thread Mark Abraham
Center on a particular lipid? Or head group?
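For example (a sketch; the group choice is an assumption):

make_ndx -f npt.gro -o index.ndx   (build a group, e.g. the phosphorus atoms of one leaflet)
trjconv -s npt.tpr -f npt.xtc -n index.ndx -pbc mol -center -o centered.xtc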

Mark
On Oct 23, 2013 6:13 PM, shahab shariati shahab.shari...@gmail.com
wrote:

 Dear gromacs users

 My system contains DOPC + CHOLESTEROL + WATER + drug molecules in a
 rectangular box.

 I put the drug molecule in 2 positions: a) drug in the center of the bilayer
 membrane, b) drug among the water molecules in the top leaflet.

 For both positions, I did energy minimization successfully with the following
 mdp file.

 --
 ; Parameters describing what to do, when to stop and what to save
 integrator= steep; Algorithm (steep = steepest descent
 minimization)
 emtol= 1000.0  ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
 emstep  = 0.01  ; Energy step size
 nsteps= 5  ; Maximum number of (minimization) steps to
 perform

 ; Parameters describing how to find the neighbors of each atom
 nstlist= 1; Frequency to update the neighbor list and
 long range forces
 ns_type= grid; Method to determine neighbor list (simple,
 grid)
 rlist= 1.2; Cut-off for making neighbor list (short range
 forces)
 coulombtype= PME; Treatment of long range electrostatic
 interactions
 rcoulomb= 1.2; Short-range electrostatic cut-off
 rvdw= 1.2; Short-range Van der Waals cut-off
 pbc= xyz ; Periodic Boundary Conditions

 ---
 After energy minimization, I viewed the obtained file (em.gro) in VMD. All
 things were fine and intact.

 For both positions, I did equilibration in the NPT ensemble with the following
 mdp file.

 ---
 ; Run parameters
 integrator= md; leap-frog integrator
 nsteps= 25; 2 * 50 = 1000 ps (1 ns)
 dt= 0.002; 2 fs
 ; Output control
 nstxout= 100; save coordinates every 0.2 ps
 nstvout= 100; save velocities every 0.2 ps
 nstxtcout   = 100; xtc compressed trajectory output every 2 ps
 nstenergy= 100; save energies every 0.2 ps
 nstlog= 100; update log file every 0.2 ps
 energygrps  = CHOL DOPC drg SOL
 ; Bond parameters
 continuation= no; Restarting after NVT
 constraint_algorithm = lincs; holonomic constraints
 constraints= all-bonds; all bonds (even heavy atom-H bonds)
 constrained
 lincs_iter= 1; accuracy of LINCS
 lincs_order= 4; also related to accuracy
 ; Neighborsearching
 ns_type= grid; search neighboring grid cels
 nstlist= 5; 10 fs
 rlist= 1.0; short-range neighborlist cutoff (in nm)
 rcoulomb= 1.0; short-range electrostatic cutoff (in nm)
 rvdw= 1.0; short-range van der Waals cutoff (in nm)
 ; Electrostatics
 coulombtype= PME; Particle Mesh Ewald for long-range
 electrostatics
 pme_order= 4; cubic interpolation
 fourierspacing= 0.16; grid spacing for FFT
 ; Temperature coupling is on
 tcoupl= V-rescale; More accurate thermostat
 tc-grps= CHOL_DOPC drg SOL; three coupling groups - more accurate
 tau_t= 0.5 0.5 0.5   ; time constant, in ps
 ref_t= 323 323 323 ; reference temperature, one for each group, in K
 ; Pressure coupling is on
 pcoupl= Parrinello-Rahman; Pressure coupling on in NPT
 pcoupltype= semiisotropic; uniform scaling of x-y box vectors, independent z
 tau_p= 5.0; time constant, in ps
 ref_p= 1.0 1.0; reference pressure, x-y, z (in bar)
 compressibility = 4.5e-5 4.5e-5; isothermal compressibility, bar^-1
 ; Periodic boundary conditions
 pbc= xyz; 3-D PBC
 ; Dispersion correction
 DispCorr= EnerPres; account for cut-off vdW scheme
 ; Velocity generation
 gen_vel= yes; assign velocities from Maxwell distribution
 gen_temp= 323; temperature for Maxwell distribution
 gen_seed= -1; generate a random seed
 ; COM motion removal
 ; These options remove motion of the protein/bilayer relative to the
 solvent/ions
 nstcomm = 1
 comm-mode   = Linear
 comm-grps   = CHOL_DOPC_drg  SOL
 ; Scale COM of reference coordinates
 refcoord_scaling = com


 ---
 For 2 positions, I checked temperature and pressure fluctuations and box
 dimensions during equilibration. All things were good. When I viewed the
 trajectory in VMD (npt.gro and npt.xtc), I had a pbc problem (some atoms
 leave the box and enter the box in the opposite direction).

 For 

Re: [gmx-users] regarding charge group

2013-10-22 Thread Mark Abraham
Probably, make your broken molecules whole before passing them to grompp.
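For example (a sketch, assuming a .tpr from an earlier run is available to
provide the connectivity):

trjconv -s prev.tpr -f conf.gro -o conf_whole.gro -pbc whole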

Mark


On Tue, Oct 22, 2013 at 8:26 AM, Sathish Kumar sathishk...@gmail.com wrote:

 I am getting the error "The sum of the two largest charge group radii
 (13.336) is larger than rlist (1.2) - rvdw/rcoulomb" while running
 membrane simulations. Please can anyone suggest how to rectify this error?
 --
 regards
 M.SathishKumar


Re: [gmx-users] Ligand breaking in to two

2013-10-21 Thread Mark Abraham
Sounds like issues with
http://www.gromacs.org/Documentation/Terminology/Periodic_Boundary_Conditions,
strategies for coping found there.

Mark


On Mon, Oct 21, 2013 at 9:31 AM, MUSYOKA THOMMAS 
mutemibiochemis...@gmail.com wrote:

 Dear Users,
 I am doing protein-ligand MD simulations. I first prepare the ligand by
 adding hydrogen atoms and setting the charges using UCSF Chimera. I
 thereafter use acpype to get the ligand's gro, itp and top files. Finally, I
 process the protein PDB file and perform MD simulations. However, when I
 combine the ligand and protein gro files and convert the resulting complex
 to a PDB file so as to visualise it with VMD, the ligand always appears to be
 broken into two parts.

 Any advice on how to overcome this?

 Thanks


Re: [gmx-users] Insertion of chromium III ion into lipid bilayer

2013-10-19 Thread Mark Abraham
First, can you successfully add an ion that the force field already knows
about, like potassium? Second, does the force field know about chromium? If
not, who does?
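Note also that the genion line quoted below never actually requests any
ions: without -np (or -nn, or -neutral) there is nothing to add. A sketch,
assuming four ions are wanted:

genion -s ions.tpr -o dppc_solv_ions.gro -p topol.top -pname CR -pq 3 -np 4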

Mark


On Sat, Oct 19, 2013 at 4:27 PM, Sathya bti027.2...@gmail.com wrote:

 Hi,

 I want to add chromium(III) ions into a lipid bilayer.  I have included a cr
 entry in the ions.itp file, and when I used grompp it showed an error like
 "Atom type cr+3 is not found". After removing the cr entry from the ions.itp
 file it works, and then to add Cr3+ ions into the lipid with genion the
 following command was used:

  genion -s ions.tpr -o dppc_solv_ions.gro -p topol.top
 -pname CR -pq 3

 But it shows "No ions to add and no potential to calculate."
 Is it necessary to include a chromium entry in the ions.itp file?  What file
 should I modify to add Cr into the lipid?
 Please explain how to solve this.

 Thanks



Re: [gmx-users] Problem with reading AMBER trajectories

2013-10-18 Thread Mark Abraham
Can this file be opened in VMD itself?

Mark
On Oct 18, 2013 6:21 AM, anu chandra anu80...@gmail.com wrote:

 Dear Gromacs users,

 I am trying to use Gromacs to read AMBER trajectories (mdcrd) for doing a few
 analyses. Unfortunately I ended up with the following error.

 
 GROMACS will now assume it to be a trajectory and will try to open it using
 the VMD plug-ins.
 This will only work in case the VMD plugins are found and it is a
 trajectory format supported by VMD.

 Using VMD plugin: crd (AMBER Coordinates)

 Format of file md.crd does not record number of atoms.

 ---
 Program g_covar, VERSION 4.6.1
 Source code file: /usr/local/gromacs-4.6.1/src/gmxlib/trxio.c, line: 1035

 Fatal error:
 Not supported in read_first_frame: md.crd
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---
 



 While browsing through the GROMACS mailing list, I came to know that it might
 be a problem with DLOPEN libraries. So I recompiled Gromacs with cmake
 using the following command:

 
 CMAKE_PREFIX_PATH=/usr/include/libltdl cmake
 -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs -DCMAKE_C_COMPILER=gcc
 -DCMAKE_CXX_COMPILER=g++ -DFFTWF_LIBRARY=/usr/lib/libfftw3f.a
 -DFFTWF_INCLUDE_DIR=/usr/lib/ ../
 

 But, the same problem came-up again. Can anyone help me to figure out what
 went wrong with my Gromacs installation?

 Many thanks in advance.

 Regards
 Anu


Re: [gmx-users] Problem with reading AMBER trajectories

2013-10-18 Thread Mark Abraham
OK. All GROMACS does is feed your filename extension to the VMD library and
let it choose how to read the file based on that. If that doesn't make
sense (and it seems it doesn't, because GROMACS wasn't told about the
number of atoms, and it needs to know), then the ball is back to you to
choose the filename extension in the way the plugin needs. I suggest you
check out http://www.ks.uiuc.edu/Research/vmd/plugins/molfile/ and try some
alternatives.
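Given that 'AMBER coordinate with periodic box' works inside VMD, the
matching molfile plugin name is, I believe, crdbox, so renaming the file may
be all that is needed (an untested sketch):

mv md.crd md.crdbox
g_covar -f md.crdbox -s topol.tpr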

Mark


On Fri, Oct 18, 2013 at 2:10 PM, anu chandra anu80...@gmail.com wrote:

 Hi Mark,

 Yes, I can load the trajectories successfully in VMD with the
 file format option 'AMBER coordinate with periodic box'. I am using VMD
 version 1.9.

 Regards
 Anu




 On Fri, Oct 18, 2013 at 1:05 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  Can this file be opened in VMD itself?
 
  Mark


Re: [gmx-users] mistake occured in Gromacs install

2013-10-17 Thread Mark Abraham
You do need a C compiler, not a Fortran one, and IIRC gcc 4.6.2 has some
known issues. Please follow the instructions in the install guide and get
the latest compiler you can.
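Until then, falling back to SSE2 should at least configure cleanly (a
sketch, reusing the options from the post below):

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_CPU_ACCELERATION=SSE2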

Mark
On Oct 17, 2013 8:30 AM, 张海平 21620101152...@stu.xmu.edu.cn wrote:

 Dear professor:
   When I installed the Gromacs software, a problem occurred as
 follows (my computer is 64-bit Linux; gcc is GNU Fortran (GCC) 4.6.2):


 [ZHP@console build]$  cmake .. -DGMX_BUILD_OWN_FFTW=ON
 -- No compatible CUDA toolkit found (v3.2+), disabling native GPU
 acceleration
 CMake Warning at CMakeLists.txt:744 (message):
   No C SSE4.1 flag found.  Consider a newer compiler, or use SSE2 for
   slightly lower performance


 CMake Error at CMakeLists.txt:767 (message):
   Cannot find smmintrin.h, which is required for SSE4.1 intrinsics support.


 -- Configuring incomplete, errors occurred!
 
 I don't know how to solve it. I hope to hear from you soon.

 Best regards
 Haiping Zhang


Re: [gmx-users] default -rdd with distance restraints seems too large

2013-10-17 Thread Mark Abraham
Hi,

The log file gives a breakdown of how the minimum cell size was computed.
What does it say?
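For instance (a sketch; the exact wording is version-dependent):

grep -B2 -A8 "Initializing Domain Decomposition" md.log
grep -i "minimum cell size" md.log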

Mark
On Oct 17, 2013 5:17 AM, Christopher Neale chris.ne...@mail.utoronto.ca
wrote:

 I have a system that also uses a set of distance restraints

 The box size is:
7.12792   7.12792  10.25212

 When running mdrun -nt 8, I get:

 Fatal error:
 There is no domain decomposition for 8 nodes that is compatible with the
 given box and a minimum cell size of 3.62419 nm

 However, the largest restrained distance is 2.0 nm and the largest
 displacement between restrained atoms is 2.63577 nm

 So why does mdrun set -rdd to 3.62419 nm ?

 If I run mdrun -rdd 2.8 everything works fine.

 Thank you,
 Chris.



Re: [gmx-users] There is no domain decomposition for 16 nodes that is compatible with the given box and a minimum cell size of 0.826223 nm

2013-10-17 Thread Mark Abraham
4.5 can only handle about 500-1000 atoms per processor. Details vary.
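As a rough worked example for the 3328-atom system below: 3328 / 500 is
about 6-7 cores at best, so 16-24 processors is far more than such a small
system can use; something like mdrun -nt 4 would be a safer starting point.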

Mark
On Oct 17, 2013 5:39 AM, Nilesh Dhumal ndhu...@andrew.cmu.edu wrote:

 Thanks for you reply.

 I am doing simulation for ionic liquids BMIM + CL. Total number of atoms
 are 3328.

 Nilesh

  Assuming you're using LINCS, from the manual:
  With domain decomposition, the cell size is limited by the distance
  spanned by *lincs-order*+1 constraints.
  Assuming a default lincs-order (4), 0.82nm seems a fairly sane distance
  for
  5 bonds.
 
  Which means that you're probably using too many nodes for the size of
 your
  system.
 
  Hope that helps. If it doesn't you'll need to provide some information
  about your system.
 
  -Trayder
 
 
 
  On Thu, Oct 17, 2013 at 1:27 PM, Nilesh Dhumal
  ndhu...@andrew.cmu.edu wrote:
 
  Hello,
 
  I am getting the following error for simulation. I am using Gromacs
  VERSION 4.5.5 and running on 24 processors.
 
  Should I reduce the number of processor or the problem is in bonded
  parameters. If I use -nt 1 option. I could run the simulation.
 
  Fatal error:
  There is no domain decomposition for 16 nodes that is compatible with
  the
  given box and a minimum cell size of 0.826223 nm
  Change the number of nodes or mdrun option -rdd or -dds
  Look in the log file for details on the domain decomposition
 
 
  Nilesh
 


[gmx-users] Re: [gmx-developers] v4.6.3: cmake-stage error at CMakeLists.txt:102

2013-10-16 Thread Mark Abraham
(Redirected from gmx-developers)

The only way I can reproduce those symptoms is if I delete (or otherwise
make unreadable) various parts of src/gmxlib. You may have deleted some
files or been a different user at some point. I suggest you do a fresh
unpack of the tarball and try again.
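That is, something like (a sketch):

tar xzf gromacs-4.6.3.tar.gz
cd gromacs-4.6.3
mkdir build && cd build
cmake .. <your options>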

Mark


On Wed, Oct 16, 2013 at 7:19 AM, Nikolay Alemasov suc...@gmail.com wrote:

 Greetings,

 I am trying to compile the source code v.4.6.3 with cmake. The shell
 script is (ran from the build directory inside gromacs source root dir):

  CC=/ifs/opt/2013/intel/bin/icc
 CXX=/ifs/opt/2013/intel/bin/icpc
 CMAKE_PREFIX_PATH=/ifs/home/bionet/alemasov/libraries/fftw

 cmake .. \
 -DGMX_GPU=OFF \
 -DGMX_CPU_ACCELERATION=SSE2 \
 -DFFTWF_LIBRARY='/ifs/home/bionet/alemasov/libraries/fftw/lib/libfftw3f.so' \
 -DFFTWF_INCLUDE_DIR='/ifs/home/bionet/alemasov/libraries/fftw/include' \
 -DCMAKE_INSTALL_PREFIX='/ifs/home/bionet/alemasov/libraries/gromacs'


 And get a message (successful part was cut):
 ...

 -- Performing Test HAVE_DLOPEN
 -- Performing Test HAVE_DLOPEN - Success
 -- Checking for dlopen - found
 -- Found the ability to use plug-ins when building shared libaries, so
 will compile to use plug-ins (e.g. to read VMD-supported file formats).
 -- Checking for suitable VMD version
 -- VMD plugins not found. Path to VMD can be set with VMDDIR.
 CMake Error at src/gmxlib/CMakeLists.txt:102 (list):
   list sub-command REMOVE_ITEM requires two or more arguments.


 CMake Error at src/gmxlib/CMakeLists.txt:105 (list):
   list sub-command REMOVE_ITEM requires two or more arguments.


 You have called ADD_LIBRARY for library md without any source files. This
 typically indicates a problem with your CMakeLists.txt file
 -- Configuring incomplete, errors occurred!
 See also /ifs/home/bionet/alemasov/libraries/gromacs-4.6.3/build/CMakeFiles/CMakeOutput.log.
 See also /ifs/home/bionet/alemasov/libraries/gromacs-4.6.3/build/CMakeFiles/CMakeError.log.


 Below is the content of src/gmxlib/CMakeLists.txt:99-105:

 99 : # Files called xxx_test.c are test drivers with a main() function
 for module xxx.c,
 100: # so they should not be included in the library
 101: file(GLOB_RECURSE NOT_GMXLIB_SOURCES *_test.c *\#*)
 102: list(REMOVE_ITEM GMXLIB_SOURCES ${NOT_GMXLIB_SOURCES})
 103: # Selection has test_ instead of _test.
 104: file(GLOB SELECTION_TEST selection/test*)
 105: list(REMOVE_ITEM GMXLIB_SOURCES ${SELECTION_TEST})


 The target system:

  Linux nks-g6.sscc.ru 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009
 x86_64 x86_64 x86_64 GNU/Linux
 Red Hat Enterprise Linux Server release 5.4 (Tikanga)
 model name: Intel(R) Xeon(R) CPU   X5560  @ 2.80GHz


 Please help me to sort out the issue. In fact my primary aim was to build
 the GPU version of GROMACS, but I cannot do it even for the simplest
 variant.


Re: [gmx-users] Re: [gmx-developers] v4.6.3: cmake-stage error at CMakeLists.txt:102

2013-10-16 Thread Mark Abraham
On Wed, Oct 16, 2013 at 12:27 PM, Nikolay Alemasov suc...@gmail.com wrote:

 Thank you, Mark!

 I already tried that, i.e. a fresh unpack followed by another cmake run.
 As for your first thought concerning a loss of access to some parts of
 gmxlib:

 [alemasov@nks-g6 gromacs-4.6.3]$ ls -l ./src/gmxlib/ | grep -e - | cut
 -d' ' -f 1 | sort -n | uniq
 drwxr-x---
 -rw-r-


 So there are only two permission patterns, both of which allow me to
 read/write items in the directory. I am a little bit confused. Are there any
 limitations regarding OS or cmake versions? Mine is cmake version
 2.8.12.


Also relevant are the owners, if you have unpacked as root and built as
normal user, or vice-versa, etc. You should be doing nothing with root
until you need to install, of course, and since you are installing to user
space, you definitely should not be root.

CMake has not yet updated their compatibility matrix for 2.8.12 (
http://www.cmake.org/Wiki/CMake_Version_Compatibility_Matrix/Commands) and
as you can see there, things that used to work occasionally stop working.
If you can try a different version of CMake we can rule out bugs in CMake
2.8.12.

Mark



Re: [gmx-users] jwe1050i + jwe0019i errors = SIGSEGV (Fujitsu)

2013-10-15 Thread Mark Abraham
On Thu, Oct 10, 2013 at 2:34 PM, James jamesresearch...@gmail.com wrote:

 Dear Mark,

 Thanks again for your response.

 Many of the regression tests seem to have passed:

 All 16 simple tests PASSED
 All 19 complex tests PASSED
 All 142 kernel tests PASSED
 All 9 freeenergy tests PASSED
 All 0 extra tests PASSED
 Error not all 42 pdb2gmx tests have been done successfully
 Only 0 energies in the log file
 pdb2gmx tests FAILED

 I'm not sure why pdb2gmx failed but I suppose it will not impact the
 crashing I'm experiencing.


No, that's fine. Probably they don't have sufficiently explicit guards to
stop people running the energy minimization with a more-than-useful number
of OpenMP threads.


 Regarding the stack trace showing line numbers, what is the best way to go
 about this, in this context? I'm not really experienced in that aspect.


That's a matter of compiling in debug mode (use cmake ..
-DCMAKE_BUILD_TYPE=Debug), and hopefully observing the same crash with an
error message that has more useful information. The debug mode annotates
the executable so that a finger can be pointed at the code line that caused
the segfault. Hopefully the compiler does this properly, but support for
this in OpenMP is a corner compiler writers might cut ;-) Depending on the
details, loading a core dump in a debugger can also be necessary, but your
local sysadmins are the people to talk to there.
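The core-dump route is the usual sketch (details are site-specific):

gdb `which mdrun_mpi_d` core   # then type 'bt' for a backtrace with line numbers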

Mark

Thanks again for your help!

 Best regards,

 James


 On 21 September 2013 23:12, Mark Abraham mark.j.abra...@gmail.com wrote:

  On Sat, Sep 21, 2013 at 2:45 PM, James jamesresearch...@gmail.com
 wrote:
   Dear Mark and the rest of the Gromacs team,
  
   Thanks a lot for your response. I have been trying to isolate the
 problem
   and have also been in discussion with the support staff. They suggested
  it
   may be a bug in the gromacs code, and I have tried to isolate the
 problem
   more precisely.
 
  First, do the GROMACS regression tests for Verlet kernels pass? (Run
  them all, but those with nbnxn prefix are of interest here.) They
  likely won't scale to 16 OMP threads, but you can vary OMP_NUM_THREADS
  environment variable to see what you can see.
 
   Considering that the calculation is run under MPI with 16 OpenMP cores
  per
   MPI node, the error seems to occur under the following conditions:
  
   A few thousand atoms: 1 or 2 MPI nodes: OK
   Double the number of atoms (~15,000): 1 MPI node: OK, 2 MPI nodes:
  SIGSEGV
   error described below.
  
   So it seems that the error occurs for relatively large systems which
 use
   MPI.
 
  ~500 atoms per core (thread) is a system in the normal GROMACS scaling
  regime. 16 OMP threads is more than is useful on other HPC systems,
  but since we don't know what your hardware is, whether you are
  investigating something useful is your decision.
 
   The crash mentions the calc_cell_indices function (see below). Is
 this
   somehow a problem with memory not being sufficient at the MPI interface
  at
   this function? I'm not sure how to proceed further. Any help would be
   greatly appreciated.
 
  If there is a problem with GROMACS (which so far I doubt), we'd need a
  stack trace that shows a line number (rather than addresses) in order
  to start to locate it.
 
  Mark
 
   Gromacs version is 4.6.3.
  
   Thank you very much for your time.
  
   James
  
  
   On 4 September 2013 16:05, Mark Abraham mark.j.abra...@gmail.com
  wrote:
  
   On Sep 4, 2013 7:59 AM, James jamesresearch...@gmail.com wrote:
   
Dear all,
   
I'm trying to run Gromacs on a Fujitsu supercomputer but the
 software
  is
crashing.
   
I run grompp:
   
grompp_mpi_d -f parameters.mdp -c system.pdb -p overthe.top
   
and it produces the error:
   
jwe1050i-w The hardware barrier couldn't be used and continues
  processing
using the software barrier.
taken to (standard) corrective action, execution continuing.
error summary (Fortran)
error number error level error count
jwe1050i w 1
total error count = 1
   
but still outputs topol.tpr so I can continue.
  
   There's no value in compiling grompp with MPI or in double precision.
  
I then run with
   
export FLIB_FASTOMP=FALSE
source /home/username/Gromacs463/bin/GMXRC.bash
mpiexec mdrun_mpi_d -ntomp 16 -v
   
but it crashes:
   
starting mdrun 'testrun'
5 steps, 100.0 ps.
jwe0019i-u The program was terminated abnormally with signal number
   SIGSEGV.
signal identifier = SEGV_MAPERR, address not mapped to object
error occurs at calc_cell_indices._OMP_1 loc 00233474 offset
03b4
calc_cell_indices._OMP_1 at loc 002330c0 called from loc
02088fa0 in start_thread
start_thread at loc 02088e4c called from loc
 029d19b4
  in
__thread_start
__thread_start at loc 029d1988 called from o.s.
error summary (Fortran)
error number error level error count
jwe0019i

Re: [gmx-users] problem in NPT equilibration step

2013-10-14 Thread Mark Abraham
http://www.gromacs.org/Documentation/Terminology/Pressure_Coupling and
http://www.gromacs.org/Documentation/Terminology/Pressure are useful here.
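A quick way to see the fluctuation scale and a block-averaged mean (a
sketch; select Pressure at the prompt, and skip the early non-equilibrium
part with -b):

g_energy -f npt.edr -b 100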

Mark


On Mon, Oct 14, 2013 at 4:32 PM, srinathchowdary
srinathchowd...@gmail.com wrote:

 The barostat tries to equilibrate the system at the desired pressure; there
 will be fluctuations, and these fluctuations are a little higher for
 Parrinello-Rahman if started far away from the equilibrium value. I would
 suggest starting with Berendsen and then switching to P-R. Also, you should
 run a little longer for the system to reach equilibrium.
 regards
 sri


 On Mon, Oct 14, 2013 at 9:13 AM, Preeti Choudhary 
 preetichoudhary18111...@gmail.com wrote:

  Dear Gromacs user,
 
  I am trying to simulate a protein (NMR structure). I have successfully done
  the energy minimisation step. Also, I have equilibrated the system at 298 K
  (which is achieved from a 100 ps run). Now, I am trying to equilibrate the
  system at 1 bar pressure. After a run of 100 ps, I am getting an average
  pressure of 4.9 bar. Then I extended this simulation by 50 ps (150 ps total
  from the start), and the av. pressure dropped to 1.5 bar. Then I again
  extended the simulation by a further 50 ps (200 ps total), and the pressure
  rose to 2.14 bar. Again, I extended the simulation by a further 50 ps (250
  ps total), and the pressure rose to 3.56 bar. Similarly, the av. pressure is
  2.98 bar, 2.85 bar and 2.41 bar for 300 ps, 350 ps and 400 ps. I am not able
  to equilibrate the system at 1 bar pressure. What should be done in these
  cases?
 
  I am using the OPLS-AA force field and the TIP4P water model. For pressure
  coupling I am using the following parameters:
  ; Pressure coupling is on
  pcoupl           = Parrinello-Rahman   ; pressure coupling on in NPT
  pcoupltype       = isotropic           ; uniform scaling of box vectors
  tau_p            = 2.0                 ; time constant, in ps
  ref_p            = 1.0                 ; reference pressure, in bar
  compressibility  = 4.5e-5              ; isothermal compressibility of water, bar^-1
  refcoord_scaling = com
 
  note: the initial pressure at the beginning of the NPT simulation (i.e. at
  the end of the NVT simulation) is -311.41 bar
 



 --
 V.Srinath Chowdary


Re: [gmx-users] recalculating .trr from .xtc

2013-10-14 Thread Mark Abraham
Also, the precision was selected when the .xtc file was written, i.e. in the
.mdp file.
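Concretely, the relevant options are along the lines of

nstxtcout     = 5000
xtc_precision = 1000

(a sketch; 1000 stores coordinates to three decimal places, ~0.001 nm). A
rerun cannot restore what was discarded when the .xtc was written.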

Mark
On Oct 15, 2013 3:24 AM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/14/13 7:56 PM, Leandro Bortot wrote:

 Dear GROMACS users,

   Does anyone know how significant the difference is between the
 original .trr file from a simulation and a .trr recalculated from a
 whole-system .xtc (mdrun -rerun traj.xtc -o traj.trr)?
   I mean... do you know how big the error induced by this
 recalculation procedure would be?

   I'm not interested in calculating autocorrelation functions. Most of
 my analyses are related to the atom positions over time and free energy
 calculations.


 Position-related quantities should be impacted very little.  Given that
 you can't regain the lost precision though, I see no point in even generating a
 .trr file - the .xtc has the same information while occupying less disk
 space.

 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalemkul@outerbanks.umaryland.edu | (410) 706-7441

 ==


Re: [gmx-users] energy drift - comparison of double and single precision

2013-10-13 Thread Mark Abraham
On Sat, Oct 12, 2013 at 11:07 PM, Guillaume Chevrot 
guillaume.chev...@gmail.com wrote:

 2013/10/12 Mark Abraham mark.j.abra...@gmail.com

  Didn't see any problem in the .mdp. -4500 kJ/mol in 10ns over (guessing)
  30K atoms is 0.015 kJ/mol/ns/atom. k_B T is at least 100 times larger
 than
  that. I bet the rest of the lysozyme model physics is not accurate to
 less
  than 1% ;-) There are some comparative numbers at
  http://dx.doi.org/10.1016/j.cpc.2013.06.003 - the two systems are rather
  different but they share the use of SETTLE.
 
 
 Do you suggest that SETTLE is the cause of the drift?


Seems likely to me, but I would certainly try to compare apples with apples
before reaching that conclusion!

 (note: if I am not mistaken, my drift would be 0.18 [k_B T / ns / atom],
 quite close to the figures shown in their figure 2.)


Your log file has 51000 atoms (most of which are presumably water), so
4500/10/51000 is 0.0088 kJ/mol/ns/atom.

 In their Figure 2, they show a drift for single and double precision, and
 it is not the case for my double precision simulation, so maybe SETTLE is
 not the cause of my trouble?


There are many differences in the simulations (you have protein, Fig 2 uses
2fs time steps, PME settings are different), so there is not yet any basis
for assigning the reason for differences in drift.





  Note that using md-vv guarantees the 2007 paper is inapplicable, because
  GROMACS did not have a velocity Verlet integrator back then. Sharing the
 

 If I remember correctly, their demonstration held whatever the integrator.
 Nevertheless, I also tested the leap-frog integrator, and I observe the
 same drift in energy.
 So maybe their explanation is still applicable.


The authors of that paper show Desmond's drift with RATTLE (an iterative
solver), not SETTLE (a constant-time analytical solver). Desmond's drift
with SETTLE would have been interesting to see. A cost/benefit analysis of
simulation wall-clock time vs errors in the simulation observables for the
different solvers would also be interesting.

Projecting the total drift from my estimate above back onto Fig 1 of their
paper is instructive ;-)


  .log files might be informative.
 
 
 Here is the link where you can find the log file:
 http://dx.doi.org/10.6084/m9.figshare.821211


The compiler traveled on the Ark, and the binary was compiled for a machine
less capable than the SSE4.1 machine you ran it on. Perhaps the compiler is
correct (there are certainly known bugs in *later* gcc minor releases; get
the latest), but even if the compiler is correct, you will probably observe
things go faster if you fix those ;-)

Mark



 Thanks for your comments!

 Guillaume



  Mark
 
 
  On Fri, Oct 11, 2013 at 11:38 PM, Guillaume Chevrot 
  guillaume.chev...@gmail.com wrote:
 
   Hi,
  
  sorry for my last post! I have re-written my e-mail (with some additional
   information) and I provide the links to my files ;-)
  
   I compared the total energy of 2 simulations:
   lysozyme in water / NVE ensemble / single precision / Gromacs 4.6.3
   lysozyme in water / NVE ensemble / double precision / Gromacs 4.6.3
  
   ... and what I found was quite ... disturbing (see the plots of the
 total
   energy: http://dx.doi.org/10.6084/m9.figshare.820153). I observe a
   constant
   drift in energy in the case of the single precision simulation.
  
   Did I do something wrong*? Any remarks are welcomed! Here is the link
 to
   the ‘mdout.mdp’ file (http://dx.doi.org/10.6084/m9.figshare.820154) so
  you
   can check what mdp options I used.
  
   My second question is: if I did not do something wrong, what are the
   consequences on the simulation? Can I trust the results of single
  precision
   simulations?
  
   Regards,
  
   Guillaume
  
   *PS: I am not the only one encountering this behavior. In the
 literature,
   this problem has already been mentioned:
   http://jcp.aip.org/resource/1/jcpsa6/v126/i4/p046101_s1
  
  
  
  
   2013/10/11 Mark Abraham mark.j.abra...@gmail.com
  
On Oct 11, 2013 7:59 PM, Guillaume Chevrot 
   guillaume.chev...@gmail.com
wrote:

 Hi all,

 I recently compared the total energy of 2 simulations:
 lysozyme in water / NVE ensemble / single precision
 lysozyme in water / NVE ensemble / double precision

 ... and what I found was quite ... disturbing (see the attached
  figure
   -
 plots of the total energy). I observe a constant drift in energy in
  the
 case of the single precision simulation.

 Did I do something wrong*? Any remarks are welcomed! I join the
‘mdout.mdp’
 file so you can check what mdp options I used.
   
Maybe. Unfortunately we cannot configure the mailing list to allow
  people
to send attachments to thousands of people, so you will need to do
something like provide links to files on a sharing service.
   

 My second question is: if I did not do something wrong, what are
 the
 consequences on the simulation? Can

Re: [gmx-users] DSSP installation on Ubuntu 12.10

2013-10-11 Thread Mark Abraham
Maintaining your login scripts is a basic UNIX issue, not a GROMACS issue.
Google knows a lot more about it than anybody here ;-)
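For the common case of bash, a single line like

echo 'source /usr/local/gromacs/bin/GMXRC' >> ~/.bashrc

does it, assuming the default install prefix; adjust the path and the startup
file to your shell.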

Mark


On Fri, Oct 11, 2013 at 2:57 PM, Mass masstransfer_2...@yahoo.com wrote:

 Dear Gromacs user,
 Can anyone tell me how  to arrange for my login scripts to source gromacs
 automatically? Justin just point that to me and in Gromacs website it is
 written search the web for that, anyone know how to do that?
 Thanks



 On Saturday, October 12, 2013 1:12 AM, Mass masstransfer_2...@yahoo.com
 wrote:

 Hi Justin,
 Sorry for the mistake,
 I typed in terminal
 do_dssp -f bLac_orig_md2.trr -s bLac_orig_md2.tpr -sc
 Secondary_Structure_analysis_original_dss.xvg -ssdump


 and got the following error,

 Program do_dssp, VERSION 4.6.3
 Source code file: /home/mass/gromacs-4.6.3/src/gmxlib/gmxfio.c, line: 524

 Can not open file:
 bLac_orig_md2.trr
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors

 I can see the file bLac_orig_md2.trr in the directory

 any comments?



 On Saturday, October 12, 2013 12:55 AM, Justin Lemkul jalem...@vt.edu
 wrote:



 On 10/11/13 1:34 AM, Mass wrote:
  Dear Mark,
  Thanks for your comments. I uninstalled my previous Gromacs version (I just
 removed it from the Ubuntu software centre) and followed the quick and dirty
 installation on the Gromacs website
 
  tar xfz gromacs-4.6.3.tar.gz
  cd gromacs-4.6.3
  mkdir build
  cd build
  cmake .. -DGMX_BUILD_OWN_FFTW=ON
  make
  sudo make install
  source /usr/local/gromacs/bin/GMXRC
  I have one question here: why, when I run mdrun in my home directory, does
 it tell me that Gromacs is not installed, but after I source GMXRC again and
 go back to my home directory, mdrun shows Gromacs version 4.6.3? Any comments
 on this? How can I call gromacs without sourcing every time?
 

 Configure your login scripts to do it for you.


  secondly when I do do_dssp
 
  do_dssp -f bLac_orig_md2.trr -s bLac_orig_md2.tpr -sc
 Secondary_Structure_analysis_original_dss.xvg -ssdump
 
 
 
  I am getting following error
 
  Program mdrun, VERSION 4.6.3
  Source code file: /home/mass/gromacs-4.6.3/src/gmxlib/gmxfio.c, line: 524
 
  Can not open file:
  topol.tpr
  For more information and tips for troubleshooting, please check the
 GROMACS
  website at http://www.gromacs.org/Documentation/Errors
 

 Whatever you typed above is not what you typed in the terminal (always
 copy and
 paste!), because do_dssp is looking for topol.tpr, which is the default
 name for
 -s.  If you do not specify a particular required input, all Gromacs
 programs
 look for default names.

 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441


 ==


Re: [gmx-users] energy drift - comparison of double and single precision

2013-10-11 Thread Mark Abraham
On Oct 11, 2013 7:59 PM, Guillaume Chevrot guillaume.chev...@gmail.com
wrote:

 Hi all,

 I recently compared the total energy of 2 simulations:
 lysozyme in water / NVE ensemble / single precision
 lysozyme in water / NVE ensemble / double precision

 ... and what I found was quite ... disturbing (see the attached figure -
 plots of the total energy). I observe a constant drift in energy in the
 case of the single precision simulation.

 Did I do something wrong*? Any remarks are welcomed! I join the
‘mdout.mdp’
 file so you can check what mdp options I used.

Maybe. Unfortunately we cannot configure the mailing list to allow people
to send attachments to thousands of people, so you will need to do
something like provide links to files on a sharing service.


 My second question is: if I did not do something wrong, what are the
 consequences on the simulation? Can I trust the results of single
precision
 simulations?

Yes, as you have no doubt read in the papers published by the GROMACS team.

 Regards,

 Guillaume

 *PS: I am not the only one encountering this behavior. In the literature,
 this problem has already been mentioned:
 http://jcp.aip.org/resource/1/jcpsa6/v126/i4/p046101_s1

... which is six years old, examining the properties of code seven years
old. Life has moved on! :-) Even if you have found a problem, it is a big
assumption that this is (still) the cause.

Mark



Re: [gmx-users] energy drift - comparison of double and single precision

2013-10-11 Thread Mark Abraham
Didn't see any problem in the .mdp. -4500 kJ/mol in 10ns over (guessing)
30K atoms is 0.015 kJ/mol/ns/atom. k_B T is at least 100 times larger than
that. I bet the rest of the lysozyme model physics is not accurate to less
than 1% ;-) There are some comparative numbers at
http://dx.doi.org/10.1016/j.cpc.2013.06.003 - the two systems are rather
different but they share the use of SETTLE.

Note that using md-vv guarantees the 2007 paper is inapplicable, because
GROMACS did not have a velocity Verlet integrator back then. Sharing the
.log files might be informative.

Mark


On Fri, Oct 11, 2013 at 11:38 PM, Guillaume Chevrot 
guillaume.chev...@gmail.com wrote:

 Hi,

 sorry for my last post! I have re-written my e-mail (with some additional
 information) and I provide the links to my files ;-)

 I compared the total energy of 2 simulations:
 lysozyme in water / NVE ensemble / single precision / Gromacs 4.6.3
 lysozyme in water / NVE ensemble / double precision / Gromacs 4.6.3

 ... and what I found was quite ... disturbing (see the plots of the total
 energy: http://dx.doi.org/10.6084/m9.figshare.820153). I observe a
 constant
 drift in energy in the case of the single precision simulation.

 Did I do something wrong*? Any remarks are welcomed! Here is the link to
 the ‘mdout.mdp’ file (http://dx.doi.org/10.6084/m9.figshare.820154) so you
 can check what mdp options I used.

 My second question is: if I did not do something wrong, what are the
 consequences on the simulation? Can I trust the results of single precision
 simulations?

 Regards,

 Guillaume

 *PS: I am not the only one encountering this behavior. In the literature,
 this problem has already been mentioned:
 http://jcp.aip.org/resource/1/jcpsa6/v126/i4/p046101_s1




 2013/10/11 Mark Abraham mark.j.abra...@gmail.com

  On Oct 11, 2013 7:59 PM, Guillaume Chevrot 
 guillaume.chev...@gmail.com
  wrote:
  
   Hi all,
  
   I recently compared the total energy of 2 simulations:
   lysozyme in water / NVE ensemble / single precision
   lysozyme in water / NVE ensemble / double precision
  
   ... and what I found was quite ... disturbing (see the attached figure
 -
   plots of the total energy). I observe a constant drift in energy in the
   case of the single precision simulation.
  
   Did I do something wrong*? Any remarks are welcomed! I join the
  ‘mdout.mdp’
   file so you can check what mdp options I used.
 
  Maybe. Unfortunately we cannot configure the mailing list to allow people
  to send attachments to thousands of people, so you will need to do
  something like provide links to files on a sharing service.
 
  
   My second question is: if I did not do something wrong, what are the
   consequences on the simulation? Can I trust the results of single
  precision
   simulations?
 
  Yes, as you have no doubt read in the papers published by the GROMACS
 team.
 
   Regards,
  
   Guillaume
  
   *PS: I am not the only one encountering this behavior. In the
 literature,
   this problem has already been mentioned:
   http://jcp.aip.org/resource/1/jcpsa6/v126/i4/p046101_s1
 
  ... which is six years old, examining the properties of code seven years
  old. Life has moved on! :-) Even if you have found a problem, it is a big
  assumption that this is (still) the cause.
 
  Mark
 


Re: [gmx-users] DSSP installation on Ubuntu 12.10

2013-10-10 Thread Mark Abraham
Hi,

Since the release of 4.5.5, DSSP totally changed its command-line
interface. So old GROMACS code cannot work with new DSSP. You need to get
the old version of DSSP to use with old GROMACS, or new GROMACS code, which
works with either DSSP version.
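(For the record, do_dssp locates the executable through the DSSP environment
variable, so once you have a suitable old-style binary you can point GROMACS
at it with something like

export DSSP=/usr/local/bin/dssp

where the path is just an example.)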

Mark


On Thu, Oct 10, 2013 at 1:37 PM, Mass masstransfer_2...@yahoo.com wrote:

 Dear Gromacs users,
 I have asked this question before, and Justin gave an answer with which I
 could solve my problem. I am using Ubuntu 12.10 and
 installed gromacs 4.5.5-2.
 this is what I have done:
 1- first I downloaded the dssp
 wget ftp://ftp.cmbi.ru.nl/pub/software/dssp/dssp-2.0.4-linux-amd64 -O ~/dssp
 2- I moved this file to /usr/local/bin

 then I ran do_dssp and was asked to select a group
 Select a group: 1
 Selected 1: 'Protein'
 There are 162 residues in your selected group
 trn version: GMX_trn_file (single precision)
 Reading frame   0 time0.000
 Back Off! I just backed up ddQ0FCUF to ./#ddQ0FCUF.1#

 after that I am getting
 Program do_dssp, VERSION 4.5.5
 Source code file: /build/buildd/gromacs-4.5.5/src/tools/do_dssp.c, line:
 572

 Fatal error:
 Failed to execute command: /usr/local/bin/dssp -na ddQ0FCUF ddzUlAvc >
 /dev/null 2> /dev/null
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 I would really appreciate it if anyone could tell me a simple, step-by-step solution
 (I am a beginner user).

 Thanks



Re: [gmx-users] CHARMM36 force field available for GROMACS

2013-10-09 Thread Mark Abraham
Great! Many thanks Justin, and the CHARMM team!

Mark


On Tue, Oct 8, 2013 at 10:16 PM, Justin Lemkul jalem...@vt.edu wrote:


 All,

 I am pleased to announce the immediate availability of the latest CHARMM36
 force field in GROMACS format.  You can obtain the archive from our lab's
 website at http://mackerell.umaryland.edu/CHARMM_ff_params.html.

 The present version contains up-to-date parameters for proteins, nucleic
 acids, lipids, some carbohydrates, CGenFF version 2b7, and a variety of
 other small molecules.  Please refer to forcefield.doc, which contains a
 list of citations that describe the parameters, as well as the CHARMM force
 field files that were used to generate the distribution.

 We have validated the parameters by comparing energies of a wide variety
 of molecules within CHARMM and GROMACS and have found excellent agreement
 between the two.  If anyone has any issues or questions, please feel free
 to post them to this list or directly to me at the email address below.

 Happy simulating!

 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalemkul@outerbanks.umaryland.edu | (410) 706-7441

 ==


Re: [gmx-users] Installing Extended Genbox

2013-10-01 Thread Mark Abraham
I would generally not try to add it to an existing source repository.
Instead, follow one of the suggestions in
http://www.gromacs.org/Developer_Zone/Git/Gerrit#How_do_I_get_a_copy_of_my_commit_for_which_someone_else_has_uploaded_a_patch.3f
to
check out that version.
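As a sketch, for change 1175 that looks like

git fetch https://gerrit.gromacs.org/gromacs refs/changes/75/1175/1
git checkout FETCH_HEAD

where the patchset number (the trailing /1) is a guess; copy the exact fetch
command from the Download box on the gerrit page.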

Mark


On Tue, Oct 1, 2013 at 6:24 AM, Tegar Nurwahyu Wijaya 
tnurwahyuwij...@gmail.com wrote:

 Dear all,

 I want to install Extended Genbox from gerrit.

 https://gerrit.gromacs.org/#/c/1175/

 How can I put this code into my existing gromacs installation?

 Thanks.

 Regards,
 Tegar


Re: [gmx-users] Installing Extended Genbox

2013-10-01 Thread Mark Abraham
Since that patch is already merged, Tegar can just check out the (default)
master branch - see http://www.gromacs.org/Developer_Zone/Git. The CMake
build works the same way. I would suggest just using
your-build-directory/bin/genbox once you have built it, i.e. do not go to
the trouble of installing the development version. You should prefer to use
the normal versions of all the tools unless you want to live on the edge!
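Roughly, following the usual out-of-source CMake build (a sketch; the clone
URL is on the Developer_Zone/Git page, and flags should suit your machine):

git clone git://git.gromacs.org/gromacs.git
cd gromacs; mkdir build; cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON
make
./bin/genbox -h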

Mark


On Tue, Oct 1, 2013 at 5:01 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 10/1/13 10:58 AM, Tegar Nurwahyu Wijaya wrote:

 Hi Mark,

 Thank you for your reply. Actually I am not trying to add it to the
 repository.

 I have gromacs 4.6 installed on my computer. When I was trying to use
 genbox, an error occurred due to insufficient memory. After searching this
 mailing list, I found the extended genbox code that can fix my problem. But
 I don't know how to put that code into my installed gromacs 4.6. Does
 anybody know how to do that?


 You need to clone the git repository (i.e. the development code) and then
 apply the patch with the instructions at the link Mark sent you.  Further
 up on the page is how you get started in terms of obtaining the development
 code from git.  It is unwise to try to apply a patch from the master branch
 on version 4.6; I doubt it would even work.

 -Justin


  Regards,
 Tegar


 On Tue, Oct 1, 2013 at 10:12 AM, Mark Abraham mark.j.abra...@gmail.com*
 *wrote:

  I would generally not try to add it to an existing source repository.
 Instead, follow one of the suggestions in

 http://www.gromacs.org/Developer_Zone/Git/Gerrit#How_do_I_get_a_copy_of_my_commit_for_which_someone_else_has_uploaded_a_patch.3f
 to
 check out that version.

 Mark


 On Tue, Oct 1, 2013 at 6:24 AM, Tegar Nurwahyu Wijaya 
 tnurwahyuwij...@gmail.com wrote:

  Dear all,

 I want to install Extended Genbox from gerrit.

 https://gerrit.gromacs.org/#/c/1175/

 How can I put this code into my existing gromacs installation?

 Thanks.

 Regards,
 Tegar


 --
 ==

 Justin A. Lemkul, Ph.D.
 Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalemkul@outerbanks.umaryland.edu | (410) 706-7441

 ==



Re: [gmx-users] Preprocessor statements

2013-09-26 Thread Mark Abraham
No, there's no way to do that. But you can monitor the output
trajectory file yourself, live.
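A crude watchdog is enough for that; as a sketch (the group numbers, file
names and the 1.0 nm threshold are all illustrative):

while sleep 300; do
  echo 2 3 | g_dist -f traj.xtc -s topol.tpr -o dist.xvg
  d=$(tail -n 1 dist.xvg | awk '{print $2}')
  awk -v d=$d 'BEGIN{ exit !(d < 1.0) }' && pkill mdrun
done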

Mark

On Thu, Sep 26, 2013 at 4:49 PM, Dr. Vitaly Chaban vvcha...@gmail.com wrote:
 Unlikely to be possible... But yeah, the feature might be handy.


 Dr. Vitaly V. Chaban


 On Thu, Sep 26, 2013 at 4:20 PM, grita cemilyi...@arcor.de wrote:
 Hi guys,

 Is it possible to specify preprocessor statements in the topol.top file, so
 that you can stop the simulation prematurely?

 I pull two molecules together and I'd like to stop the simulation if the
 center of mass distance of the molecules is less than xx nm.

 Best,
 grita

 --


Re: [gmx-users] segfault when running the alchemistry.org ethanol solvation free energy tutorial

2013-09-26 Thread Mark Abraham
I found the -multi version of that tutorial a bit temperamental...
Michael Shirts suggested that double precision is more reliable for
expanded ensemble. Hopefully he can chime in in a day or two.

Mark

On Thu, Sep 26, 2013 at 9:00 PM, Christopher Neale
chris.ne...@mail.utoronto.ca wrote:
 Dear Users:

 Has anyone successfully run the free energy tutorial at 
 http://www.alchemistry.org/wiki/GROMACS_4.6_example:_Direct_ethanol_solvation_free_energy
  ?

 I just tried it and I get a segmentation fault immediately (see output at the 
 end of this post).

 I get a segfault with both 4.6.3 and 4.6.1.

 Note that if I modify the .mdp file to set free-energy = no , then the 
 simulation runs just fine. (I have, of course, set init-lambda-state in the 
 .mdp file that I downloaded from the aforementioned site and I get a segfault 
 with any value of init-lambda-state from 0 to 8).

 gpc-f103n084-$ mdrun -nt 1 -deffnm ethanol.1 -dhdl ethanol.1.dhdl.xvg
  :-)  G  R  O  M  A  C  S  (-:

   GROup of MAchos and Cynical Suckers

 :-)  VERSION 4.6.3  (-:

 Contributions from Mark Abraham, Emile Apol, Rossen Apostolov,
Herman J.C. Berendsen, Aldert van Buuren, Pär Bjelkmar,
  Rudi van Drunen, Anton Feenstra, Gerrit Groenhof, Christoph Junghans,
 Peter Kasson, Carsten Kutzner, Per Larsson, Pieter Meulenhoff,
Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
 Michael Shirts, Alfons Sijbers, Peter Tieleman,

Berk Hess, David van der Spoel, and Erik Lindahl.

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
  Copyright (c) 2001-2012,2013, The GROMACS development team at
 Uppsala University  The Royal Institute of Technology, Sweden.
 check out http://www.gromacs.org for more information.

  This program is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public License
 as published by the Free Software Foundation; either version 2.1
  of the License, or (at your option) any later version.

 :-)  mdrun  (-:

 Option Filename  Type Description
 
   -s  ethanol.1.tpr  InputRun input file: tpr tpb tpa
   -o  ethanol.1.trr  Output   Full precision trajectory: trr trj cpt
   -x  ethanol.1.xtc  Output, Opt. Compressed trajectory (portable xdr format)
 -cpi  ethanol.1.cpt  Input, Opt.  Checkpoint file
 -cpo  ethanol.1.cpt  Output, Opt. Checkpoint file
   -c  ethanol.1.gro  Output   Structure file: gro g96 pdb etc.
   -e  ethanol.1.edr  Output   Energy file
   -g  ethanol.1.log  Output   Log file
 -dhdl ethanol.1.dhdl.xvg  Output, Opt! xvgr/xmgr file
 -field  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
 -table  ethanol.1.xvg  Input, Opt.  xvgr/xmgr file
 -tabletf  ethanol.1.xvg  Input, Opt.  xvgr/xmgr file
 -tablep  ethanol.1.xvg  Input, Opt.  xvgr/xmgr file
 -tableb  ethanol.1.xvg  Input, Opt.  xvgr/xmgr file
 -rerun  ethanol.1.xtc  Input, Opt.  Trajectory: xtc trr trj gro g96 pdb cpt
 -tpi  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
 -tpid ethanol.1.xvg  Output, Opt. xvgr/xmgr file
  -ei  ethanol.1.edi  Input, Opt.  ED sampling input
  -eo  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
   -j  ethanol.1.gct  Input, Opt.  General coupling stuff
  -jo  ethanol.1.gct  Output, Opt. General coupling stuff
 -ffout  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
 -devout  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
 -runav  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
  -px  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
  -pf  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
  -ro  ethanol.1.xvg  Output, Opt. xvgr/xmgr file
  -ra  ethanol.1.log  Output, Opt. Log file
  -rs  ethanol.1.log  Output, Opt. Log file
  -rt  ethanol.1.log  Output, Opt. Log file
 -mtx  ethanol.1.mtx  Output, Opt. Hessian matrix
  -dn  ethanol.1.ndx  Output, Opt. Index file
 -multidir ethanol.1  Input, Opt., Mult. Run directory
 -membed  ethanol.1.dat  Input, Opt.  Generic data file
  -mp  ethanol.1.top  Input, Opt.  Topology file
  -mn  ethanol.1.ndx  Input, Opt.  Index file

 Option   Type   Value   Description
 --
 -[no]h   bool   no  Print help info and quit
 -[no]version bool   no  Print version info and quit
-nice    int    0       Set the nicelevel
 -deffnm  string ethanol.1  Set the default filename for all file options
 -xvg enum   xmgrace  xvg plot formatting: xmgrace, xmgr or none
 -[no]pd  bool   no  Use particle decompostion
 -dd  vector 0 0 0   Domain decomposition grid, 0 is optimize
 -ddorder enum   interleave  DD node order: interleave, pp_pme or cartesian
-npme    int    -1      Number of separate nodes to be used for PME, -1

Re: [gmx-users] Re: grompp for minimization: note warning

2013-09-25 Thread Mark Abraham
If diff says there are no changes, then you're not comparing with the file
you changed...
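A quick sanity check on that:

md5sum old.gro new.gro

identical checksums mean the two files really are the same, and diff is right.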
On Sep 25, 2013 1:59 PM, shahab shariati shahab.shari...@gmail.com
wrote:

 Dear Mark

  The UNIX tool diff is your friend for comparing files.

 Thanks for your suggestion. I used the diff and sdiff tools
 for comparing the 2 files (before and after correction).

 diff old.gro new.gro

 These tools did not give me any output file or text
 containing the differences between the 2 files.

 In this situation, how should I find the difference between
 the 2 gro files?

 Best wishes for you


Re: [gmx-users] OPLS/AA + TIP5P, anybody?

2013-09-24 Thread Mark Abraham
You should be able to minimize with CG and TIP5P by eliminating
constraints, by making the water use a flexible molecule, e.g. define
= -DFLEXIBLE (or something). Check your water .itp file for how to do
it.
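That is, in the minimization .mdp something like

define     = -DFLEXIBLE
integrator = cg

assuming the .itp guards its constraints with an #ifdef, as the distributed
water models do.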

Mark

On Tue, Sep 24, 2013 at 10:25 PM, gigo g...@ibb.waw.pl wrote:
 Dear GMXers,
 Since I am interested in interactions of lone electron pairs of water oxygen
 within the active site of an enzyme that I work on, I decided to give TIP5P
 a shot. I use OPLSAA. I ran into trouble very quickly trying to minimize a
 freshly solvated system. I found on the gmx-users list
 (http://lists.gromacs.org/pipermail/gmx-users/2008-March/032732.html) that
 cg and constraints don't go together when TIP5P is to be used - that's OK. It
 turned out, however, that I was not able to minimize my protein even with
 steepest descent. The system minimizes with TIP4P pretty well (emtol=1.0).
 In the meantime I tried to minimize a short peptide (10 aa); that did not
 work either. What happens is that the LP of a water gets too close to the
 positively charged hydrogens (which have no VDW radius) on arginine. It
 looks like this:

 Step=  579, Dmax= 8.0e-03 nm, Epot= -1.40714e+05 Fmax= 1.20925e+04, atom=
 171
 Step=  580, Dmax= 9.6e-03 nm, Epot= -1.41193e+05 Fmax= 8.13923e+04, atom=
 171
 Step=  581, Dmax= 1.1e-02 nm, Epot= -1.43034e+05 Fmax= 1.03648e+06, atom=
 11181
 Step=  585, Dmax= 1.7e-03 nm, Epot= -1.46878e+05 Fmax= 4.23958e+06, atom=
 11181
 Step=  587, Dmax= 1.0e-03 nm, Epot= -1.49565e+05 Fmax= 9.43285e+06, atom=
 11181
 Step=  589, Dmax= 6.2e-04 nm, Epot= -1.59042e+05 Fmax= 3.55920e+07, atom=
 11181
 Step=  591, Dmax= 3.7e-04 nm, Epot= -1.69054e+05 Fmax= 7.79944e+07, atom=
 11181
 Step=  593, Dmax= 2.2e-04 nm, Epot= -1.85575e+05 Fmax= 2.27640e+08, atom=
 11181
 Step=  595, Dmax= 1.3e-04 nm, Epot= -2.35034e+05 Fmax= 5.88938e+08, atom=
 17181
 Step=  597, Dmax= 8.0e-05 nm, Epot= -2.39154e+05 Fmax= 1.22615e+09, atom=
 11181
 Step=  598, Dmax= 9.6e-05 nm, Epot= -2.67157e+05 Fmax= 1.96782e+09, atom=
 11181
 Step=  600, Dmax= 5.8e-05 nm, Epot= -4.37260e+05 Fmax= 1.08988e+10, atom=
 11181
 Step=  602, Dmax= 3.5e-05 nm, Epot= -4.65654e+05 Fmax= 1.29609e+10, atom=
 11181
 Step=  604, Dmax= 2.1e-05 nm, Epot= -1.17945e+06 Fmax= 1.31028e+11, atom=
 11181
 Step=  607, Dmax= 6.3e-06 nm, Epot= -3.07551e+06 Fmax= 6.04297e+11, atom=
 11181
 Step=  610, Dmax= 1.9e-06 nm, Epot= -4.26709e+06 Fmax= 1.61390e+12, atom=
 11181
 Step=  611, Dmax= 2.3e-06 nm, Epot= -4.39724e+06 Fmax= 2.14416e+12, atom=
 11181
 Step=  613, Dmax= 1.4e-06 nm, Epot= -1.27489e+07 Fmax= 1.03223e+13, atom=
 17181
 Step=  614, Dmax= 1.6e-06 nm, Epot= -5.23118e+06 Fmax= 3.18465e+12, atom=
 11181
 Energy minimization has stopped, but the forces have not converged to the
 (...)

 In this example atom 171 is HH21 of ARG, and 11181 is oxygen of water that
 got close to this ARG. Sometimes the epot turns nan at the end. If you would
 like to reproduce, I put the peptide.pdb, the mdp file and the running
 script at http://shroom.ibb.waw.pl/tip5p . If anybody have any suggestions
 how to minimize (deep) with OPLSAA + TIP5P in gromacs (4.6.3 preferably...)
 without constraining bond lengths (which is also problematic), I will be
 very very grateful.
 Best,

 Grzegorz Wieczorek


Re: [gmx-users] SPC with amber?

2013-09-24 Thread Mark Abraham
The FF+water combinations still work the same way they did 3 years
ago! :-) The important question is whether validation for the
observables has occurred. (And no relevant problems were seen). If the
paper does not support its decision to mix and match, go and ask them
why it was reasonable!

Mark

On Tue, Sep 24, 2013 at 10:58 PM, Rafael I. Silverman y de la Vega
rsilv...@ucsc.edu wrote:
 Dear all,
 I have been trying to evaluate a paper that used amber99 with SPC water to
 simulate a protein. How would this affect the results, is it important? I
 googled for a bit, all I found was:
  Amber, charmm and OPLS-AA were developed with TIP3P, and that should be
 the default. Except that charmm uses a TIP3P with lennard-Jones on the
 waters, and that should probably be the default with charmm.
  B.t.w., how transferable are water models between ff's? I've always been
 taught that they are actually non-transferable (or at least that is what
 I remember), making e.g. Amber/SPCe a bad option, as would be gromos/tip4p.
  Nobody really knows.

 That was from 2010 - have things changed in 3 years, and do force fields
 now work better with
 water models not developed specifically for that ff?
 Thanks


Re: [gmx-users] Funky output trajectory (lines all over the place)

2013-09-23 Thread Mark Abraham
On Sep 23, 2013 9:23 AM, Jonathan Saboury jsab...@gmail.com wrote:

 I tried minimizing a box of cyclohexanes and water. The first frame is
 fine, but after that seemingly random lines form in vmd with the
 cyclohexanes. The waters seem to be minimizing just fine though.

 I am sure I am just doing something extremely silly and I just don't know
 it because of ignorance. I have no formal training on simulations, you are
 my only hope!

Google is pretty useful, too ;-)
http://www.gromacs.org/Documentation/FAQs deals with this kind of
issue.
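The usual culprit is molecules broken across the periodic boundary; making
them whole for visualization, e.g.

trjconv -s em.tpr -f em.trr -pbc mol -o em_mol.trr
vmd em.gro em_mol.trr

(file names as in your commands), normally removes the stray lines.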

Mark

 Perhaps using the em.gro with the em.trr is not the correct way to
 visualize? I used the command: vmd em.gro em.trr

 Or something is wrong with my em.mdp?
 em.mdp: http://pastebin.com/raw.php?i=LPPN5xRF

 Commands used: http://pastebin.com/raw.php?i=Jk0fKLJj
 Here are all the files, in case you need them:
 http://www.sendspace.com/file/gx8j97

 Sorry for dumping all of this, but I am genuinely stuck. I've tried
reading
about the mdp file format but I only understand ~5%. If I could have done
 more I would have tried :/

 Thank you all, it is really appreciated.

 -Jonathan Saboury


Re: [gmx-users] energy minimization

2013-09-23 Thread Mark Abraham
On Sep 23, 2013 9:08 AM, marzieh dehghan dehghanmarz...@gmail.com wrote:

 Hi everybody,
 in order to do protein-ligand docking, energy minimization was done with
 GROMACS. I did the following steps for the insulin pdb file:

 1- pdb2gmx -ignh -f 3inc.pdb -o test.pdb -p topol.top -water spce
 2- grompp -f em.mdp -c test.pdb -p topol.top -o em.tpr
 3-mdrun -v -deffnm em
 4- editconf -f em.gro -o final.pdb -c -d 1.0 -bt cubic

 everything was perfect, but the final pdb file has two problems:

 1- the chain IDs are missing.
 2- insulin contains two chains (A & B) which are connected by a disulfide
 bond, but after energy minimization, the two chains are separated.

Did pdb2gmx even report it being made?
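The bridge only ends up in the topology if specbond.dat matched the CYS pair,
so check the pdb2gmx output for a message about linking them, or pick the
bridge explicitly with the interactive option, e.g.

pdb2gmx -ignh -f 3inc.pdb -o test.pdb -p topol.top -water spce -ss

Without that bond in the topology, nothing holds the two chains together.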

Mark

 I would like to know how to solve these problems.

 best regards

 --
 *Marzieh Dehghan

 PhD Candidate of Biochemistry
 Institute of biochemistry and Biophysics (IBB)
 University of Tehran, Tehran- Iran.*


Re: [gmx-users] The charge of cofactor and ligand

2013-09-23 Thread Mark Abraham
How do GAFF and acpype work?
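(For what it's worth, a typical acpype invocation is along the lines of

acpype -i NAD.mol2 -c user     # reuse the RESP charges present in the mol2
acpype -i ligand.mol2 -c bcc   # derive AM1-BCC charges

where the file names are placeholders; check acpype -h for the charge options
in your version.)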

Mark

On Mon, Sep 23, 2013 at 5:47 PM, aixintiankong aixintiank...@126.com wrote:
 Dear prof,
 can I use RESP charges for the cofactor NAD+ and AM1-BCC charges for the
 ligand, and then use acpype to generate GAFF force field parameters for the
 NAD+ and ligand?


Re: [gmx-users] confusion about implicint solvent

2013-09-23 Thread Mark Abraham
On Mon, Sep 23, 2013 at 8:08 PM, Szilárd Páll szilard.p...@cbr.su.se wrote:
 Hi,

 Admittedly, both the documentation on these features and the
 communication on the known issues with these aspects of GROMACS have
 been lacking.

 Here's a brief summary/explanation:
 - GROMACS 4.5: implicit solvent simulations possible using mdrun-gpu
 which is essentially mdrun + OpenMM, hence it has some limitations,
 most notably it can only run on a single GPU. The performance,
 depending on setting, can be up to 10x higher than on the CPU.
 - GROMACS 4.6: the native GPU acceleration supports only explicit
 solvent, mdrun + OpenMM is still available (exactly for implicit
 solvent runs), but has been moved to the contrib section which means
 that it is not fully supported. Moreover, OpenMM support - unless
 somebody volunteers for maintenance of the mdrun-OpenMM interface -
 will be dropped in the next release.

 I can't comment much on the implicit solvent code on the CPU side
 other than the fact that there have been issues which AFAIK limit the
 parallelization to a rather small number of cores, hence the
 achievable performance is also limited. I hope others can clarify this
 aspect.

IIRC the best 4.5 performance for CPU-only implicit solvent used
infinite cut-offs and SIMD acceleration. The SIMD is certainly broken
in 4.6 (and IIRC was explicitly disabled at some point after 4.6.3).
There is limited enthusiasm for fixing things (e.g. see parts of
http://redmine.gromacs.org/issues/1292) but nobody with the skills has
so far applied the time to do so. As always with an open-source
project, if you want something, be prepared to roll up your sleeves
and work, or hit your knees and pray! :-)

Mark

 Cheers,
 --
 Szilárd


 On Mon, Sep 23, 2013 at 7:34 PM, Francesco frac...@myopera.com wrote:
 Good afternoon everybody,
 I'm a bit confused about gromacs performance with implicit solvent.

 I'm simulating a 1000 residues protein with explicit solvent, using both
 a cpu and a gpu cluster.
 With a GPU node (12 cores and 3 M2090 GPUs) I reach 10 ns/day, while
 with no GPU and 144 cores I get 34 ns/day.

 Because I have several mutants (more than 50) I have to reduce the
 average simulation time, and I was considering different options such as
 the use of implicit solvent.
 I tried with both clusters, using gromacs 4.6 and 4.5, but the
 performance is terrible (1 day for 100 ps) compared to the explicit
 solvent.

 I read all the other messages on the mailing-list and the documentation,
 but the mix of old and new features/posts really confuses me a lot.

 Here
 (http://www.gromacs.org/Documentation/Acceleration_and_parallelization)
 it is said that with the gpu 4.5 and implicit solvent I should expect a
 substantial speedup.

 Here (
 http://www.gromacs.org/Documentation/Installation_Instructions_4.5/GROMACS-OpenMM#Benchmark_results.3a_GROMACS_CPU_vs_GPU
 ) I found this sentence It is ultimately up to you as a user to decide
 what simulations setups to use, but we would like to emphasize the
 simply amazing implicit solvent performance provided by GPUs.

 I followed the advice found in the mailing list and read the documentation
 (both site and manual), but I can't figure out what I should do.
 How do you guys get such amazing performance?

 I also found this answer in a post from last March
 (http://gromacs.5086.x6.nabble.com/Implicit-solvent-MD-is-not-fast-and-not-accurate-td5006659.html#none)
 that confuses me even more.

 Performance issues are known. There are plans to implement the implicit
 solvent code for GPU and perhaps allow for better parallelization, but I
 don't know what the status of all that is.  As it stands (and as I have
 said before on this list and to the developers privately), the implicit
 code is largely unproductive because the performance is terrible. 

 Should I skip the idea of using implicit solvent and try something else?

 this is the set of parameters that I used (also with the -pd flag):

 ; Run parameters
 integrator = sd
 tinit = 0
 nsteps = 5
 dt= 0.002

 ; Output control

 nstxout  = 5000
 nstvout   = 5000
 nstlog = 5000
 nstenergy   = 5000
 nstxtcout= 5000
 xtc_precision  = 1000
 energygrps = system

 ; Bond parameters
 continuation= no
 constraints  = all-bonds
 constraint_algorithm = lincs
 lincs_iter = 1
 lincs_order  = 4
 lincs_warnangle   = 30

 ; Neighborsearching
 ns_type  = simple
 nstlist = 0
 rlist= 0
 rcoulomb= 0
 rvdw  = 0

 ; Electrostatics
 coulombtype   = cut-off
 pbc= no
 comm_mode= Angular

 implicit_solvent = GBSA
 gb_algorithm = OBC
 nstgbradii = 1.0
 rgbradii  = 0
 gb_epsilon_solvent= 80
 gb_dielectric_offset= 0.009
 

Re: [gmx-users] restarting the crashed run

2013-09-22 Thread Mark Abraham
No, because then the state.cpt file would be redundant :-) All you can
do is re-start from the beginning, because the .tpr file only has the
initial state. You can extend the number of steps, but you can't
magically produce the state after the first simulation just from the
initial one. (If you can, you'll be hugely popular here, though!)

Mark

On Sun, Sep 22, 2013 at 12:03 PM, Nidhi Katyal
nidhikatyal1...@gmail.com wrote:
 Thank you Justin for your reply.
 Pt 3 should be the correct way to proceed. But if I have somehow lost my
 state.cpt file, can I continue my run using the following commands:

 tpbconv -s previous.tpr -extend timetoextendby -o next.tpr
 mdrun -v -s next.tpr -o next.trr -c next.gro -g next.log -e next.ene
 -noappend
 trjcat -f previous.trr next.trr -o combine.trr




 On Sun, Sep 22, 2013 at 1:43 AM, Justin Lemkul jalem...@vt.edu wrote:



 On 9/21/13 3:46 PM, Nidhi Katyal wrote:

 Dear all
 I would like to know the difference between restarting our crashed runs by
 1) first generating next.tpr using tpbconv -extend option
  then running grompp with this *.tpr file


 Why would you run grompp?  If you're using it as a source of coordinates,
 you're going to be dealing with the initial state, not the last state of
 the previous simulation, so that's garbage.  If you're restarting a crash,
 then presumably there is no need at all to invoke tpbconv or grompp.


   and finally running mdrun but with no cpi option


 Makes no sense.  You're basically obliterating the previous simulation.


  2) same as 1 but with -cpi option


 Still no need for grompp, but if providing -cpi to mdrun, you're resuming
 from the correct state.


  3) using only mdrun command with cpi option and with previous *.tpr
  (ie not creating new tpr by tpbconv option)


 This is the correct way to proceed.  The run will pick up from the state
 stored in the .cpt file and proceed with the number of steps originally
 specified in the .tpr file.


  4) using procedure 3 but with no state.cpt file


 The run should start over.


  Secondly, if state.cpt contains all the information to continue the
 simulation then why the simulation should continue at all without
 providing
 these files as in procedure 1 and 4


 Without a .cpt file, the run starts over from the beginning.

 -Justin

 --
 ========================================

 Justin A. Lemkul, Ph.D.
 Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441

 ========================================
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the www
 interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read
 http://www.gromacs.org/Support/Mailing_Lists

 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re: Minimum distance periodic images, protein simulation

2013-09-21 Thread Mark Abraham
You can try a run in implicit solvent to get a feel for the maximum
diameter of the protein while unfolding. You will not have any
certainty unless you can afford a box whose diameter is that of the
straight-line peptide...
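
For reference, a hedged sketch of checking this after the fact (file names
illustrative; -pi computes the minimum distance to periodic images):

  g_mindist -f traj.xtc -s topol.tpr -pi -od mindist.xvg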

Mark

On Sat, Sep 21, 2013 at 1:03 PM, aksharma arunsharma_...@yahoo.com wrote:
 Hi Justin,
 Thanks for your reply. I have some follow-up questions. Since the simulation
 is high temperature (450 K) there is slight unfolding of the protein.

 The box was set up as rhombic dodecahedron with 1.2 nm as the distance
 between solute and edge of box.

 pdb2gmx -f 1L2Y.pdb -o 1L2Y-processed.gro -ignh -water spce

 The cutoffs are 0.9 nm for VDW and electrostatics.

 Do you suggest using an even bigger box for studying unfolding? Or is there
 something else that could be going on? Do you have any ballpark suggestions
 for a good box size, or is this something where I would have to
 experiment with different sizes until I land on a suitable box?

 Thanks a lot,



 --
 View this message in context: 
 http://gromacs.5086.x6.nabble.com/Minimum-distance-periodic-images-protein-simulation-tp5011343p5011347.html
 Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] jwe1050i + jwe0019i errors = SIGSEGV (Fujitsu)

2013-09-21 Thread Mark Abraham
On Sat, Sep 21, 2013 at 2:45 PM, James jamesresearch...@gmail.com wrote:
 Dear Mark and the rest of the Gromacs team,

 Thanks a lot for your response. I have been trying to isolate the problem
 and have also been in discussion with the support staff. They suggested it
 may be a bug in the gromacs code, and I have tried to isolate the problem
 more precisely.

First, do the GROMACS regression tests for Verlet kernels pass? (Run
them all, but those with nbnxn prefix are of interest here.) They
likely won't scale to 16 OMP threads, but you can vary the OMP_NUM_THREADS
environment variable and see how the behaviour changes.
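
For example, a minimal sketch of such a scan (file names illustrative):

  export OMP_NUM_THREADS=4
  mdrun -s topol.tpr -deffnm omp4
  export OMP_NUM_THREADS=8
  mdrun -s topol.tpr -deffnm omp8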

 Considering that the calculation is run under MPI with 16 OpenMP cores per
 MPI node, the error seems to occur under the following conditions:

 A few thousand atoms: 1 or 2 MPI nodes: OK
 Double the number of atoms (~15,000): 1 MPI node: OK, 2 MPI nodes: SIGSEGV
 error described below.

 So it seems that the error occurs for relatively large systems which use
 MPI.

~500 atoms per core (thread) is a system in the normal GROMACS scaling
regime. 16 OMP threads is more than is useful on other HPC systems,
but since we don't know what your hardware is, whether you are
investigating something useful is your decision.

 The crash mentions the calc_cell_indices function (see below). Is this
 somehow a problem with memory not being sufficient at the MPI interface at
 this function? I'm not sure how to proceed further. Any help would be
 greatly appreciated.

If there is a problem with GROMACS (which so far I doubt), we'd need a
stack trace that shows a line number (rather than addresses) in order
to start to locate it.

Mark

 Gromacs version is 4.6.3.

 Thank you very much for your time.

 James


 On 4 September 2013 16:05, Mark Abraham mark.j.abra...@gmail.com wrote:

 On Sep 4, 2013 7:59 AM, James jamesresearch...@gmail.com wrote:
 
  Dear all,
 
  I'm trying to run Gromacs on a Fujitsu supercomputer but the software is
  crashing.
 
  I run grompp:
 
  grompp_mpi_d -f parameters.mdp -c system.pdb -p overthe.top
 
  and it produces the error:
 
  jwe1050i-w The hardware barrier couldn't be used and continues processing
  using the software barrier.
  taken to (standard) corrective action, execution continuing.
  error summary (Fortran)
  error number error level error count
  jwe1050i w 1
  total error count = 1
 
  but still outputs topol.tpr so I can continue.

 There's no value in compiling grompp with MPI or in double precision.

  I then run with
 
  export FLIB_FASTOMP=FALSE
  source /home/username/Gromacs463/bin/GMXRC.bash
  mpiexec mdrun_mpi_d -ntomp 16 -v
 
  but it crashes:
 
  starting mdrun 'testrun'
  5 steps, 100.0 ps.
  jwe0019i-u The program was terminated abnormally with signal number
 SIGSEGV.
  signal identifier = SEGV_MAPERR, address not mapped to object
  error occurs at calc_cell_indices._OMP_1 loc 00233474 offset
  03b4
  calc_cell_indices._OMP_1 at loc 002330c0 called from loc
  02088fa0 in start_thread
  start_thread at loc 02088e4c called from loc 029d19b4 in
  __thread_start
  __thread_start at loc 029d1988 called from o.s.
  error summary (Fortran)
  error number error level error count
  jwe0019i u 1
  jwe1050i w 1
  total error count = 2
  [ERR.] PLE 0014 plexec The process terminated
 

 abnormally.(rank=1)(nid=0x03060006)(exitstatus=240)(CODE=2002,1966080,61440)
  [ERR.] PLE The program that the user specified may be illegal or
  inaccessible on the node.(nid=0x03060006)
 
  Any ideas what could be wrong? It works on my local intel machine.

 Looks like it wasn't compiled correctly for the target machine. What was
 the cmake command, what does mdrun -version output? Also, if this is the K
 computer, probably we can't help, because the compiler docs are officially
 unavailable to us. National secret, and all ;-)

 Mark

 
  Thanks in advance,
 
  James
  --
  gmx-users mailing listgmx-users@gromacs.org
  http://lists.gromacs.org/mailman/listinfo/gmx-users
  * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
  * Please don't post (un)subscribe requests to the list. Use the
  www interface or send it to gmx-users-requ...@gromacs.org.
  * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use

Re: [gmx-users] No such moleculetype SOL

2013-09-21 Thread Mark Abraham
On Sat, Sep 21, 2013 at 9:06 PM, Jonathan Saboury jsab...@gmail.com wrote:
 I am doing this tutorial:
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/biphasic/index.html

 I have set up the randomly placed cyclohexane and water throughout the box.
 The problem is when i try the command grompp -f em.mdp -c biphase.gro -p
 cyclohexane.top -o em.tpr it errors telling me No such moleculetype SOL.

 I know SOL is water, and the .top file does not include any sort of .itp
 that includes water.

Have a look at how Justin's tutorial's .top gets access to a water topology.
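
For reference, a hedged sketch of the relevant parts of such a .top, assuming
the amber99sb force field and TIP3P water as shipped with GROMACS (the
cyclohexane file name, moleculetype name CHX, and counts are illustrative):

  #include "amber99sb.ff/forcefield.itp"
  #include "cyclohexane.itp"
  #include "amber99sb.ff/tip3p.itp"

  [ system ]
  Cyclohexane/water biphasic system

  [ molecules ]
  ; order and counts must match the coordinate file
  CHX      100
  SOL     1000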

Mark

 I've tried to add #include
 "amber99sb.ff/forcefield.itp" to no avail.

 This is strictly just cyclohexane and water, I am not interested in putting
 a protein inside of it.

 Commands used: http://pastebin.com/raw.php?i=RaKNCpi4
 Files: http://www.sendspace.com/file/ibwk3l

 Thank you, the help you guys give is extremely appreciated :)
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] MPI runs on a local computer

2013-09-20 Thread Mark Abraham
On Thu, Sep 19, 2013 at 2:48 PM, Xu, Jianqing x...@medimmune.com wrote:

 Dear all,

 I am learning the parallelization issues from the instructions on Gromacs 
 website. I guess I got a rough understanding of MPI, thread-MPI, OpenMP. But 
 I hope to get some advice about a correct way to run jobs.

 Say I have a local desktop having 16 cores. If I just want to run jobs on one 
 computer or a single node (but multiple cores), I understand that I don't 
 have to install and use OpenMPI, as Gromacs has its own thread-MPI included 
 already and it should be good enough to run jobs on one machine. However, for 
 some reasons, OpenMPI has already been installed on my machine, and I 
 compiled Gromacs with it by using the flag: -DGMX_MPI=ON. My questions are:


 1.   Can I still use this executable (mdrun_mpi, built with OpenMPI 
 library) to run multi-core jobs on my local desktop?

Yes

 Or the default Thread-MPI is actually a better option for a single computer 
 or single node (but multi-cores) for whatever reasons?

Yes - lower overhead.

 2.   Assuming I can still use this executable, let's say I want to use 
 half of the cores (8 cores) on my machine to run a job,

 mpirun -np 8 mdrun_mpi -v -deffnm md

 a). Since I am not using all the cores, do I still need to lock the 
 physical cores to use for better performance? Something like -nt for 
 Thread-MPI? Or it is not necessary?

You will see improved performance if you set the thread affinity.
There is no advantage in allowing the threads to move.
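
For example (4.6 syntax; -pin on locks each thread to a core):

  mpirun -np 8 mdrun_mpi -v -deffnm md -pin on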

 b). For running jobs on a local desktop, or single node having ...  say 16 
 cores, or even 64 cores, should I turn off the separate PME nodes (-npme 
 0)? Or it is better to leave as is?

Depends, but usually best to use separate PME nodes. Try g_tune_pme,
as Carsten suggests.
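
For example (g_tune_pme launches a series of short benchmark runs and
reports the best PME rank split):

  g_tune_pme -np 8 -s md.tpr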

 3.   If I want to run two different projects on my local desktop, say one 
 project takes 8 cores, the other takes 4 cores (assuming I have enough 
 memory), I just submit the jobs twice on my desktop:

 nohup mpirun -np 8 mdrun_mpi -v -deffnm md1 > log1 &

 nohup mpirun -np 4 mdrun_mpi -v -deffnm md2 > log2 &

 Will this be acceptable ? Will two jobs be competing the resource and 
 eventually affect the performance?

Depends how many cores you have. If you want to share a node between
mdruns, you should specify how many (real- or thread-) MPI ranks for
each run, and how many OpenMP threads per rank, arrange for one thread
per core, and use mdrun -pin and mdrun -pinoffset suitably. You should
expect near linear scaling of each job when you are doing it right -
but learn the behaviour of running one job per node first!
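
A hedged sketch of such a split on a 16-core node (core counts illustrative):

  # job 1 on cores 0-7, job 2 on cores 8-11
  mpirun -np 8 mdrun_mpi -ntomp 1 -pin on -pinoffset 0 -deffnm md1 &
  mpirun -np 4 mdrun_mpi -ntomp 1 -pin on -pinoffset 8 -deffnm md2 &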

Mark

 Sorry for so many detailed questions, but your help on this will be highly 
 appreciated!

 Thanks a lot,

 Jianqing



 To the extent this electronic communication or any of its attachments contain 
 information that is not in the public domain, such information is considered 
 by MedImmune to be confidential and proprietary. This communication is 
 expected to be read and/or used only by the individual(s) for whom it is 
 intended. If you have received this electronic communication in error, please 
 reply to the sender advising of the error in transmission and delete the 
 original message and any accompanying documents from your system immediately, 
 without copying, reviewing or otherwise using them for any purpose. Thank you 
 for your cooperation.
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re: grompp for minimization: note warning

2013-09-20 Thread Mark Abraham
The UNIX tool diff is your friend for comparing files.
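
For example:

  diff -u before.gro after.gro | less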

On Fri, Sep 20, 2013 at 1:53 PM, shahab shariati
shahab.shari...@gmail.com wrote:
 Dear Tsjerk

 Thanks for your reply

  Before correcting the gro file, I knew that the gro file has a fixed format.
 I did this correction very carefully.

 Part of the gro file before and after correction is as follows:

 -
 before:
 -
14DOPCN4  755   0.260   1.726   6.354
14DOPCC5  756   0.263   1.741   6.204
14DOPCC1  757   0.136   1.777   6.423
14DOPCC2  758   0.279   1.580   6.384
14DOPCC3  759   0.383   1.799   6.403
14DOPCC6  760   0.386   1.685   6.132
14DOPCP8  761   0.628   1.683   6.064
14DOPC   OM9  762   0.640   1.548   6.123
14DOPC  OM10  763   0.747   1.771   6.072
14DOPC   OS7  764   0.511   1.755   6.145
14DOPC  OS11  765   0.576   1.681   5.913
14DOPC   C12  766   0.591   1.806   5.845
14DOPC   C13  767   0.470   1.901   5.846
14DOPC  OS14  768   0.364   1.830   5.782
14DOPC   C15  769   0.247   1.869   5.833
14DOPC   O16  770   0.238   1.946   5.927
14DOPC   C17  771   0.123   1.815   5.762
14DOPC   C34  772   0.490   2.037   5.777
14DOPC  OS35  773   0.541   2.029   5.644
14DOPC   C36  774   0.591   2.142   5.593
14DOPC   O37  775   0.595   2.252   5.646
14DOPC   C38  776   0.674   2.092   5.476
14DOPC   C18  777  -0.004   1.897   5.786
14DOPC   C19  778  -0.138   1.837   5.744
14DOPC   C20  779  -0.147   1.817   5.593
14DOPC   C21  780  -0.196   1.678   5.552
14DOPC   C22  781  -0.181   1.637   5.406
14DOPC   C23  782  -0.252   1.722   5.301
14DOPC   C24  783  -0.241   1.664   5.163
14DOPC   C25  784  -0.267   1.738   5.054
14DOPC   C26  785  -0.312   1.881   5.044
14DOPC   C27  786  -0.368   1.918   4.907
14DOPC   C28  787  -0.266   1.941   4.795
14DOPC   C29  788  -0.324   2.015   4.674
14DOPC   C30  789  -0.377   1.920   4.567
14DOPC   C31  790  -0.377   1.984   4.428
14DOPC   C32  791  -0.439   1.894   4.321
14DOPC   C33  792  -0.358   1.890   4.191
14DOPC   C39  793   0.818   2.145   5.475
14DOPC   C40  794   0.906   2.056   5.387
14DOPC   C41  795   1.042   2.123   5.364
14DOPC   C42  796   1.160   2.029   5.339
14DOPC   C43  797   1.136   1.965   5.202
14DOPC   C44  798   1.261   1.897   5.146
14DOPC   C45  799   1.314   1.786   5.232
14DOPC   C46  800   1.319   1.658   5.194
14DOPC   C47  801   1.274   1.602   5.062
14DOPC   C48  802   1.316   1.457   5.038
14DOPC   C49  803   1.266   1.407   4.902
14DOPC   C50  804   1.338   1.469   4.782
14DOPC   C51  805   1.307   1.406   4.646
14DOPC   C52  806   1.160   1.394   4.607
14DOPC   C53  807   1.119   1.442   4.468
14DOPC   C54  808   0.980   1.407   4.414
 -
 after:
 -
14DOPCC1  755   0.136   1.777   6.423
14DOPCC2  756   0.279   1.580   6.384
14DOPCC3  757   0.383   1.799   6.403
14DOPCN4  758   0.260   1.726   6.354
14DOPCC5  759   0.263   1.741   6.204
14DOPCC6  760   0.386   1.685   6.132
14DOPC   OS7  761   0.511   1.755   6.145
14DOPCP8  762   0.628   1.683   6.064
14DOPC   OM9  763   0.640   1.548   6.123
14DOPC  OM10  764   0.747   1.771   6.072
14DOPC  OS11  765   0.576   1.681   5.913
14DOPC   C12  766   0.591   1.806   5.845
14DOPC   C13  767   0.470   1.901   5.846
14DOPC  OS14  768   0.364   1.830   5.782
14DOPC   C15  769   0.247   1.869   5.833
14DOPC   O16  770   0.238   1.946   5.927
14DOPC   C17  771   0.123   1.815   5.762
14DOPC   C18  772  -0.004   1.897   5.786
14DOPC   C19  773  -0.138   1.837   5.744
14DOPC   C20  774  -0.147   1.817   5.593
14DOPC   C21  775  -0.196   1.678   5.552
14DOPC   C22  776  -0.181   1.637   5.406
14DOPC   C23  777  -0.252   1.722   5.301
14DOPC   C24  778  -0.241   1.664   5.163
14DOPC   C25  779  -0.267   1.738   5.054
14DOPC   C26  780  -0.312   1.881   5.044
14DOPC   C27  781  -0.368   1.918   4.907
14DOPC   C28  782  -0.266   1.941   4.795
14DOPC   C29  783  -0.324   2.015   4.674
14DOPC   C30  784  -0.377   1.920   4.567
14DOPC   C31  785  -0.377   1.984   4.428
14DOPC   C32  786  -0.439   1.894   4.321
14DOPC   C33  787  -0.358   1.890   4.191
14DOPC   C34  788   0.490   2.037   5.777
14DOPC  OS35  789   0.541   2.029   5.644
14DOPC   C36  790   0.591   2.142   5.593
14DOPC   O37  791   0.595   2.252   5.646
14DOPC   C38  792   0.674   2.092   5.476
14DOPC   C39  793   0.818   2.145   5.475
14DOPC   C40  794   0.906   2.056   5.387
14DOPC   C41  795   1.042   2.123   5.364
14DOPC   C42  796   1.160   2.029   5.339
14DOPC   C43  797   

Re: [gmx-users] Re: Charmm 36 forcefield with verlet cut-off scheme

2013-09-20 Thread Mark Abraham
Note that the group scheme does not reproduce the (AFAIK unpublished)
CHARMM switching scheme, either.

Mark

On Fri, Sep 20, 2013 at 4:26 AM, Justin Lemkul jalem...@vt.edu wrote:


 On 9/19/13 9:55 PM, akk5r wrote:

 Thanks Justin. I was told that the vdwtype = switch was an essential
 component of running Charmm36. Is that not the case?


 It is, but I suppose one can achieve a similar effect with the Verlet
 scheme. You can certainly use the traditional CHARMM settings if you use the
 group scheme, instead.  The vdw-modifier setting should give you a
 comparable result, but I have never tried it myself.
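
For reference, a hedged sketch of the traditional CHARMM-style settings under
the group scheme (GROMACS 4.6 .mdp syntax; the 1.0/1.2 nm values are the
commonly used CHARMM cutoffs, to be verified against the force-field paper):

  cutoff-scheme = group
  vdwtype       = switch
  rvdw-switch   = 1.0
  rvdw          = 1.2
  rlist         = 1.2
  rcoulomb      = 1.2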


 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441

 ==
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the www
 interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] modify nsteps in an existing tpr file

2013-09-19 Thread Mark Abraham
Indeed - your question was fair, and no undue criticism pertained! :-)
If you are trying to reproduce something, you must expect .tpr
differences between 4.0.x and 4.6.y. I illustrated the change that has
taken place in how VDW parameters are used internally in 4.6, and how
that is distinct from the (presumably) unchanged description of those
parameters. How and where to document this kind of thing so that
people who need it can find it and those who don't need it don't drown
in paper is an impossible problem!

Cheers,

Mark

On Wed, Sep 18, 2013 at 8:34 PM, Guanglei Cui
amber.mail.arch...@gmail.com wrote:
 It is only a simple question, not a criticism of any kind. I'm sure there
 may be perfect reasons to choose one implementation over another. To
 someone who is not familiar with the history of gmx development, it is
 something to be aware of. That's all.


 On Wed, Sep 18, 2013 at 4:56 PM, Mark Abraham mark.j.abra...@gmail.com wrote:

 Implementation and description of a model physics are two different
 things. You could compute KE of a particle with 0.5 * m * v^2, but if
 the mass is used nowhere else, why wouldn't you pre-multiply the mass
 by 0.5?

 Mark

 On Wed, Sep 18, 2013 at 4:31 PM, Guanglei Cui
 amber.mail.arch...@gmail.com wrote:
  hmm, does that mean the gmx force field file format or specifications are
  not backward compatible?
 
 
  On Wed, Sep 18, 2013 at 4:08 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:
 
  There are technical differences between versions about how the VDW
  parameters are computed. You should not expect .tpr equivalence
  between minor version changes such as 4.0 and 4.6. You need to compile
  a 4.0.x grompp to see if your setup is equivalent, but having done so
  you should be able to use the same inputs to 4.6 grompp and get a
  correct simulation with 4.6 mdrun.
 
  Mark
 
  On Wed, Sep 18, 2013 at 1:55 PM, Guanglei Cui
  amber.mail.arch...@gmail.com wrote:
   Thanks. gmxcheck is quite helpful. Here is part of the output. It
 turns
  out
   the difference is mainly in the force field parameters, which
 indicates
  the
   top file provided may not be the one used to produce the tpr file.
  Perhaps
   it is best to contact the authors, unless the difference is due to
  certain
   changes between gmx 4.0.x and gmx 4.6.3.
  
   inputrec-nsteps (5000 - 5000)
   inputrec-nstcalclr (5 - 0)
   inputrec-nstdhdl (1 - 50)
   inputrec-fepvals-init_fep_state ( 0.0e+00 -
 -1.0e+00)
   inputrec-fepvals-lambda_neighbors[1] (0 - 1)
   inputrec-fepvals-sc_power (0 - 1)
   inputrec-dihre_fc (1.00e+03 - 0.00e+00)
   inputrec-grpopts.ngtc (4 - 1)
   inputrec-grpopts.ngener (4 - 1)
   inputrec-grpopts.nrdf[ 0] (8.610900e+04 - 2.136210e+05)
   idef-iparam[12]1: c6= 4.23112651e-03, c12= 4.76949208e-06
   idef-iparam[12]2: c6= 4.68938737e-08, c12= 1.15147106e-12
   idef-iparam[54]1: c6= 4.58155479e-03, c12= 4.48611081e-06
   idef-iparam[54]2: c6= 4.46544206e-08, c12= 8.37594751e-13
   idef-iparam[82]1: c6= 3.75142763e-03, c12= 4.22875655e-06
   idef-iparam[82]2: c6= 4.15773336e-08, c12= 1.02092445e-12
   idef-iparam[96]1: c6= 3.8381e-03, c12= 2.83264171e-06
   idef-iparam[96]2: c6= 2.83739432e-08, c12= 3.04270091e-13
   idef-iparam[124]1: c6= 4.26879199e-03, c12= 3.50070763e-06
   idef-iparam[124]2: c6= 3.50897622e-08, c12= 4.64908439e-13
   idef-iparam[152]1: c6= 3.59375845e-03, c12= 2.76020933e-06
   idef-iparam[152]2: c6= 2.76677472e-08, c12= 3.21553060e-13
   idef-iparam[166]1: c6= 7.79988989e-03, c12= 1.19875567e-05
   idef-iparam[166]2: c6= 1.12529349e-07, c12= 4.90395051e-12
   idef-iparam[168]1: c6= 4.23112651e-03, c12= 4.76949208e-06
   idef-iparam[168]2: c6= 4.68938737e-08, c12= 1.15147106e-12
   idef-iparam[171]1: c6= 4.58155479e-03, c12= 4.48611081e-06
   idef-iparam[171]2: c6= 4.46544206e-08, c12= 8.37594751e-13
   idef-iparam[173]1: c6= 3.75142763e-03, c12= 4.22875655e-06
   idef-iparam[173]2: c6= 4.15773336e-08, c12= 1.02092445e-12
   idef-iparam[174]1: c6= 3.8381e-03, c12= 2.83264171e-06
   idef-iparam[174]2: c6= 2.83739432e-08, c12= 3.04270091e-13
   idef-iparam[176]1: c6= 4.26879199e-03, c12= 3.50070763e-06
   idef-iparam[176]2: c6= 3.50897622e-08, c12= 4.64908439e-13
   idef-iparam[178]1: c6= 3.59375845e-03, c12= 2.76020933e-06
   idef-iparam[178]2: c6= 2.76677472e-08, c12= 3.21553060e-13
   idef-iparam[179]1: c6= 7.79988989e-03, c12= 1.19875567e-05
   idef-iparam[179]2: c6= 1.12529349e-07, c12= 4.90395051e-12
   idef-iparam[180]1: c6= 6.22385694e-03, c12= 5.03394313e-06
   idef-iparam[180]2: c6= 2.05496373e-24, c12= 0.e+00
   idef-iparam[181]1: c6= 3.93928867e-03, c12= 3.50622463e-06
   idef-iparam[181]2: c6= 3.50750256e-08, c12= 5.46337281e-13
   idef-iparam[194]1: c6= 3.93928867e-03, c12= 3.50622463e-06
   idef-iparam[194]2: c6= 3.50750256e-08, c12= 5.46337281e-13
   ...
  
  
   On Wed, Sep 18, 2013 at 10:14 AM, Mark Abraham 
 mark.j.abra...@gmail.com
  wrote:
  
   That -om mechanism has been broken for about a decade

Re: [gmx-users] Error while simulating Protein in SDS/Water

2013-09-18 Thread Mark Abraham
Look at the numbers, count the number of atoms you expect in each
moleculetype, and work out what the mismatch is.
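
A hedged sketch of that kind of counting (file names illustrative):

  # total atom count the .gro claims (second line of the file):
  sed -n 2p protein_solv.gro
  # atom lines per residue name, e.g. SDS and SOL:
  grep -c SDS protein_solv.gro
  grep -c SOL protein_solv.gro

Compare these against the atom counts implied by the [ molecules ] section
of topol.top.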

Mark

On Wed, Sep 18, 2013 at 2:58 PM, naresh_sssihl knnar...@sssihl.edu.in wrote:
 Dear GMX users,

 I am trying to simulate a protein in SDS/Water box.

 1. No problems with pdb2gmx - .gro file and .top files were generated.
 /pdb2gmx -f protein.pdb -o protein_pro.gro -water spce/
 selected ff 13: GROMOS96 53a6 force field (JCC 2004 vol 25 pag 1656)

 2. Created a Cubic box using editconf
   /editconf -f protein_pro.gro -o protein_newbox.gro -c -d 1.0 -bt cubic/

 3. Then solvated the system using genbox
   genbox -cp protein_newbox.gro -cs spc216.gro -ci sds.gro -nmol 215 -o
 protein_solv.gro -p topol.top

 4. After this step I looked at the topol.top file and found that it was
 not fully updated, so I manually updated it by adding the number of SDS
 molecules under the [ molecules ] section at the very end. I also added
 #include "sds.itp" wherever it was required.
 In fact I followed the discussion between Justin, Mark and Anna Marabotti at
 the following link:
 http://lists.gromacs.org/pipermail/gmx-users/2009-June/042704.html  and did
 everything that was suggested.

 5. When I use grompp after the step 4
 grompp -f minim.mdp -c protein_solv.gro -p topol.top -o protein.tpr

 This is where I am getting a fatal error saying that the number of
 coordinates in protein_solv.gro does not match the number of
 coordinates in topol.top.

 Could you please help regarding this... Please give me your valuable
 suggestions.

 With Thanks and Best Regards

 Naresh




 --
 View this message in context: 
 http://gromacs.5086.x6.nabble.com/Error-while-simulating-Protein-in-SDS-Water-tp5011282.html
 Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] modify nsteps in an existing tpr file

2013-09-18 Thread Mark Abraham
That -om mechanism has been broken for about a decade, unfortunately.

You will need to include the file, or post a link to the file, not attach
it, if you want users of this list to see it.

gmxcheck to compare your new and old .tpr files is useful to see what
you might need in the new .mdp file to reproduce the first one. Note
that grompp -c yourold.tpr is the best way to get the same starting
coordinates.
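
For example (file names illustrative):

  # compare two run inputs:
  gmxcheck -s1 downloaded.tpr -s2 mine.tpr
  # reuse the old starting coordinates directly:
  grompp -f new.mdp -c downloaded.tpr -p topol.top -o mine.tpr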

Mark

On Wed, Sep 18, 2013 at 3:48 PM, Guanglei Cui
amber.mail.arch...@gmail.com wrote:
 gmxdump -om writes out a mdp file based on the tpr, but that is not read by
 grompp. I tried to change or comment out mdp options that are not
 recognized by grompp. It is attached here. The simulation soon crashes with
 LINCS errors after 25 steps, while the original tpr runs properly. I'm not
 sure what's missing here.


 On Tue, Sep 17, 2013 at 6:12 PM, Mark Abraham mark.j.abra...@gmail.com wrote:

 No. Theoretically useful, but not implemented.

 Mark

 On Tue, Sep 17, 2013 at 4:45 PM, Guanglei Cui
 amber.mail.arch...@gmail.com wrote:
  Thanks. Is it possible to dump the parameters in the tpr file to a mdp
 file?
 
 
  On Tue, Sep 17, 2013 at 3:20 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:
 
  mdrun -nsteps in 4.6 overrides the number of steps in the .tpr
 
  Mark
 
  On Tue, Sep 17, 2013 at 8:55 PM, Guanglei Cui
  amber.mail.arch...@gmail.com wrote:
   Dear GMX users,
  
   I'm new to Gromacs. So apologies if this question is too simple.
  
   I downloaded top/tpr files from the supplementary material of a
 published
   paper. The nsteps set in the tpr file is 100ns. I wish to do a small
 test
   run. Is there any way I can modify that? I've tried to create a mdp
 file
   that best matches the parameters found through gmxdump, but it gives
 me a
   lot of LINCS error. I can upload the mdp file and gmxdump file if you
 are
   kind to help. Thanks in advance.
  
   Best regards,
   --
   Guanglei Cui
   --
   gmx-users mailing listgmx-users@gromacs.org
   http://lists.gromacs.org/mailman/listinfo/gmx-users
   * Please search the archive at
  http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
   * Please don't post (un)subscribe requests to the list. Use the
   www interface or send it to gmx-users-requ...@gromacs.org.
   * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
  --
  gmx-users mailing listgmx-users@gromacs.org
  http://lists.gromacs.org/mailman/listinfo/gmx-users
  * Please search the archive at
  http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
  * Please don't post (un)subscribe requests to the list. Use the
  www interface or send it to gmx-users-requ...@gromacs.org.
  * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 
 
 
 
  --
  Guanglei Cui
  --
  gmx-users mailing listgmx-users@gromacs.org
  http://lists.gromacs.org/mailman/listinfo/gmx-users
  * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
  * Please don't post (un)subscribe requests to the list. Use the
  www interface or send it to gmx-users-requ...@gromacs.org.
  * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists




 --
 Guanglei Cui

 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] BGQ compilation with verlet kernels: #include file kernel_impl.h not found.

2013-09-18 Thread Mark Abraham
Thanks for the follow up.

The take-home lesson is that building for BlueGene/Q is unlike
building for the usual homogeneous x86 cluster. You still need an MPI
and non-MPI build, but the latter should be targeted at the front end
(Linux on PowerPC, usually), unless/until GROMACS tools acquire MPI
functionality useful on a BlueGene/Q scale.

Mark

On Wed, Sep 18, 2013 at 2:07 AM, Christopher Neale
chris.ne...@mail.utoronto.ca wrote:
 Indeed, it works just fine when I compile with mpi. I never thought to check 
 that. My usual procedure is
 to compile the whole package without mpi and then to compile mdrun with mpi. 
 Thanks for the help Mark.

 Here is the compilation script that worked for me.

 module purge
 module load vacpp/12.1 xlf/14.1 mpich2/xl
 module load cmake/2.8.8
 module load fftw/3.3.2

 export FFTW_LOCATION=/scinet/bgq/Libraries/fftw-3.3.2

 cmake ../source/ \
   -DCMAKE_TOOLCHAIN_FILE=BlueGeneQ-static-XL-C \
   -DCMAKE_PREFIX_PATH=$FFTW_LOCATION \
   -DCMAKE_INSTALL_PREFIX=$(pwd) \
   -DGMX_X11=OFF \
   -DGMX_MPI=ON \
   -DGMX_PREFER_STATIC_LIBS=ON

 make -j 16
 make install

 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] modify nsteps in an existing tpr file

2013-09-18 Thread Mark Abraham
There are technical differences between versions about how the VDW
parameters are computed. You should not expect .tpr equivalence
between minor version changes such as 4.0 and 4.6. You need to compile
a 4.0.x grompp to see if your setup is equivalent, but having done so
you should be able to use the same inputs to 4.6 grompp and get a
correct simulation with 4.6 mdrun.

Mark

On Wed, Sep 18, 2013 at 1:55 PM, Guanglei Cui
amber.mail.arch...@gmail.com wrote:
 Thanks. gmxcheck is quite helpful. Here is part of the output. It turns out
 the difference is mainly in the force field parameters, which indicates the
 top file provided may not be the one used to produce the tpr file. Perhaps
 it is best to contact the authors, unless the difference is due to certain
 changes between gmx 4.0.x and gmx 4.6.3.

 inputrec-nsteps (5000 - 5000)
 inputrec-nstcalclr (5 - 0)
 inputrec-nstdhdl (1 - 50)
 inputrec-fepvals-init_fep_state ( 0.0e+00 - -1.0e+00)
 inputrec-fepvals-lambda_neighbors[1] (0 - 1)
 inputrec-fepvals-sc_power (0 - 1)
 inputrec-dihre_fc (1.00e+03 - 0.00e+00)
 inputrec-grpopts.ngtc (4 - 1)
 inputrec-grpopts.ngener (4 - 1)
 inputrec-grpopts.nrdf[ 0] (8.610900e+04 - 2.136210e+05)
 idef-iparam[12]1: c6= 4.23112651e-03, c12= 4.76949208e-06
 idef-iparam[12]2: c6= 4.68938737e-08, c12= 1.15147106e-12
 idef-iparam[54]1: c6= 4.58155479e-03, c12= 4.48611081e-06
 idef-iparam[54]2: c6= 4.46544206e-08, c12= 8.37594751e-13
 idef-iparam[82]1: c6= 3.75142763e-03, c12= 4.22875655e-06
 idef-iparam[82]2: c6= 4.15773336e-08, c12= 1.02092445e-12
 idef-iparam[96]1: c6= 3.8381e-03, c12= 2.83264171e-06
 idef-iparam[96]2: c6= 2.83739432e-08, c12= 3.04270091e-13
 idef-iparam[124]1: c6= 4.26879199e-03, c12= 3.50070763e-06
 idef-iparam[124]2: c6= 3.50897622e-08, c12= 4.64908439e-13
 idef-iparam[152]1: c6= 3.59375845e-03, c12= 2.76020933e-06
 idef-iparam[152]2: c6= 2.76677472e-08, c12= 3.21553060e-13
 idef-iparam[166]1: c6= 7.79988989e-03, c12= 1.19875567e-05
 idef-iparam[166]2: c6= 1.12529349e-07, c12= 4.90395051e-12
 idef-iparam[168]1: c6= 4.23112651e-03, c12= 4.76949208e-06
 idef-iparam[168]2: c6= 4.68938737e-08, c12= 1.15147106e-12
 idef-iparam[171]1: c6= 4.58155479e-03, c12= 4.48611081e-06
 idef-iparam[171]2: c6= 4.46544206e-08, c12= 8.37594751e-13
 idef-iparam[173]1: c6= 3.75142763e-03, c12= 4.22875655e-06
 idef-iparam[173]2: c6= 4.15773336e-08, c12= 1.02092445e-12
 idef-iparam[174]1: c6= 3.8381e-03, c12= 2.83264171e-06
 idef-iparam[174]2: c6= 2.83739432e-08, c12= 3.04270091e-13
 idef-iparam[176]1: c6= 4.26879199e-03, c12= 3.50070763e-06
 idef-iparam[176]2: c6= 3.50897622e-08, c12= 4.64908439e-13
 idef-iparam[178]1: c6= 3.59375845e-03, c12= 2.76020933e-06
 idef-iparam[178]2: c6= 2.76677472e-08, c12= 3.21553060e-13
 idef-iparam[179]1: c6= 7.79988989e-03, c12= 1.19875567e-05
 idef-iparam[179]2: c6= 1.12529349e-07, c12= 4.90395051e-12
 idef-iparam[180]1: c6= 6.22385694e-03, c12= 5.03394313e-06
 idef-iparam[180]2: c6= 2.05496373e-24, c12= 0.e+00
 idef-iparam[181]1: c6= 3.93928867e-03, c12= 3.50622463e-06
 idef-iparam[181]2: c6= 3.50750256e-08, c12= 5.46337281e-13
 idef-iparam[194]1: c6= 3.93928867e-03, c12= 3.50622463e-06
 idef-iparam[194]2: c6= 3.50750256e-08, c12= 5.46337281e-13
 ...


 On Wed, Sep 18, 2013 at 10:14 AM, Mark Abraham 
 mark.j.abra...@gmail.com wrote:

 That -om mechanism has been broken for about a decade, unfortunately.

 You will need to include the file, or post a link a file, not attach
 it, if you want users of this list to see it.

 gmxcheck to compare your new and old .tpr files is useful to see what
 you might need in the new .mdp file to reproduce the first one. Note
 that grompp -c yourold.tpr is the best way to get the same starting
 coordinates.

 Mark

 On Wed, Sep 18, 2013 at 3:48 PM, Guanglei Cui
 amber.mail.arch...@gmail.com wrote:
  gmxdump -om writes out a mdp file based on the tpr, but that is not read
 by
  grompp. I tried to change or comment out mdp options that are not
  recognized by grompp. It is attached here. The simulation soon crashes
 with
  LINCS errors after 25 steps, while the original tpr runs properly. I'm
 not
  sure what's missing here.
 
 
  On Tue, Sep 17, 2013 at 6:12 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:
 
  No. Theoretically useful, but not implemented.
 
  Mark
 
  On Tue, Sep 17, 2013 at 4:45 PM, Guanglei Cui
  amber.mail.arch...@gmail.com wrote:
   Thanks. Is it possible to dump the parameters in the tpr file to a mdp
  file?
  
  
   On Tue, Sep 17, 2013 at 3:20 PM, Mark Abraham 
 mark.j.abra...@gmail.com
  wrote:
  
   mdrun -nsteps in 4.6 overrides the number of steps in the .tpr
  
   Mark
  
   On Tue, Sep 17, 2013 at 8:55 PM, Guanglei Cui
   amber.mail.arch...@gmail.com wrote:
Dear GMX users,
   
I'm new to Gromacs. So apologies if this question is too simple.
   
I downloaded top/tpr files from the supplementary material of a
  published
paper. The nsteps set in the tpr file is 100ns. I wish

Re: [gmx-users] modify nsteps in an existing tpr file

2013-09-18 Thread Mark Abraham
Implementation and description of a model physics are two different
things. You could compute KE of a particle with 0.5 * m * v^2, but if
the mass is used nowhere else, why wouldn't you pre-multiply the mass
by 0.5?

Mark

On Wed, Sep 18, 2013 at 4:31 PM, Guanglei Cui
amber.mail.arch...@gmail.com wrote:
 hmm, does that mean the gmx force field file format or specifications are
 not backward compatible?


 On Wed, Sep 18, 2013 at 4:08 PM, Mark Abraham mark.j.abra...@gmail.com wrote:

 There are technical differences between versions about how the VDW
 parameters are computed. You should not expect .tpr equivalence
 between minor version changes such as 4.0 and 4.6. You need to compile
 a 4.0.x grompp to see if your setup is equivalent, but having done so
 you should be able to use the same inputs to 4.6 grompp and get a
 correct simulation with 4.6 mdrun.

 Mark

 On Wed, Sep 18, 2013 at 1:55 PM, Guanglei Cui
 amber.mail.arch...@gmail.com wrote:
  Thanks. gmxcheck is quite helpful. Here is part of the output. It turns
 out
  the difference is mainly in the force field parameters, which indicates
 the
  top file provided may not be the one used to produce the tpr file.
 Perhaps
  it is best to contact the authors, unless the difference is due to
 certain
  changes between gmx 4.0.x and gmx 4.6.3.
 
  inputrec-nsteps (5000 - 5000)
  inputrec-nstcalclr (5 - 0)
  inputrec-nstdhdl (1 - 50)
  inputrec-fepvals-init_fep_state ( 0.0e+00 - -1.0e+00)
  inputrec-fepvals-lambda_neighbors[1] (0 - 1)
  inputrec-fepvals-sc_power (0 - 1)
  inputrec-dihre_fc (1.00e+03 - 0.00e+00)
  inputrec-grpopts.ngtc (4 - 1)
  inputrec-grpopts.ngener (4 - 1)
  inputrec-grpopts.nrdf[ 0] (8.610900e+04 - 2.136210e+05)
  idef-iparam[12]1: c6= 4.23112651e-03, c12= 4.76949208e-06
  idef-iparam[12]2: c6= 4.68938737e-08, c12= 1.15147106e-12
  idef-iparam[54]1: c6= 4.58155479e-03, c12= 4.48611081e-06
  idef-iparam[54]2: c6= 4.46544206e-08, c12= 8.37594751e-13
  idef-iparam[82]1: c6= 3.75142763e-03, c12= 4.22875655e-06
  idef-iparam[82]2: c6= 4.15773336e-08, c12= 1.02092445e-12
  idef-iparam[96]1: c6= 3.8381e-03, c12= 2.83264171e-06
  idef-iparam[96]2: c6= 2.83739432e-08, c12= 3.04270091e-13
  idef-iparam[124]1: c6= 4.26879199e-03, c12= 3.50070763e-06
  idef-iparam[124]2: c6= 3.50897622e-08, c12= 4.64908439e-13
  idef-iparam[152]1: c6= 3.59375845e-03, c12= 2.76020933e-06
  idef-iparam[152]2: c6= 2.76677472e-08, c12= 3.21553060e-13
  idef-iparam[166]1: c6= 7.79988989e-03, c12= 1.19875567e-05
  idef-iparam[166]2: c6= 1.12529349e-07, c12= 4.90395051e-12
  idef-iparam[168]1: c6= 4.23112651e-03, c12= 4.76949208e-06
  idef-iparam[168]2: c6= 4.68938737e-08, c12= 1.15147106e-12
  idef-iparam[171]1: c6= 4.58155479e-03, c12= 4.48611081e-06
  idef-iparam[171]2: c6= 4.46544206e-08, c12= 8.37594751e-13
  idef-iparam[173]1: c6= 3.75142763e-03, c12= 4.22875655e-06
  idef-iparam[173]2: c6= 4.15773336e-08, c12= 1.02092445e-12
  idef-iparam[174]1: c6= 3.8381e-03, c12= 2.83264171e-06
  idef-iparam[174]2: c6= 2.83739432e-08, c12= 3.04270091e-13
  idef-iparam[176]1: c6= 4.26879199e-03, c12= 3.50070763e-06
  idef-iparam[176]2: c6= 3.50897622e-08, c12= 4.64908439e-13
  idef-iparam[178]1: c6= 3.59375845e-03, c12= 2.76020933e-06
  idef-iparam[178]2: c6= 2.76677472e-08, c12= 3.21553060e-13
  idef-iparam[179]1: c6= 7.79988989e-03, c12= 1.19875567e-05
  idef-iparam[179]2: c6= 1.12529349e-07, c12= 4.90395051e-12
  idef-iparam[180]1: c6= 6.22385694e-03, c12= 5.03394313e-06
  idef-iparam[180]2: c6= 2.05496373e-24, c12= 0.e+00
  idef-iparam[181]1: c6= 3.93928867e-03, c12= 3.50622463e-06
  idef-iparam[181]2: c6= 3.50750256e-08, c12= 5.46337281e-13
  idef-iparam[194]1: c6= 3.93928867e-03, c12= 3.50622463e-06
  idef-iparam[194]2: c6= 3.50750256e-08, c12= 5.46337281e-13
  ...
 
 
  On Wed, Sep 18, 2013 at 10:14 AM, Mark Abraham mark.j.abra...@gmail.com
 wrote:
 
  That -om mechanism has been broken for about a decade, unfortunately.
 
  You will need to include the file, or post a link a file, not attach
  it, if you want users of this list to see it.
 
  gmxcheck to compare your new and old .tpr files is useful to see what
  you might need in the new .mdp file to reproduce the first one. Note
  that grompp -c yourold.tpr is the best way to get the same starting
  coordinates.
 
  Mark
 
  On Wed, Sep 18, 2013 at 3:48 PM, Guanglei Cui
  amber.mail.arch...@gmail.com wrote:
   gmxdump -om writes out a mdp file based on the tpr, but that is not
 read
  by
   grompp. I tried to change or comment out mdp options that are not
   recognized by grompp. It is attached here. The simulation soon crashes
  with
   LINCS errors after 25 steps, while the original tpr runs properly. I'm
  not
   sure what's missing here.
  
  
   On Tue, Sep 17, 2013 at 6:12 PM, Mark Abraham 
 mark.j.abra...@gmail.com
  wrote:
  
   No. Theoretically useful, but not implemented.
  
   Mark
  
   On Tue, Sep 17, 2013 at 4:45 PM, Guanglei Cui
   amber.mail.arch...@gmail.com

Re: [gmx-users] How to restart the crashed run

2013-09-17 Thread Mark Abraham
http://www.gromacs.org/Documentation/How-tos/Doing_Restarts suggests
the 3.x-era restart strategy when checkpoint files are unavailable.
But if you simply have no output files, then you have no ability to
restart.
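
A hedged sketch of that 3.x-era route, assuming the default output names
survived the crash (tpbconv reads the last complete frame from the trajectory
and energy files; add -extend if you also want more steps):

  tpbconv -s topol.tpr -f traj.trr -e ener.edr -o restart.tpr
  mdrun -s restart.tpr -deffnm restarted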

Mark

On Tue, Sep 17, 2013 at 1:59 AM, Mahboobeh Eslami
mahboobeh.esl...@yahoo.com wrote:
 hi my friends

 please help me

 i did 20 ns simulation by gromacs 4.5.5 but the power was shut down near the 
 end of the simulation
 How to restart the crashed run?

  on gromacs.org the following command has been proposed

 mdrun -s topol.tpr -cpi state.cpt

 but i don't have state.cpt in my folder.

 I need urgent help
 Thank you very much

 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Difficulties with MPI in gromacs 4.6.3

2013-09-17 Thread Mark Abraham
On Tue, Sep 17, 2013 at 2:04 AM, Kate Stafford kastaff...@gmail.com wrote:
 Hi all,

 I'm trying to install and test gromacs 4.6.3 on our new cluster, and am
 having difficulty with MPI. Gromacs has been compiled against openMPI
 1.6.5. The symptom is, running a very simple MPI process for any of the
 DHFR test systems:

 orterun -np 2 mdrun_mpi -s topol.tpr

 produces this openMPI warning:

 --
 An MPI process has executed an operation involving a call to the
 fork() system call to create a child process.  Open MPI is currently
 operating in a condition that could result in memory corruption or
 other system errors; your MPI job may hang, crash, or produce silent
 data corruption.  The use of fork() (or system() or other calls that
 create child processes) is strongly discouraged.

 The process that invoked fork was:

   Local host:  hb0c1n1.hpc (PID 58374)
   MPI_COMM_WORLD rank: 1

 If you are *absolutely sure* that your application will successfully
 and correctly survive a call to fork(), you may disable this warning
 by setting the mpi_warn_on_fork MCA parameter to 0.
 --

Hmm. That warning is a known issue in some cases:
http://www.open-mpi.org/faq/?category=openfabrics#ofa-fork but should
not be an issue for the above mdrun command, since it should call none
of popen/fork/system. You might like to try some of the diagnostics on
that page.
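
For reference, the warning can be silenced the way it suggests (standard
Open MPI MCA syntax), though that only hides the message and will not change
the memory behaviour:

  orterun --mca mpi_warn_on_fork 0 -np 2 mdrun_mpi -s topol.tpr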

 ...which is immediately followed by program termination by the cluster
 queue due to exceeding the allotted memory for the job. This behavior
 persists no matter how much memory I use, up to 16GB per thread, which is
 surely excessive for any of the DHFR benchmarks. Turning the warning off,
 of course, simply suppresses the output, but doesn't affect the memory
 usage.

I can think of no reason for or past experience of this behaviour. Is
it possible for you to run mdrun_mpi in a debugger and get a call
stack trace to help us diagnose?

 The openMPI install works fine with other MPI-enabled programs, including
 gromacs 4.5.5, so the problem is specific to 4.6.3. The thread-MPI version
 of 4.6.3 is also fine.

OK, thanks, good diagnosis. Some low-level stuff did get refactored
after 4.6.1. I don't think that will be the issue here, but you could
see if it produces the same symptoms / magically works.

 The 4.6.3 MPI executable was compiled with:

 cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/nfs/apps/cuda/5.5.22
 -DGMX_MPI=ON -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON

 But the presence of the GPU or static libs related flags seems not to
 affect the behavior. The gcc version (4.4 or 4.8) doesn't matter either.

 Any insight as to what I'm doing wrong here?

So far I'd say the problem is not of your making :-(

Mark
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Standard errors

2013-09-17 Thread Mark Abraham
Standard error and standard deviation measure different things. Please
consult a general work on reporting scientific results.
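
(Briefly: the standard deviation estimates the spread of the individual
dV/dlambda samples, while the standard error, which the std. dev. / sqrt(n-1)
column approximates, estimates the uncertainty of the mean and is what
belongs on an error bar for the average. Beware that correlated MD samples
make the effective n much smaller; g_analyze -ee gives a block-averaged error
estimate that accounts for this.)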

Mark

On Mon, Sep 16, 2013 at 7:40 AM, afsaneh maleki
maleki.afsa...@gmail.com wrote:
 Dear all



 I would like to calculate the standard deviation (as the error bar) for
 dV/dlanda.xvg file. I used g_analyze command as the following:



 g_analyze -f free_bi_0.9.xvg -av average_0.9

 I got:

 set   average        standard deviation   std. dev. / sqrt(n-1)   …

 SS1   6.053822e+01   3.062230e+01         1.936724e-02 …

 Is the value in the third column (standard deviation) or the fourth column
 (std. dev. / sqrt(n-1)) better to use as the standard error?

 I want to draw dG/d lambda via lambda and show error bar for free energy.



 Thanks in advance

 Afsaneh
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re: Seeking solution for the error Atom OXT in residue TRP 323 was not found in rtp entry TRP with 24 atoms while sorting atoms.

2013-09-17 Thread Mark Abraham
Please answer all of Justin's questions. What is in the PDB file -
what should the C terminus be!

Mark

On Tue, Sep 17, 2013 at 2:27 AM, Santhosh Kumar Nagarajan
santhoshraja...@gmail.com wrote:
 I have tried it Tsjerk.. But the same error is shown again..

 -
 Santhosh Kumar Nagarajan
 MTech Bioinformatics
 SRM University
 Chennai
 India
 --
 View this message in context: 
 http://gromacs.5086.x6.nabble.com/Seeking-solution-for-the-error-Atom-OXT-in-residue-TRP-323-was-not-found-in-rtp-entry-TRP-with-24-at-tp5011015p5011224.html
 Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] How to restart the crashed run

2013-09-17 Thread Mark Abraham
Hi,

Please keep discussion on the mailing list.

If you have a .cpt file that is not called state.cpt, then you must
have asked for the checkpoint file to be named md.cpt in your original
mdrun command (e.g. with mdrun -cpo md). state.cpt is simply the
default filename (and mostly there is no reason to change that).
Simply use md.cpt, now that you have it :-)

Mark


On Tue, Sep 17, 2013 at 2:49 AM, Mahboobeh Eslami
mahboobeh.esl...@yahoo.com wrote:
 I have md.cpt but I haven't been able to restart my run.
 What is the purpose of the state.cpt file?
 thank you so much

 From: Mark Abraham mark.j.abra...@gmail.com
 To: Mahboobeh Eslami mahboobeh.esl...@yahoo.com; Discussion list for
 GROMACS users gmx-users@gromacs.org
 Sent: Tuesday, September 17, 2013 9:36 AM
 Subject: Re: [gmx-users] How to restart the crashed run

 http://www.gromacs.org/Documentation/How-tos/Doing_Restarts suggests
 the 3.x-era restart strategy when checkpoint files are unavailable.
 But if you simply have no output files, then you have no ability to
 restart.

 Mark

 On Tue, Sep 17, 2013 at 1:59 AM, Mahboobeh Eslami
 mahboobeh.esl...@yahoo.com wrote:
 hi my friends

 please help me

 i did 20 ns simulation by gromacs 4.5.5 but the power was shut down near
 the end of the simulation
 How to restart the crashed run?

  in the gromacs.org following comment has been proposed

 mdrun -s topol.tpr -cpi state.cpt

 but i don't have state.cpt in my folder.

 I need urgent help
 Thank you very much

 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] How to restart the crashed run

2013-09-17 Thread Mark Abraham
You named them last time you ran mdrun, either explicitly or with
-deffnm. Do the same.
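
In other words, something like (assuming the original run used -deffnm md):

  mdrun -deffnm md -cpi md.cpt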

Mark

On Tue, Sep 17, 2013 at 3:19 AM, Mahboobeh Eslami
mahboobeh.esl...@yahoo.com wrote:
 When I use the following command to restart:

 mdrun -s md.tpr -cpi md.cpt

 I get the following error:

 Output file appending has been requested,
 but some output files listed in the checkpoint file md.cpt
 are not present or are named differently by the current program:
 output files present: md.log
 output files not present or named differently: md.trr md.edr


 But I do have md.trr and md.edr in my folder.

 Thank you so much.





