Re: [gmx-users] Diffusion/PBC

2013-11-05 Thread Tsjerk Wassenaar
Hi Debashis,

Make sure that the anion and receptor are together in the reference
structure you use for trjconv -pbc nojump.
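
A minimal sketch of one way to do that (traj.xtc, md.tpr and ref.gro are
placeholder names): first build a reference frame in which the receptor and
anion sit together, then remove jumps relative to it:

trjconv -s md.tpr -f traj.xtc -pbc cluster -dump 0 -o ref.gro
trjconv -s ref.gro -f traj.xtc -pbc nojump -o nojump.xtc

-pbc cluster prompts for a group; choosing an index group that contains both
the receptor and the anion should keep them together in ref.gro.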

Cheers,

Tsjerk


On Tue, Nov 5, 2013 at 8:12 AM, Debashis Sahu debashis.sah...@gmail.com wrote:

 Dear All,
   I have a problem related to a jumping trajectory. In my MD
 run, there is a receptor molecule that binds a halide anion in
 water solvent. In the original trajectory the binding between them looks
 fine, but jumping is present. To remove the jumping of the system from the
 trajectory, I used 'nojump' as discussed in the forum. Now I have a
 jump-free trajectory, but due to diffusion I have observed that
 the anion and the receptor are far away from each other. I could not fix
 the problem. Can anyone suggest something?
 Thanks in advance.
 with regards,
 *Debashis Sahu*
 *Central Salt and Marine Chemical Research Institute*
 *Bhavnagar, Gujarat*
 *India, 364002.*




-- 
Tsjerk A. Wassenaar, Ph.D.


[gmx-users] Re: Using mpirun on CentOS 6.0

2013-11-05 Thread bharat gupta
Hi,

I am getting the following error while using the command -

[root@localhost INGT]# mpirun -np 24 mdrun_mpi -v -deffnm npt

Error -

/usr/bin/mpdroot: open failed for root's mpd conf file
mpiexec_localhost.localdomain (__init__ 1208): forked process failed;
status=255

I compiled gromacs using ./configure --enable-shared --enable-mpi. I have
installed the mpich package; this is what I get when I check for mpirun
and mpiexec:

[root@localhost /]# which mpirun
/usr/bin/mpirun
[root@localhost /]# which mpiexec
/usr/bin/mpiexec

What could be the problem here?

Thanks

Bharat
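
The error suggests that MPICH's mpd process manager cannot find its
configuration file. A minimal sketch of one possible fix, assuming MPICH's
mpd and a root shell (the secret word is a placeholder):

echo "secretword=change_me" > /etc/mpd.conf
chmod 600 /etc/mpd.conf
mpd --daemon
mpirun -np 24 mdrun_mpi -v -deffnm npt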


[gmx-users] free energy

2013-11-05 Thread kiana moghaddam
Dear GMX Users



I am using the parmbsc0 force field to study a DNA-ligand interaction, but my
problem is the free energy calculation (MM/PBSA) for this interaction. How can
I calculate the free energy using the MM/PBSA approach?

Thank you very much for your time and consideration.


Best Regards
Kiana 


[gmx-users] extra gro file generation

2013-11-05 Thread sarah k
Dear all,

I'm going to perform a molecular dynamics simulation on a protein. By
default the simulation gives one final *.gro file. I need to get a .gro
file after each, say, 500 ps of my simulation, in addition to the final file.
How can I do so?

Best regards,
Sarah Keshavarz


Re: [gmx-users] extra gro file generation

2013-11-05 Thread Riccardo Concu
Dear Sarah,
you have to use the trjconv command with the flags -b, -e and -sep.
For example: trjconv -f xxx.trr -s xxx.tpr -o out.gro -b (first frame
to read, in ps) -e (last frame to read, in ps) -sep
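
A concrete sketch for one frame every 500 ps (assuming the default file
names traj.trr and topol.tpr; group 0 writes the whole system):

echo 0 | trjconv -f traj.trr -s topol.tpr -o frame.gro -sep -dt 500

-sep writes numbered files frame0.gro, frame1.gro, ..., and -dt 500 keeps
only frames whose time is a multiple of 500 ps.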
Regards
On Tue, 05-11-2013 at 01:04 -0800, sarah k wrote:
 Dear all,
 
 I'm going to perform a molecular dynamics simulation on a protein. By
 default the simulation gives one final *.gro file. I need to get a .gro
 file after each, say, 500 ps of my simulation, in addition to the final file.
 How can I do so?
 
 Best regards,
 Sarah Keshavarz




Re: [gmx-users] choosing force field

2013-11-05 Thread massimo sandal
Just out of curiosity: why can you only choose between GROMOS force fields?


2013/11/5 pratibha kapoor kapoorpratib...@gmail.com

 Dear all

 I would like to carry out unfolding simulations of my dimeric protein and
 would like to know which is the better force field to work with: GROMOS96
 43 or 53? Also, is the GROMOS96 43a1 force field redundant?
 When I searched the previous archive, I could see a similar question was
 raised for the GROMOS96 43a3 ff, and could make out that 53a6, 53a7, ... have
 an entirely different approach to parameterization compared to the 43a3 ff,
 and that 43a3 would give more stable structures.
 So is the case with my simulations, but with force field 43a1 (instead of
 43a3). I could see an extra non-native helix when I carried out simulations
 with ff 43a1 which is not present with the 53a7 ff. I have no experimental
 data/resources to confirm this. Also, simulations on my system have not been
 done before.
 I would like to know which of the two simulations I should consider
 more reliable: 43a1 or 53a7?
 Thanks in advance.


Re: [gmx-users] Gromacs-4.6 on two Titans GPUs

2013-11-05 Thread James Starlight
My suggestions:

1) During compilation using -march=corei7-avx-i I obtained an error that
something was not found (sorry, I didn't save the log), so I compiled gromacs
without this flag.

2) I have twice the performance using just 1 GPU by means of

mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v -deffnm md_CaM_test

than when using both GPUs:

mdrun -ntmpi 2 -ntomp 12 -gpu_id 01 -v -deffnm md_CaM_test

In the last case I obtained the warning

WARNING: Oversubscribing the available 12 logical CPU cores with 24 threads.
         This will cause considerable performance loss!

How could this be fixed?
All GPUs are recognized correctly:


2 GPUs detected:
  #0: NVIDIA GeForce GTX TITAN, compute cap.: 3.5, ECC:  no, stat:
compatible
  #1: NVIDIA GeForce GTX TITAN, compute cap.: 3.5, ECC:  no, stat:
compatible


James


2013/11/4 Szilárd Páll pall.szil...@gmail.com

 You can use the -march=native flag with gcc to optimize for the CPU
 your are building on or e.g. -march=corei7-avx-i for Intel Ivy Bridge
 CPUs.
 --
 Szilárd Páll


 On Mon, Nov 4, 2013 at 12:37 PM, James Starlight jmsstarli...@gmail.com
 wrote:
  Szilárd, thanks for the suggestion!
 
  What kind of CPU optimisation should I take into account, assuming that I'm
  using a dual-GPU Nvidia TITAN workstation with a 6-core i7 (recognized as
  12 nodes in Debian)?
 
  James
 
 
  2013/11/4 Szilárd Páll pall.szil...@gmail.com
 
  That should be enough. You may want to use the -march (or equivalent)
  compiler flag for CPU optimization.
 
  Cheers,
  --
  Szilárd Páll
 
 
  On Sun, Nov 3, 2013 at 10:01 AM, James Starlight jmsstarli...@gmail.com
  wrote:
   Dear Gromacs Users!
  
   I'd like to compile the latest 4.6 Gromacs with native GPU support on my
   i7 cpu with dual GeForce Titan GPUs mounted. With this config I'd like to
   perform simulations using the cpu as well as both gpus simultaneously.
  
   What flags besides
  
   cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-5.5
  
   should I define to CMAKE for compiling optimized gromacs on such a
   workstation?
  
   Thanks for help
  
   James


[gmx-users] Energy minimization has stopped....

2013-11-05 Thread Kalyanashis Jana
Hi,
  Whenever I try to do a position restrained MD run, it stops in the
middle of the run with the following error. Can you please suggest
something to resolve this error?
Energy minimization has stopped, but the forces have not converged to the
requested precision Fmax < 100 (which may not be possible for your system).
It stopped because the algorithm tried to make a new step whose size was
too small, or there was no change in the energy since the last step. Either
way, we regard the minimization as converged to within the available machine
precision, given your starting configuration and EM parameters.

Double precision normally gives you higher accuracy, but this is often not
needed for preparing to run molecular dynamics.

writing lowest energy coordinates.

Steepest Descents converged to machine precision in 20514 steps,
but did not reach the requested Fmax < 100.
Potential Energy  = -9.9811250e+06
Maximum force =  6.1228135e+03 on atom 15461
Norm of force =  1.4393512e+01

gcq#322: The Feeling of Power was Intoxicating, Magic (Frida Hyvonen)

-- 
Kalyanashis Jana
email: kalyan.chem...@gmail.com


Re: [gmx-users] Gromacs-4.6 on two Titans GPUs

2013-11-05 Thread Richard Broadbent

Dear James,

On 05/11/13 11:16, James Starlight wrote:

My suggestions:

1) During compilation using -march=corei7-avx-i I obtained an error that
something was not found (sorry, I didn't save the log), so I compiled gromacs
without this flag.

2) I have twice the performance using just 1 GPU by means of

mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v -deffnm md_CaM_test

than when using both GPUs:

mdrun -ntmpi 2 -ntomp 12 -gpu_id 01 -v -deffnm md_CaM_test

in the last case I obtained the warning

WARNING: Oversubscribing the available 12 logical CPU cores with 24 threads.
  This will cause considerable performance loss!

here you are requesting 2 thread-mpi processes, each with 12 openmp
threads, hence a total of 24 threads; however, even with hyper-threading
enabled there are only 12 threads on your machine. Therefore, only
allocate 12. Try

mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v -deffnm md_CaM_test

or even

mdrun -v -deffnm md_CaM_test

I believe it should autodetect the GPUs and run accordingly. For details
of how to use gromacs with mpi/thread-mpi, openmp and GPUs see

http://www.gromacs.org/Documentation/Acceleration_and_parallelization

which describes how to use these systems.

Richard




Re: [gmx-users] extra gro file generation

2013-11-05 Thread Mirco Wahab

On 05.11.2013 10:04, sarah k wrote:

I'm going to perform a molecular dynamics simulation on a protein. By
default the simulation gives one final *.gro file. I need to get a .gro
file after each, say, 500 ps of my simulation, in addition to the final file.
How can I do so?

Riccardo already gave the important hints in another posting,
here are some additional explanations.

# first, generate an empty subdirectory in order to keep
# the simulation directory clean. The rm command is
# important if you repeat these steps

$ mkdir -p GRO/ ; rm -rf GRO/*.gro

# then, decide which part of the system you need:
# 0 - everything
# 1 - the protein
# 2 - the cofactor (if any)
# Remember: these numbers correspond to the order of molecules
# named in the .top-file. If your protein is 1 and you
# need only that, do a

echo 1 | trjconv -b 500 -noh -novel -skip 2 -sep -nzero 5 -o GRO/out.gro

# this will dump the system part 1 (the protein or whatever),
# starting from 500 ps (-b) and saving every 2nd trajectory snapshot.

For each option to trjconv (-noh, -novel), please read the manual 
(where all of this can be found).


Regards

M.




Re: [gmx-users] Gromacs-4.6 on two Titans GPUs

2013-11-05 Thread James Starlight
Dear Richard,


1) mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v -deffnm md_CaM_test
gave me a performance of about 25 ns/day for the explicit-solvent system
consisting of 68k atoms (charmm ff, 1.0 nm cutoffs)

2) mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v -deffnm md_CaM_test
gave slightly worse performance in comparison to 1)

finally

3) mdrun -deffnm md_CaM_test
ran in the same regime as 2), so it also gave me 22 ns/day for
the same system.

How could the efficiency of using dual GPUs be increased?

James



Re: [gmx-users] Energy minimization has stopped....

2013-11-05 Thread jkrieger
What does your curve look like? What parameters are you using in the mdp?
How big is your system and what kind of molecules are in there? Providing
this kind of information would help people work out what the problem is.

Then again, it may be OK that the minimisation has converged without
reaching the Fmax cutoff. 20514 is a large number of steps.

 Hi,
   Whenever I try to do a position restrained MD run, it stops in the
 middle of the run with the following error. Can you please
 suggest something to resolve this error?
 Energy minimization has stopped, but the forces have not converged to the
 requested precision Fmax < 100 (which may not be possible for your system).
 It stopped because the algorithm tried to make a new step whose size was
 too small, or there was no change in the energy since the last step. Either
 way, we regard the minimization as converged to within the available machine
 precision, given your starting configuration and EM parameters.

 Double precision normally gives you higher accuracy, but this is often not
 needed for preparing to run molecular dynamics.

 writing lowest energy coordinates.

 Steepest Descents converged to machine precision in 20514 steps,
 but did not reach the requested Fmax < 100.
 Potential Energy  = -9.9811250e+06
 Maximum force =  6.1228135e+03 on atom 15461
 Norm of force =  1.4393512e+01

 gcq#322: The Feeling of Power was Intoxicating, Magic (Frida Hyvonen)

 --
 Kalyanashis Jana
 email: kalyan.chem...@gmail.com





Re: [gmx-users] Energy minimization has stopped....

2013-11-05 Thread Justin Lemkul



On 11/5/13 6:28 AM, Kalyanashis Jana wrote:

Hi,
  Whenever I try to do a position restrained MD run, it stops in the
middle of the run with the following error. Can you please suggest
something to resolve this error?
Energy minimization has stopped, but the forces have not converged to the
requested precision Fmax < 100 (which may not be possible for your system).
It stopped because the algorithm tried to make a new step whose size was
too small, or there was no change in the energy since the last step. Either
way, we regard the minimization as converged to within the available machine
precision, given your starting configuration and EM parameters.

Double precision normally gives you higher accuracy, but this is often not
needed for preparing to run molecular dynamics.

writing lowest energy coordinates.

Steepest Descents converged to machine precision in 20514 steps,
but did not reach the requested Fmax < 100.
Potential Energy  = -9.9811250e+06
Maximum force =  6.1228135e+03 on atom 15461
Norm of force =  1.4393512e+01



Visualize the output, specifically near atom 15461.  The forces there are too 
high and cannot be resolved further.  Any attempt to use these coordinates for 
dynamics will probably lead to a crash.
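
A quick way to find that atom (a sketch; assuming the minimized coordinates
were written to em.gro): in the fixed .gro format, atom N sits on line N+2
because of the two header lines, so

sed -n '15463p' em.gro

prints the line for atom 15461, including its residue name and number, which
you can then inspect in a viewer.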


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] free energy

2013-11-05 Thread Justin Lemkul



On 11/5/13 3:45 AM, kiana moghaddam wrote:

Dear GMX Users



I am using the parmbsc0 force field to study a DNA-ligand interaction, but my
problem is the free energy calculation (MM/PBSA) for this interaction. How can
I calculate the free energy using the MM/PBSA approach?

Thank you very much for your time and consideration.



An identical question was asked on the list last week, including responses about 
external software that will do these calculations.  Gromacs does not do MM/PBSA, 
but other programs will.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] Re: Energy minimization has stopped....

2013-11-05 Thread Kalyanashis
I have given my .mdp file below:
; title =  trp_drg
warning =  10
cpp =  /usr/bin/cpp
define  =  -DPOSRES
constraints =  all-bonds
integrator  =  md
dt  =  0.002 ; ps !
nsteps  =  1000000 ; total 2000.0 ps.
nstcomm =  100
nstxout =  250 ; output coordinates every 0.5 ps
nstvout =  1000 ; output velocities every 2.0 ps
nstfout =  0
nstlog  =  100
nstenergy   =  100
nstlist =  100
ns_type =  grid
rlist   =  1.0
coulombtype =  PME
rcoulomb=  1.0
vdwtype =  cut-off
rvdw=  1.0
fourierspacing  =  0.12
fourier_nx  =  0
fourier_ny  =  0
fourier_nz  =  0
pme_order   =  6
ewald_rtol  =  1e-5
optimize_fft=  yes
; Berendsen temperature coupling is on
Tcoupl  =  berendsen
tau_t   =  1.0    1.0    -0.1      1.0   1.0
tc_grps =  SOL    NA     protein   OMP   CL
ref_t   =  300    300    300       300   300
; Pressure coupling is on
pcoupl  =  berendsen ; Use Parrinello-Rahman for research work
pcoupltype  =  isotropic ; Use semiisotropic when working with membranes
tau_p   =  2.0
compressibility =  4.5e-5
ref_p   =  1.0
refcoord-scaling=  all
; Generate velocities is on at 300 K.
gen_vel = yes
gen_temp= 300.0
gen_seed= 173529


It is a large protein system containing a drug molecule; the whole system
has about 16000 atoms.
As I did not get any .gro file, the MD run did not finish properly.
Please suggest the probable source of this kind of error.
Thank you so much.




Re: [gmx-users] Energy minimization has stopped....

2013-11-05 Thread Kalyanashis Jana
Dear Justin,
  Can you please tell me how I can solve this problem? If I change the
coordinates of atom 15461, will it help me? I already tried this by changing
the position of the drug molecule, and I got the same error.


-- 
Kalyanashis Jana
email: kalyan.chem...@gmail.com


Re: [gmx-users] Re: Energy minimization has stopped....

2013-11-05 Thread Justin Lemkul



On 11/5/13 7:19 AM, Kalyanashis wrote:

I have given my .mdp file below:
; title =  trp_drg
warning =  10
cpp =  /usr/bin/cpp
define  =  -DPOSRES
constraints =  all-bonds
integrator  =  md
dt  =  0.002 ; ps !
nsteps  =  1000000 ; total 2000.0 ps.
nstcomm =  100
nstxout =  250 ; output coordinates every 0.5 ps
nstvout =  1000 ; output velocities every 2.0 ps
nstfout =  0
nstlog  =  100
nstenergy   =  100
nstlist =  100
ns_type =  grid
rlist   =  1.0
coulombtype =  PME
rcoulomb=  1.0
vdwtype =  cut-off
rvdw=  1.0
fourierspacing  =  0.12
fourier_nx  =  0
fourier_ny  =  0
fourier_nz  =  0
pme_order   =  6
ewald_rtol  =  1e-5
optimize_fft=  yes
; Berendsen temperature coupling is on
Tcoupl  =  berendsen
tau_t   =  1.0    1.0    -0.1      1.0   1.0
tc_grps =  SOL    NA     protein   OMP   CL
ref_t   =  300    300    300       300   300


These settings make no sense.  Please read 
http://www.gromacs.org/Documentation/Terminology/Thermostats.
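
For illustration only (a sketch of my own, not part of the original
exchange): a more conventional grouping couples everything into two groups
with a positive time constant, e.g.

Tcoupl  =  berendsen
tc_grps =  Protein   Non-Protein
tau_t   =  0.1       0.1
ref_t   =  300       300

Small groups such as single ion species should not get their own thermostat,
and tau_t must be positive.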



; Pressure coupling is on
pcoupl  =  berendsen ; Use Parrinello-Rahman for research work
pcoupltype  =  isotropic ; Use semiisotropic when working with membranes
tau_p   =  2.0
compressibility =  4.5e-5
ref_p   =  1.0
refcoord-scaling=  all
; Generate velocities is on at 300 K.
gen_vel = yes
gen_temp= 300.0
gen_seed= 173529


It is a large protein system containing a drug molecule; the whole system
has about 16000 atoms.
As I did not get any .gro file, the MD run did not finish properly.
Please suggest the probable source of this kind of error.


The run crashes because your energy minimization effectively failed.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] Energy minimization has stopped....

2013-11-05 Thread Justin Lemkul



On 11/5/13 7:31 AM, Kalyanashis Jana wrote:

Dear Justin,
   Can you please tell me how I can solve this problem? If I change the
coordinates of atom 15461, will it help me? I already tried this by changing
the position of the drug molecule, and I got the same error.



You should first do what I suggested before.  The reason for a large force is 
either (1) bad atomic clashes that should be apparent upon visual inspection or 
(2) bad topology for the drug.


-Justin





--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


[gmx-users] mdrun

2013-11-05 Thread MUSYOKA THOMMAS
Dear Users,
I am running MD simulations of a protein-ligand system. Sometimes when I do
an mdrun, be it for the energy minimization, the nvt and npt
equilibration, or the actual md run step, the output files are
named in a very odd way (strange extensions), e.g. em.gro.tprr, md.tpr.cpt,
md.tpr.xtc.

Can anyone explain the cause of this?

Thanks


Re: [gmx-users] mdrun

2013-11-05 Thread Justin Lemkul



On 11/5/13 7:37 AM, MUSYOKA THOMMAS wrote:

Dear Users,
I am running MD simulations of a protein-ligand system. Sometimes when I do
an mdrun, be it for the energy minimization, the nvt and npt
equilibration, or the actual md run step, the output files are
named in a very odd way (strange extensions), e.g. em.gro.tprr, md.tpr.cpt,
md.tpr.xtc.

Can anyone explain the cause of this?



You are issuing the command in a way that you probably don't want.  I suspect 
what you are doing is:


mdrun -deffnm md.tpr

The -deffnm option is for the base file name and should not include an
extension.  mdrun is only doing what you tell it; you're saying, "all my files
are named md.tpr, and you can put whatever the necessary extension is on them."


What you want is:

mdrun -deffnm md

-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] free energy

2013-11-05 Thread Kieu Thu Nguyen
Dear Kiana,

You can contact Paissoni Cristina (email: paissoni.crist...@hsr.it) to
get a tool for MM/PBSA with GROMACS.
Hope it helps :)

Cheers,
Kieu Thu




Re: [gmx-users] mdrun

2013-11-05 Thread MUSYOKA THOMMAS
Dear Dr Justin,
Much appreciation. You nailed it.
Kind regards.






-- 

MUSYOKA THOMMAS MUTEMI (Mob no. +27844846540)
B.Sc Biochemistry (Kenyatta University), MSc Pharmaceutical Science
(Nagasaki University)
PhD Student-Bioinformatics (Rhodes University), Skype ID: MUSYOKA THOMMAS
MUTEMI
Alternative email: thom...@sia.co.ke

"Do all the good you can, By all the means you can, In all the ways you
can, In all the places you can, At all the times you can, To all the people
you can, As long as ever you can." - John Wesley


[gmx-users] CHARMM .mdp settings for GPU

2013-11-05 Thread rajat desikan
Dear All,
I intend to run a membrane-protein system on a GPU. I am slightly confused
about the .mdp settings.

Non-GPU settings (according to the original CHARMM FF paper):

rlist            = 1.0
rlistlong        = 1.4
rvdw_switch      = 0.8
vdwtype          = Switch
coulombtype      = pme
DispCorr         = EnerPres ; only useful with reaction-field and pme or pppm
rcoulomb         = 1.0
rcoulomb_switch  = 0.0
rvdw             = 1.2

For cutoff-scheme = Verlet, shouldn't rvdw = rcoulomb? How should the above
settings be modified?
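
For comparison, a minimal sketch of what the Verlet scheme accepts in 4.6
(my assumption, not a vetted CHARMM setup: that scheme supports only plain
cut-offs with a potential shift, so the switch is dropped and rvdw must
equal rcoulomb):

cutoff-scheme = Verlet
vdwtype       = cut-off
rvdw          = 1.2
rcoulomb      = 1.2
coulombtype   = pme
DispCorr      = EnerPres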

Thank you.


-- 
Rajat Desikan (Ph.D Scholar)
Prof. K. Ganapathy Ayappa's Lab (no 13),
Dept. of Chemical Engineering,
Indian Institute of Science, Bangalore


Re: [gmx-users] Gromacs-4.6 on two Titans GPUs

2013-11-05 Thread Mark Abraham
On Tue, Nov 5, 2013 at 12:55 PM, James Starlight jmsstarli...@gmail.com wrote:

 Dear Richard,


 1) mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v -deffnm md_CaM_test
 gave me a performance of about 25 ns/day for the explicit-solvent system
 consisting of 68k atoms (charmm ff, 1.0 nm cutoffs)

 2) mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v -deffnm md_CaM_test
 gave slightly worse performance in comparison to 1)


Richard suggested

mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test,

which looks correct to me. -ntomp 6 is probably superfluous

Mark



[gmx-users] Re: Replacing atom

2013-11-05 Thread J Alizadeh
Hi,
I need to replace an atom with another in the system under consideration.
I'd like to know if it is possible and, if so, what changes I need to make.

thanks
j.rahrow





Re: [gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Timo Graen

29420 atoms, with some tuning of the write-out and communication intervals:
nodes again: 2 x Xeon E5-2680v2 + 2 x NVIDIA K20X GPGPUs @ 4fs vsites
1 node   212 ns/day
2 nodes  295 ns/day


Re: [gmx-users] extra gro file generation

2013-11-05 Thread sarah k
Dear Riccardo Concu and Mirco Wahab,

Thanks for your perfect responses.

Regards,
Sarah
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re: Replacing atom

2013-11-05 Thread Justin Lemkul



On 11/5/13 10:34 AM, J Alizadeh wrote:

Hi,
I need to replace one atom with another in the system under consideration.
I'd like to know if it is possible and, if so, what changes I need to make.



The coordinate file replacement is trivial.  Just open the file in a text editor 
and rename the atom.  The topology is trickier, because you need a whole new 
set of parameters.
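
For instance, if a hypothetical ligand LIG had a chlorine atom CL to be renamed 
to BR, the edit in the .gro file is a single field (keep the fixed-width columns 
aligned):

    1LIG     CL    5   1.234   2.345   3.456

becomes

    1LIG     BR    5   1.234   2.345   3.456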


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Gentle heating with implicit solvent

2013-11-05 Thread Gianluca Interlandi
I wonder whether increasing the surface tension parameter 
sa-surface-tension might solve the problem with the protein unfolding.


Thanks,

 Gianluca

On Mon, 4 Nov 2013, Gianluca Interlandi wrote:


Hi Justin,

We are using infinite cutoffs (all vs all). Here is the mdp file for the 
heating (please note that -DPOSRES is commented out) and the time step is 1 
fs:


; VARIOUS PREPROCESSING OPTIONS =
title=
cpp  = /lib/cpp
include  =
;define   = -DPOSRES

; RUN CONTROL PARAMETERS =
integrator   = md
; start time and timestep in ps =
tinit= 0
dt   = 0.001
nsteps   = 20
; mode for center of mass motion removal =
comm-mode= Linear
; number of steps for center of mass motion removal =
nstcomm  = 1
; group(s) for center of mass motion removal =
comm-grps=

; LANGEVIN DYNAMICS OPTIONS =
; Temperature, friction coefficient (amu/ps) and random seed =
;bd-temp  = 300
bd-fric  = 0
ld_seed  = 1993

; IMPLICIT SOLVENT OPTIONS =
implicit-solvent = GBSA
gb-algorithm = OBC
rgbradii = 0

; ENERGY MINIMIZATION OPTIONS =
; Force tolerance and initial step-size =
emtol= 0.01
emstep   = 0.01
; Max number of iterations in relax_shells =
niter= 100
; Step size (1/ps^2) for minimization of flexible constraints =
fcstep   = 0
; Frequency of steepest descents steps when doing CG =
nstcgsteep   = 1000

; OUTPUT CONTROL OPTIONS =
; Output frequency for coords (x), velocities (v) and forces (f) =
nstxout  = 0
nstvout  = 0
nstfout  = 0
; Output frequency for energies to log file and energy file =
nstlog   = 100
nstenergy= 100
; Output frequency and precision for xtc file =
nstxtcout= 1000
xtc_precision= 1000
; This selects the subset of atoms for the xtc file. You can =
; select multiple groups. By default all atoms will be written. =
xtc-grps =
; Selection of energy groups =
energygrps   =

; NEIGHBORSEARCHING PARAMETERS =
; nblist update frequency =
nstlist  = 0
; ns algorithm (simple or grid) =
ns_type  = simple
; Periodic boundary conditions: xyz or no =
pbc  = no
; nblist cut-off =
rlist= 0
;rlistlong= 1.8
domain-decomposition = no

; OPTIONS FOR ELECTROSTATICS AND VDW =
; Method for doing electrostatics =
coulombtype  = Cut-off
rcoulomb_switch  = 0
rcoulomb = 0
; Dielectric constant (DC) for cut-off or DC of reaction field =
epsilon_r= 1
; Method for doing Van der Waals =
vdw-type = Cut-off
; cut-off lengths=
rvdw_switch  = 0
rvdw = 0
; Apply long range dispersion corrections for Energy and Pressure =
DispCorr = No
; Spacing for the PME/PPPM FFT grid =
fourierspacing   = 0.1
; FFT grid size, when a value is 0 fourierspacing will be used =
fourier_nx   = 0
fourier_ny   = 0
fourier_nz   = 0
; EWALD/PME/PPPM parameters =
pme_order= 4
ewald_rtol   = 1e-05
ewald_geometry   = 3d
epsilon_surface  = 0
optimize_fft = no

; OPTIONS FOR WEAK COUPLING ALGORITHMS =
; Temperature coupling   =
Tcoupl   = V-rescale
; Groups to couple separately =
tc_grps  = Protein
; Time constant (ps) and reference temperature (K) =
tau_t= 0.1
ref_t= 300
; Pressure coupling  =
Pcoupl   = no
Pcoupltype   = isotropic
refcoord_scaling = All
; Time constant (ps), compressibility (1/bar) and reference P (bar) =
tau_p= 1.0
compressibility  = 4.5e-5
ref_p= 1.0

; SIMULATED ANNEALING CONTROL =
annealing= single
; Number of time points to use for specifying annealing in each group
annealing_npoints = 21
; List of times at the annealing points for each group
annealing_time= 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 
150 160 170 180 190 200

; Temp. at each annealing point, for each group
annealing_temp= 5 30 30 60 60 90 90 120 120 150 150 180 180 210 
210 240 240 270 270 300 300


; GENERATE VELOCITIES FOR STARTUP RUN =
gen_vel  = yes
gen_temp = 5
gen_seed = 173529

; OPTIONS FOR BONDS =
constraints  = none
; Type of constraint algorithm =
;constraint_algorithm = Lincs
; Do not constrain the start configuration =
unconstrained_start  = no
; Use successive overrelaxation to reduce the number of shake iterations =

Re: [gmx-users] Re: Using gromacs on Rocks cluster

2013-11-05 Thread Mark Abraham
You need to configure your MPI environment to do so (so read its docs).
GROMACS can only do whatever that makes available.
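
With an MPICH-style setup, for example, this usually comes down to something
like the following (the hostname and file names are hypothetical; check your
MPI's documentation for the exact mechanism):

echo "compute-0-0:32" > hosts.txt
mpirun -np 32 -machinefile hosts.txt mdrun_mpi -v -deffnm nvt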

Mark


On Tue, Nov 5, 2013 at 2:16 AM, bharat gupta bharat.85.m...@gmail.comwrote:

 Hi,

 I have installed Gromacs 4.5.6 on a Rocks cluster 6.0 and my system has
 32 processors (CPUs). But while running the NVT equilibration step, it uses
 only 1 CPU and the others remain idle. I have compiled Gromacs using the
 --enable-mpi option. How can I make mdrun use all 32 processors?

 --
 Bharat
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Szilárd Páll
Timo,

Have you used the default settings, that is one rank/GPU? If that is
the case, you may want to try using multiple ranks per GPU; this can
often help when you have 4-6 cores/GPU. Separate PME ranks are not
switched on by default with GPUs; have you tried using any?
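
As a rough sketch of both ideas on a 2 x 10-core node with 2 GPUs (the
rank/thread counts are illustrative guesses to benchmark, not a
recommendation):

# four PP ranks sharing the two GPUs, five OpenMP threads each
mdrun -ntmpi 4 -ntomp 5 -gpu_id 0011 -deffnm md

# six thread-MPI ranks, two of them dedicated PME ranks; the gpu_id
# string maps only the four PP ranks onto the GPUs
mdrun -ntmpi 6 -npme 2 -gpu_id 0011 -deffnm md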

Cheers,
--
Szilárd Páll


On Tue, Nov 5, 2013 at 3:29 PM, Timo Graen tgr...@gwdg.de wrote:
 29420 atoms, with some tuning of the write-out and communication intervals:
 nodes again: 2 x Xeon E5-2680v2 + 2 x NVIDIA K20X GPGPUs @ 4fs vsites
 1 node   212 ns/day
 2 nodes  295 ns/day

 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the www
 interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-05 Thread Dwey
Hi Mike,


I have a similar configuration, except on a cluster of AMD-based linux
platforms with 2 GPU cards.

Your suggestion works. However, the performance with 2 GPUs discourages
me because, for example, with 1 GPU our compute node can easily
obtain a simulation rate of 31 ns/day for a protein of 300 amino acids, but
with 2 GPUs it only goes as far as 38 ns/day. I am very curious as to why
the performance of 2 GPUs is below expectation. Is there any overhead
that we should pay attention to?  Note that these 2 GPU cards are
linked by an SLI bridge within the same node.

Since the compute nodes of our cluster each have at least one GPU but
are connected by slow network cards (1 Gb/sec), I unfortunately doubt
that the performance will be proportional to the total number of GPU
cards.  I am wondering if you have any suggestions about a cluster of
GPU nodes.  For example, will InfiniBand networking help increase the
final performance when we execute an MPI task? Or what else? Or should
we forget about MPI and use a single GPU instead?

Any suggestion is highly appreciated.
Thanks.

Dwey

 Date: Tue, 5 Nov 2013 16:20:39 +0100
 From: Mark Abraham mark.j.abra...@gmail.com
 Subject: Re: [gmx-users] Gromacs-4.6 on two Titans GPUs
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 Message-ID:
 camnumasm5ht40ub+unppv7gmhqzxsb6psewma+hblv+gnb2...@mail.gmail.com
 Content-Type: text/plain; charset=ISO-8859-1

 On Tue, Nov 5, 2013 at 12:55 PM, James Starlight 
 jmsstarli...@gmail.comwrote:

 Dear Richard,


 1)  mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v  -deffnm md_CaM_test
 gave me performance of about 25 ns/day for the explicitly solvated system
 consisting of 68k atoms (charmm ff, 1.0 nm cutoffs)

 2) gave slightly worse performance in comparison to 1)


 Richard suggested

 mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test,

 which looks correct to me. -ntomp 6 is probably superfluous

 Mark


 finally

 3) mdrun -deffnm md_CaM_test
 running in the same regime as in 2), so it also gave me 22 ns/day for
 the same system.

 How could the efficiency of using dual GPUs be increased?

 James


 2013/11/5 Richard Broadbent richard.broadben...@imperial.ac.uk

  Dear James,
 
 
  On 05/11/13 11:16, James Starlight wrote:
 
  My suggestions:
 
  1) During compilation using -march=corei7-avx-i I obtained an error
  that something was not found (sorry, I didn't save the log), so I
  compiled gromacs without this flag
 
  2) I get twice the performance using just 1 gpu by means of
 
  mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v  -deffnm md_CaM_test
 
  than using of both gpus
 
  mdrun -ntmpi 2 -ntomp 12 -gpu_id 01 -v  -deffnm md_CaM_test
 
  in the last case I have obtained warning
 
  WARNING: Oversubscribing the available 12 logical CPU cores with 24
  threads.
This will cause considerable performance loss!
 
   here you are requesting 2 thread-MPI processes, each with 12 OpenMP
  threads, hence a total of 24 threads. However, even with hyper-threading
  enabled there are only 12 threads on your machine. Therefore, only
  allocate 12. Try
 
  mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test
 
  or even
 
  mdrun -v  -deffnm md_CaM_test
 
  I believe it should autodetect the GPUs and run accordingly. For details
  of how to use gromacs with MPI/thread-MPI, OpenMP and GPUs see
 
  http://www.gromacs.org/Documentation/Acceleration_and_parallelization
 
  Which describes how to use these systems
 
  Richard
 
 
   How could it be fixed?
  All GPUs are recognized correctly
 
 
  2 GPUs detected:
 #0: NVIDIA GeForce GTX TITAN, compute cap.: 3.5, ECC:  no, stat:
  compatible
 #1: NVIDIA GeForce GTX TITAN, compute cap.: 3.5, ECC:  no, stat:
  compatible
 
 
  James
 
 
  2013/11/4 Szilárd Páll pall.szil...@gmail.com
 
   You can use the -march=native flag with gcc to optimize for the CPU
  you are building on, or e.g. -march=corei7-avx-i for Intel Ivy Bridge
  CPUs.
  --
  Szilárd Páll
 
 
  On Mon, Nov 4, 2013 at 12:37 PM, James Starlight 
 jmsstarli...@gmail.com
  
  wrote:
 
  Szilárd, thanks for the suggestion!
 
  What kind of CPU optimisation should I take into account, assuming that
  I'm using a dual-GPU Nvidia TITAN workstation with a 6-core i7
  (recognized as 12 logical cores in Debian).
 
  James
 
 
  2013/11/4 Szilárd Páll pall.szil...@gmail.com
 
   That should be enough. You may want to use the -march (or equivalent)
  compiler flag for CPU optimization.
 
  Cheers,
  --
  Szilárd Páll
 
 
  On Sun, Nov 3, 2013 at 10:01 AM, James Starlight 
 
  jmsstarli...@gmail.com
 
  wrote:
 
  Dear Gromacs Users!
 
  I'd like to compile the latest 4.6 Gromacs with native GPU support on
  my i7 cpu with dual GeForce Titan GPUs mounted. With this config I'd
  like to perform simulations using the cpu as well as both gpus
  simultaneously.
 
  What flags besides
 
  cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-5.5
 
  should I define to CMAKE for compiling an optimized gromacs on such a
  workstation?

[gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Dwey Kauffman
Hi Timo,

  Can you provide a benchmark with 1 Xeon E5-2680 and 1 Nvidia
K20X GPGPU on the same test of 29420 atoms?

Are these two GPU cards (within the same node) connected by an SLI (Scalable
Link Interface)?

Thanks,
Dwey

--
View this message in context: 
http://gromacs.5086.x6.nabble.com/Hardware-for-best-gromacs-performance-tp5012124p5012276.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-05 Thread Szilárd Páll
Hi Dwey,

First and foremost, make sure to read the
http://www.gromacs.org/Documentation/Acceleration_and_parallelization
page, in particular the Multiple MPI ranks per GPU section which
applies in your case.

Secondly, please do post log files (pastebin is your friend); the
performance table at the end of the log tells much of the performance
story, and based on it I/we can make suggestions.

Using multiple GPUs requires domain decomposition, which does have a
considerable overhead, especially comparing no DD with DD (i.e. a 1-GPU
run with a 2-GPU run). However, in your case I suspect that the
bottleneck is multi-threaded scaling on the AMD CPUs, and you should
probably decrease the number of threads per MPI rank and share GPUs
between 2-4 ranks.

Regarding scaling across nodes, you can't expect much from gigabit
ethernet - especially not from the cheaper cards/switches, in my
experience even reaction field runs don't scale across nodes with 10G
ethernet if you have more than 4-6 ranks per node trying to
communicate (let alone with PME). However, on infiniband clusters we
have seen scaling to 100 atoms/core (at peak).

Cheers,
--
Szilárd

On Tue, Nov 5, 2013 at 9:29 PM, Dwey mpi...@gmail.com wrote:
 Hi Mike,


 I have a similar configuration, except on a cluster of AMD-based linux
 platforms with 2 GPU cards.

 Your suggestion works. However, the performance with 2 GPUs discourages
 me because, for example, with 1 GPU our compute node can easily
 obtain a simulation rate of 31 ns/day for a protein of 300 amino acids, but
 with 2 GPUs it only goes as far as 38 ns/day. I am very curious as to why
 the performance of 2 GPUs is below expectation. Is there any overhead
 that we should pay attention to?  Note that these 2 GPU cards are
 linked by an SLI bridge within the same node.

 Since the compute nodes of our cluster each have at least one GPU but
 are connected by slow network cards (1 Gb/sec), I unfortunately doubt
 that the performance will be proportional to the total number of GPU
 cards.  I am wondering if you have any suggestions about a cluster of
 GPU nodes.  For example, will InfiniBand networking help increase the
 final performance when we execute an MPI task? Or what else? Or should
 we forget about MPI and use a single GPU instead?

 Any suggestion is highly appreciated.
 Thanks.

 Dwey

 Date: Tue, 5 Nov 2013 16:20:39 +0100
 From: Mark Abraham mark.j.abra...@gmail.com
 Subject: Re: [gmx-users] Gromacs-4.6 on two Titans GPUs
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 Message-ID:
 camnumasm5ht40ub+unppv7gmhqzxsb6psewma+hblv+gnb2...@mail.gmail.com
 Content-Type: text/plain; charset=ISO-8859-1

 On Tue, Nov 5, 2013 at 12:55 PM, James Starlight 
 jmsstarli...@gmail.comwrote:

 Dear Richard,


 1)  mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v  -deffnm md_CaM_test
 gave me performance about 25ns/day for the explicit solved system consisted
 of 68k atoms (charmm ff. 1.0 cutoofs)

 gaves slightly worse performation in comparison to the 1)


 Richard suggested

 mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test,

 which looks correct to me. -ntomp 6 is probably superfluous

 Mark


 finally

 3) mdrun -deffnm md_CaM_test
 running in the same regime as in 2), so it also gave me 22 ns/day for
 the same system.

 How could the efficiency of using dual GPUs be increased?

 James


 2013/11/5 Richard Broadbent richard.broadben...@imperial.ac.uk

  Dear James,
 
 
  On 05/11/13 11:16, James Starlight wrote:
 
  My suggestions:
 
  1) During compilation using -march=corei7-avx-i I obtained an error
  that something was not found (sorry, I didn't save the log), so I
  compiled gromacs without this flag
 
  2) I get twice the performance using just 1 gpu by means of
 
  mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -v  -deffnm md_CaM_test
 
  than using of both gpus
 
  mdrun -ntmpi 2 -ntomp 12 -gpu_id 01 -v  -deffnm md_CaM_test
 
  in the last case I have obtained warning
 
  WARNING: Oversubscribing the available 12 logical CPU cores with 24
  threads.
This will cause considerable performance loss!
 
   here you are requesting 2 thread-MPI processes, each with 12 OpenMP
  threads, hence a total of 24 threads. However, even with hyper-threading
  enabled there are only 12 threads on your machine. Therefore, only
  allocate 12. Try
 
  mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v  -deffnm md_CaM_test
 
  or even
 
  mdrun -v  -deffnm md_CaM_test
 
  I believe it should autodetect the GPUs and run accordingly. For details
  of how to use gromacs with MPI/thread-MPI, OpenMP and GPUs see
 
  http://www.gromacs.org/Documentation/Acceleration_and_parallelization
 
  Which describes how to use these systems
 
  Richard
 
 
   How could it be fixed?
  All GPUs are recognized correctly
 
 
  2 GPUs detected:
 #0: NVIDIA GeForce GTX TITAN, compute cap.: 3.5, ECC:  no, stat:
  compatible
 #1: NVIDIA GeForce GTX TITAN, compute cap.: 3.5, ECC:  no, stat:
  compatible
 
 
  James
 
 
  2013/11/4 Szilárd Páll pall.szil...@gmail.com

Re: [gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Szilárd Páll
On Tue, Nov 5, 2013 at 9:55 PM, Dwey Kauffman mpi...@gmail.com wrote:
 Hi Timo,

   Can you provide a benchmark with 1 Xeon E5-2680 and 1 Nvidia
 K20X GPGPU on the same test of 29420 atoms?

 Are these two GPU cards (within the same node) connected by an SLI (Scalable
 Link Interface)?

Note that SLI has no use for compute, only for graphics.

--
Szilárd

 Thanks,
 Dwey

 --
 View this message in context: 
 http://gromacs.5086.x6.nabble.com/Hardware-for-best-gromacs-performance-tp5012124p5012276.html
 Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Re: Gromacs-4.6 on two Titans GPUs

2013-11-05 Thread Dwey Kauffman
Hi Szilard,

   Thanks for your suggestions. I am indeed aware of this page. On an 8-core
AMD node with 1 GPU, I am very happy with its performance. See below. My
intention is to obtain an even better one because we have multiple nodes.

### 8-core AMD with 1 GPU
Force evaluation time GPU/CPU: 4.006 ms/2.578 ms = 1.554
For optimal performance this ratio should be close to 1!


NOTE: The GPU has 20% more load than the CPU. This imbalance causes
  performance loss, consider using a shorter cut-off and a finer PME
grid.

               Core t (s)   Wall t (s)      (%)
       Time:   216205.510    27036.812    799.7
                              7h30:36
                 (ns/day)    (hour/ns)
Performance:       31.956        0.751

### 8-core AMD with 2 GPUs

               Core t (s)   Wall t (s)      (%)
       Time:   178961.450    22398.880    799.0
                              6h13:18
                 (ns/day)    (hour/ns)
Performance:       38.573        0.622
Finished mdrun on node 0 Sat Jul 13 09:24:39 2013


However, in your case I suspect that the 
bottleneck is multi-threaded scaling on the AMD CPUs and you should 
probably decrease the number of threads per MPI rank and share GPUs 
between 2-4 ranks.


OK, but can you give an example of an mdrun command for an 8-core AMD with 2
GPUs?
I will try to run it again.
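
A hypothetical starting point for an 8-core node with 2 GPUs, following the
rank-sharing advice above (the counts are guesses to be benchmarked, not a
recommendation):

# four PP ranks, two per GPU, two OpenMP threads each (4 x 2 = 8 cores)
mdrun -ntmpi 4 -ntomp 2 -gpu_id 0011 -deffnm md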


Regarding scaling across nodes, you can't expect much from gigabit 
ethernet - especially not from the cheaper cards/switches, in my 
experience even reaction field runs don't scale across nodes with 10G 
ethernet if you have more than 4-6 ranks per node trying to 
communicate (let alone with PME). However, on infiniband clusters we 
have seen scaling to 100 atoms/core (at peak). 

From your comments, it sounds like a cluster of AMD cpus is difficult to
scale across nodes in our current setup.

Let's assume we install InfiniBand (20 or 40 Gb/s) in the same system of 16
nodes of 8-core AMD with 1 GPU each. Considering the same AMD system, what
is a good way to obtain better performance when we run a task across nodes?
In other words, what does mdrun_mpi look like?

Thanks,
Dwey




--
View this message in context: 
http://gromacs.5086.x6.nabble.com/Gromacs-4-6-on-two-Titans-GPUs-tp5012186p5012279.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Dwey Kauffman
Hi Szilard,

 Thanks.

From Timo's benchmark,
1  node        142 ns/day
2  nodes FDR14 218 ns/day
4  nodes FDR14 257 ns/day
8  nodes FDR14 326 ns/day


It looks like an InfiniBand network is required in order to scale up when
running a task across nodes. Is that correct?


Dwey


--
View this message in context: 
http://gromacs.5086.x6.nabble.com/Hardware-for-best-gromacs-performance-tp5012124p5012280.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


RE: [gmx-users] RE: Gibbs Energy Calculation and charges

2013-11-05 Thread Dallas Warren
Thank you for the pointer Michael.

couple-intramol = no
Here is what a diff of the gmxdump output from the two tpr files (normal and 
double charged) shows. In both cases, when:
lambda is set to 1.0 (atoms within both molecules will have zero charge)
lambda is set to 0.00 and 0.50, respectively (both will have the same 
charge)
there are the following differences:
functype[] = LJC14_Q, qi and qj are set to the original charges, not 
the ones scaled by lambda
functype[] = LJC_NB, qi and qj are set to the original charges, not the 
ones scaled by lambda
atom[] = q is set to the original charges, not the ones scaled by 
lambda
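
For reference, the comparison itself can be reproduced along these lines (file 
names hypothetical):

gmxdump -s normal.tpr > normal.txt
gmxdump -s double.tpr > double.txt
diff normal.txt double.txt | less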

So this explains why I do not see the two topologies giving the same value at 
the same atomic charges, since the topologies being simulated are not the same. 
 The 1-4 charge interactions are still at the original charges, as are the 1-5 
and beyond.

What is the reason that the charges are being left untouched here with changing 
lambda?  I can understand when we are dealing with LJ/van der Waals, since the 
1-4 are important for the proper dihedrals, but what is the reason for charges 
being left untouched?  Having thought this through, I have answered it myself: 
it is because here we are interested in turning off molecule-to-external-
environment interactions, moving the entire molecule from fully interacting 
with its external environment to non-interacting.  The molecule itself should 
be left alone.

Turning to the side issue of turning couple-intramol on -

couple-intramol = yes
diff of the equivalent files, there are the following differences:
functype[] = LJC14_Q, qi and qj are set to the original charges, not 
the ones scaled by lambda
atom[] = q is set to the original charges, not the ones scaled by 
lambda

This confirms what Michael mentioned earlier about couple-intramol only 
affecting those 1-5 and beyond, i.e. LJC_NB.

Which then begs the question: why does the value of dH/dl change so 
dramatically when this option is turned on, as I observed at 
http://ozreef.org/stuff/gromacs/couple-intramol.png ?  The only thing being 
changed is that LJC_NB is now being scaled with lambda.

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
+61 3 9903 9304
-
When the only tool you own is a hammer, every problem begins to resemble a 
nail. 


 -Original Message-
 From: gmx-users-boun...@gromacs.org [mailto:gmx-users-
 boun...@gromacs.org] On Behalf Of Michael Shirts
 Sent: Thursday, 31 October 2013 1:52 PM
 To: Discussion list for GROMACS users
 Subject: Re: [gmx-users] RE: Gibbs Energy Calculation and charges
 
 I likely won't have much time to look at it tonight, but you can see
 exactly what the option is doing to the topology.  run gmxdump on the
 tpr.  All of the stuff that couple-intramol does is in grompp, so the
 results will show up in the detailed listings of the interactions, and
 which ones have which values set for the A and B states.
 
 On Wed, Oct 30, 2013 at 5:36 PM, Dallas Warren
 dallas.war...@monash.edu wrote:
  Michael, thanks for taking the time to comment and have a look.
 
  The real issue I am having is a bit deeper into the topic than that,
 my last reply was just an observation on something else.  Will
 summarise what I have been doing etc.
 
  I have a molecule for which I am calculating the Gibbs energy of hydration
 and solvation (octanol).  In a second topology the only difference is
 that the atomic charges have been doubled.  Considering that charges
 are scaled linearly with lambda, the normal-charge dH/dl values from
 lambda 0 to 1 should reproduce those of the double-charged
 molecule from lambda 0.5 to 1.0.  Is that a correct interpretation?
 Over that range the only difference should be the charges of the atoms,
 and those charges will be identical.
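
 As a quick check of that interpretation: with linear scaling the normal
 topology has q(lambda) = (1 - lambda)*q0 and the doubled one has
 q'(lambda') = (1 - lambda')*2*q0. Setting q'(lambda') = q(lambda) gives

 (1 - lambda')*2 = 1 - lambda   =>   lambda' = (1 + lambda)/2

 so lambda = 0..1 with normal charges maps onto lambda' = 0.5..1.0 with
 doubled charges, as assumed.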
 
  I was using couple-intramol = no and the following are the results
 from those simulations.
 
  For the OE atom within the molecule, I have plotted the following
 graphs of dH/dl versus charge of that atom for both of the topologies.
  octanol - http://ozreef.org/stuff/octanol.gif
  water - http://ozreef.org/stuff/water.gif
  mdp file - http://ozreef.org/stuff/gromacs/mdout.mdp
 
  The mismatch between the two topologies is the real issue that I am
 having.  I was hoping to get the two to overlap.
 
  My conclusion based on this is that there is actually something else
 being changed with the topology by GROMACS when the simulations are
 being run.  The comments in the manual allude to that, but not entirely
 sure what is going on.
 
  From the manual:
 
 couple-intramol:
 
 no
  All intra-molecular non-bonded interactions for moleculetype
 couple-moltype are replaced by 

[gmx-users] RE: Gibbs Energy Calculation and charges

2013-11-05 Thread Dallas Warren
Thanks for the suggestion, Chris.  I had a quick look and can't easily see how 
to do this, but I think I am at a point now where it is not an issue and don't 
have to actually do it.

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
+61 3 9903 9304
-
When the only tool you own is a hammer, every problem begins to resemble a 
nail. 


 -Original Message-
 From: gmx-users-boun...@gromacs.org [mailto:gmx-users-
 boun...@gromacs.org] On Behalf Of Christopher Neale
 Sent: Saturday, 2 November 2013 3:50 AM
 To: gmx-users@gromacs.org
 Subject: [gmx-users] Gibbs Energy Calculation and charges
 
 Dear Dallas:
 
 Seems like you could test Michael's idea by removing all 1-4 NB
 interactions from your topology. It won't produce any biologically
 useful results, but might be a worthwhile check to see if indeed this
 is the issue.
 
 To do this, I figure you would set gen-pairs to no in the [ defaults
 ] directive of forcefield.itp, remove the [ pairtypes ] section from
 ffnonbonded.itp, and remove the [ pairs ] section from your molecular
 .itp file. (You can quickly check that the 1-4 energy is zero in all
 states to ensure that this works).
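 
  A minimal sketch of the first edit, assuming a GROMOS-style [ defaults ]
  line (the nbfunc/comb-rule/fudge values are placeholders; keep your force
  field's own values and only set gen-pairs to no):
 
  [ defaults ]
  ; nbfunc  comb-rule  gen-pairs  fudgeLJ  fudgeQQ
    1       2          no         1.0      1.0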
 
 If that gives you the result that you expect, then you could go on to
 explicitely state the 1-4 interactions for the A and B states (I
 presume that this is possible). Of course, you should be able to jump
 directly to this second test, but the first test might be useful
 because it rules out the possibility that you make a typo somewhere.
 
 Chris.
 
 -- original message --
 
 I think the grammar got a little garbled there, so I'm not sure quite
 what you are claiming.
 
  One important thing to remember: 1-4 interactions are treated as
 bonded interactions right now FOR COUPLE intramol (not for lambda
 dependence of the potential energy function), so whether
 couple-intramol is set to yes or no does not affect these interactions
 at all.  It only affects the nonbondeds with distances greater than
 1-5.  At least to me, this is nonintuitive (and we're coming up with a
 better scheme for 5.0), but might that explain what you are getting?
 
 On Tue, Oct 29, 2013 at 9:44 PM, Dallas Warren Dallas.Warren at
 monash.edu wrote:
  Just want this to make another pass, just in case those in the know
 missed it.
 
   Using couple-intramol = yes, the resulting dH/dl plot looks like at
  lambda = 1 it is actually equal to couple-intramol = no with lambda = 0.
 
  Should that be the case?
 
  Catch ya,
 
  Dr. Dallas Warren
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Re: gmx-users Digest, Vol 115, Issue 16

2013-11-05 Thread Stephanie Teich-McGoldrick
Message: 5
Date: Mon, 04 Nov 2013 13:32:52 -0500
From: Justin Lemkul jalem...@vt.edu
Subject: Re: [gmx-users] Analysis tools and triclinic boxes
To: Discussion list for GROMACS users gmx-users@gromacs.org
Message-ID: 5277e854.9000...@vt.edu
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi Justin,

Thanks for the response. My question was prompted by line 243 in
gmx_cluster.c which states /* Should use pbc_dx when analysing multiple
molecueles,but the box is not stored for every frame.*/ I just wanted to
verify that analysis tools are written for any box shape.

Cheers,
Stephanie



On 11/4/13 1:29 PM, Stephanie Teich-McGoldrick wrote:
 Dear all,

 I am using gromacs 4.6.3 with a triclinic box. Based on the manual and
mail
 list, it is my understanding that the default box shape in gromacs in a
 triclinic box. Can I assume that all the analysis tools also work for a
 triclinic box.


All analysis tools should work correctly for all box types.  Is there a
specific
issue you are having, or just speculation?

-Justin

--
==


Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


On Mon, Nov 4, 2013 at 12:28 PM, gmx-users-requ...@gromacs.org wrote:



 Today's Topics:

1. Re: Re: Installation Gromacs 4.5.7 on rocluster cluster   with
   centos 6.0 (Mark Abraham)
2. Analysis tools and triclinic boxes (Stephanie Teich-McGoldrick)
3. Group protein not found in indexfile (Steve Seibold)
4. Re: Group protein not found in indexfile (Justin Lemkul)
5. Re: Analysis tools and triclinic boxes (Justin Lemkul)
6. Re: TFE-water simulation (Archana Sonawani-Jagtap)
7. Re: Gentle heating with implicit solvent (Gianluca Interlandi)


 --

 Message: 1
 Date: Mon, 4 Nov 2013 17:05:36 +0100
 From: Mark Abraham mark.j.abra...@gmail.com
 Subject: Re: [gmx-users] Re: Installation Gromacs 4.5.7 on rocluster
 cluster with centos 6.0
 To: Discussion list for GROMACS users gmx-users@gromacs.org
 Message-ID:
 CAMNuMAQcWLcKA=GPG1Ewr8s4A=PoTGOSWaqa=
 s_uzbyw8uf...@mail.gmail.com
 Content-Type: text/plain; charset=ISO-8859-1

 On Mon, Nov 4, 2013 at 12:01 PM, bharat gupta bharat.85.m...@gmail.com
 wrote:

  Hi,
 
  I am trying to install gromacs 4.5.7 on a rocks cluster (6.0) and it works
  fine till the ./configure command, but I am getting an error at the make
  command:
 
  Error:
  
  [root@cluster gromacs-4.5.7]# make
 

 There is no need to run make as root - doing so guarantees you have almost
 no knowledge of the final state of your entire machine.


  /bin/sh ./config.status --recheck
  running CONFIG_SHELL=/bin/sh /bin/sh ./configure  --enable-mpi
  LDFLAGS=-L/opt/rocks/lib CPPFLAGS=-I/opt/rocks/include  --no-create
  --no-recursion
  checking build system type... x86_64-unknown-linux-gnu
  checking host system type... x86_64-unknown-linux-gnu
  ./configure: line 2050: syntax error near unexpected token `tar-ustar'
  ./configure: line 2050: `AM_INIT_AUTOMAKE(tar-ustar)'
  make: *** [config.status] Error 2
 

 Looks like the system has an archaic autotools setup. Probably you can
 comment out the line with tar-ustar from the original configure script, or
 remove tar-ustar. Or use the CMake build.


 
 
  I have another query regarding the gromacs that comes with the Rocks
  cluster distribution. The mdrun of that gromacs has been compiled without
  the MPI option. How can I recompile it with the MPI option? I need the
  ./configure file, which is not there in the installed gromacs folder of
  the rocks cluster ...
 

 The 4.5-era GROMACS installation instructions are up on the website.
 Whatever's distributed with Rocks is more-or-less irrelevant.

 Mark


 
 
  Thanks in advance for help
 
 
 
 
  Regards
  
  Bharat
  --
  gmx-users mailing listgmx-users@gromacs.org
  http://lists.gromacs.org/mailman/listinfo/gmx-users
  * Please search the archive at
  http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
  * Please don't post (un)subscribe requests to the list. Use the
  www interface or send it to gmx-users-requ...@gromacs.org.
  * Can't post? Read 

[gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread david.chalm...@monash.edu
Hi Szilárd and all,

Thanks very much for the information.  I am more interested in getting
single simulations to go as fast as possible (within reason!) rather than
overall throughput.  Would you expect that the more expensive dual
Xeon/Titan systems would perform better in this respect? 

Cheers

David

--
View this message in context: 
http://gromacs.5086.x6.nabble.com/Hardware-for-best-gromacs-performance-tp5012124p5012283.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
--
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Re: Hardware for best gromacs performance?

2013-11-05 Thread Mark Abraham
Yes, that has been true for GROMACS for a few years. Low-latency
communication is essential if you want a whole MD step to happen in around
1ms wall time.

Mark
On Nov 5, 2013 11:24 PM, Dwey Kauffman mpi...@gmail.com wrote:

 Hi Szilard,

  Thanks.

 From Timo's benchmark,
 1  node        142 ns/day
 2  nodes FDR14 218 ns/day
 4  nodes FDR14 257 ns/day
 8  nodes FDR14 326 ns/day


 It looks like an InfiniBand network is required in order to scale up when
 running a task across nodes. Is that correct?


 Dwey


 --
 View this message in context:
 http://gromacs.5086.x6.nabble.com/Hardware-for-best-gromacs-performance-tp5012124p5012280.html
 Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Diffusion/PBC

2013-11-05 Thread Trayder Thomas
Your best bet is probably to center everything on the receptor. That will
prevent jumping of the receptor only, which is hopefully all you need.
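
One possible invocation (the tpr/xtc and index file names are hypothetical;
pick the receptor group for centering and System for output):

trjconv -f traj.xtc -s topol.tpr -n index.ndx -center -pbc mol -o centered.xtc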

-Trayder


On Tue, Nov 5, 2013 at 7:14 PM, Tsjerk Wassenaar tsje...@gmail.com wrote:

 Hi Debashis,

 Makes sure that the anion and receptor are together in the reference
 structure you use for trjconv -pbc nojump

 Cheers,

 Tsjerk


 On Tue, Nov 5, 2013 at 8:12 AM, Debashis Sahu debashis.sah...@gmail.com
 wrote:

  Dear All,
I have an problem related to jumping trajectory. In my MD
  run, there is a receptor molecule which is binding with an halogen anion
 in
  water solvent. In the original trajectory, the binding between them looks
  fine but jumping present. To remove the jumping of the system from
  trajectory, I have used 'nojump' as discussed in the forum. Now I got a
  jump-free trajectory, but due to the diffusion here, I have observed that
  the anion and the receptor are far away from each other. I could not fix
  the problem. can any one suggest me?
  Thanks in advance.
  with regards,
  *Debashis Sahu*
  *Central Salt and Marine Chemical Research Institute*
  *Bhavnagar, Gujarat*
  *India, 364002.*
  --
  gmx-users mailing listgmx-users@gromacs.org
  http://lists.gromacs.org/mailman/listinfo/gmx-users
  * Please search the archive at
  http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
  * Please don't post (un)subscribe requests to the list. Use the
  www interface or send it to gmx-users-requ...@gromacs.org.
  * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 



 --
 Tsjerk A. Wassenaar, Ph.D.
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the
 www interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] g_lie and ligand only simulation

2013-11-05 Thread Kavyashree M
Dear users,

When the simulation was carried out with PME,
rcoulomb was set equal to rlist. But when I need to
do the ligand-water simulation without PME (with RF-0),
it requires rlist greater than rcoulomb by 0.1-0.3.
So if I rerun the protein-ligand-water simulation,
won't there be more differences in the energies?
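
For concreteness, a minimal sketch of the rerun step being discussed (file
names and the ligand energy-group name LIG are hypothetical):

grompp -f rf.mdp -c conf.gro -p topol.top -n index.ndx -o rerun.tpr
mdrun -s rerun.tpr -rerun traj.xtc -e rerun.edr
g_lie -f rerun.edr -ligand LIG -o lie.xvg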

Thank you
Regards
Kavya


On Sat, Nov 2, 2013 at 9:51 PM, Kavyashree M hmkv...@gmail.com wrote:

 Ok thank you. I thought it was for protein-ligand-water
 that needs to be rerun without PME.

 Thanks
 Regards
 Kavya



 On Sat, Nov 2, 2013 at 9:45 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 11/2/13 12:14 PM, Kavyashree M wrote:

 Sir,

 Thank you. Should the ligand-water MD be done without PME?


 I already answered this.  Please read my previous reply again.

 -Justin


  Thank you
 Regards

 Kavya


 On Sat, Nov 2, 2013 at 9:13 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 11/2/13 1:22 AM, Kavyashree M wrote:

  Dear Users,

 Its mentioned in the list that it would be
 wrong to use g_lie on a simulation which
 uses PME.

 So kindly suggest any other way available
 to get the free energy of ligand binding other
 using g_lie?


  The original simulation should be done with PME, then the energies
 recalculated using mdrun -rerun without PME.  More detailed methods are
 available in the list archive; this topic gets discussed a lot.


 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441

 ==
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at http://www.gromacs.org/
 Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the www
 interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


 --
 ==

 Justin A. Lemkul, Ph.D.
 Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441

 ==
 --
 gmx-users mailing listgmx-users@gromacs.org
 http://lists.gromacs.org/mailman/listinfo/gmx-users
 * Please search the archive at http://www.gromacs.org/
 Support/Mailing_Lists/Search before posting!
 * Please don't post (un)subscribe requests to the list. Use the www
 interface or send it to gmx-users-requ...@gromacs.org.
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists



-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists