[gmx-users] Fwd: [gmx-developers] Simulated annealing problem

2013-03-14 Thread David van der Spoel

This is a user problem, not a development problem.


 Original Message 
Subject: [gmx-developers] Simulated annealing problem
Date: Thu, 14 Mar 2013 10:15:39 +0530
From: Gaurav Jerath g.jer...@iitg.ernet.in
Reply-To: Discussion list for GROMACS development gmx-develop...@gromacs.org
To: gmx-develop...@gromacs.org

Hi,
I am trying to anneal two protein molecules.
The usual protocol for a MD simulation was followed.
But the problem is that at high temperatures the number of hydrogen
bonds increases instead of decreasing. The GROMOS43a1 force
field was used, and the mdp file for the simulation is shown below:


;
title   =  Yo
cpp =  /usr/bin/cpp
constraints =  all-bonds
integrator  =  md
dt  =  0.002    ; ps
nsteps  =  500  ; total 10ns.
;nstcomm =  1
nstxout =  250
nstvout =  1000
nstfout =  0
nstlog  =  100
nstenergy   =  100
nstlist =  10
ns_type =  grid
rlist   =  1.0
rcoulomb=  1.0
rvdw=  1.0
; Berendsen temperature coupling is on in two groups
tcoupl= V-rescale; modified Berendsen thermostat
tc-grps= Protein non-Protein    ; two coupling groups - more accurate
tau_t= 0.1  0.1  ; time constant, in ps
ref_t= 300  300  ; reference temperature, one for each group, in K
; Energy monitoring
energygrps  =  Protein  SOL
; Isotropic pressure coupling is now on
Pcoupl  =  berendsen
Pcoupltype  = isotropic
tau_p   =  2.75
compressibility =  4.5e-5
ref_p   =  1.0
; Generate velocities is off.
gen_vel =  no
gen_temp=  300.0
gen_seed=  173529
; SIMULATED ANNEALING CONTROL =
annealing   =  periodic periodic
annealing_npoints= 3 3
annealing_time  = 0 5000 1 0 5000 1
annealing_temp  = 300 500 300 300 500 300


Kindly help me, as I am unable to figure out whether there is a problem in my
parameters file.


-- 
gmx-developers mailing list
gmx-develop...@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-developers
Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-developers-requ...@gromacs.org.
-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] MD in Vacuum

2013-03-14 Thread Erik Marklund

Hi,

You also need to consider the ensemble you want to investigate. If you
simulate at constant energy you need a shorter timestep than you would
have in solution, and probably double precision. In the articles
I list below we used timesteps of 0.5 fs and 1 fs, respectively, and
double precision. Monitor the total energy and you will see it
drifting early on if, for example, your timestep is too long. You
could also try to apply the constraints more accurately than the default.
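As a rough illustration of the drift check, one can fit a least-squares slope to the total-energy series; the numbers below are invented, and in practice the series would come from the total-energy column extracted with g_energy:

```python
def drift_rate(times_ps, energies_kj):
    """Least-squares slope of total energy vs time (kJ/mol per ps)."""
    n = len(times_ps)
    mt = sum(times_ps) / n
    me = sum(energies_kj) / n
    num = sum((t - mt) * (e - me) for t, e in zip(times_ps, energies_kj))
    den = sum((t - mt) ** 2 for t in times_ps)
    return num / den

# Made-up series: energy creeping upward by 0.5 kJ/mol per ps.
times = [0.0, 1.0, 2.0, 3.0, 4.0]
energies = [-1000.0, -999.5, -999.0, -998.5, -998.0]
print(drift_rate(times, energies))  # -> 0.5
```

A slope close to zero over the early part of the run is what you want; a clear positive slope suggests the timestep or precision is inadequate.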


* Patriksson, A., Marklund, E., and Van der Spoel, D. (2007) Protein  
Structures under Electrospray Conditions. Biochemistry, 56(4):933pp
* Marklund, E. G., Larsson, S. D., Patriksson, A., Van der Spoel, D.,  
and Caleman, C. (2009) Structural stability of electrosprayed  
proteins: temperature and hydration effects. Physical Chemistry  
Chemical Physics, 11(36):8069pp


Modesto Orozco and coworkers have also simulated biomolecules in the
gas phase. I suggest having a look at their work as well.


Best,

On Mar 13, 2013, at 8:22 PM, Justin Lemkul wrote:




On 3/13/13 12:53 PM, Lara Bunte wrote:

Hello

In all my tutorials for MD they investigate a molecule in solution.
I want to do MD simulations in vacuum and I did not find a good
tutorial. So I want to ask you: what should I do for an MD simulation
in vacuum?




Consult the literature.  Tutorials will not cover every topic, and
since most MD force fields were designed for condensed-phase
simulations, it's awfully hard to explain to a new user why we're
making wild changes to normal settings and hoping that they
understand :)


I take a molecule, optimize the structure, generate a topology with
GROMACS, and then what? Do I also have to make a box? What is
the procedure?




Usually in vacuo simulations are done in the absence of periodicity  
and with infinite cutoffs.  In general, an approach like the  
following is used:


comm-mode = Angular    ; remove angular as well as linear COM motion
nstlist = 0            ; neighbor list is built once and kept
rlist = 0              ; with nstlist = 0 and pbc = no, 0 means infinite cut-offs
rvdw = 0
rcoulomb = 0
pbc = no               ; no periodicity
ns-type = simple
vdwtype = cutoff
coulombtype = cutoff

Check literature to be sure to establish a protocol with some basis  
in precedent.


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin






[gmx-users] position restraints

2013-03-14 Thread Shima Arasteh
Dear gmx users,

I want to use restraints on the backbone of my protein to keep its secondary
structure during the minimization and equilibration steps. To do so, I generated
backbone-restrain.itp and included it in the .top file. Next, I added define =
-DPOSRES to the minim.mdp file.
After minimization, when I checked the minimization output file, I saw that the
backbone coordinates of the input and output files are not exactly the same.
Is there any step I have missed in setting the restraints correctly?


Thanks in advance.

Sincerely,
Shima 


Re: [gmx-users] position restraints

2013-03-14 Thread Erik Marklund

Restraints allow, by definition, for slight deviations.

Erik





Re: [gmx-users] query for gromacs-4.5.4

2013-03-14 Thread Chaitali Chandratre
Hello Sir,

The job runs with 8 processes given 1, 2 or 8 nodes, but not with more than
that.
16 processes: segmentation fault.
For 32 processes it gives:
Fatal error: 467 particles communicated to PME node 4 are more than 2/3
times the cut-off out of the domain decomposition cell of their charge group
in dimension x.
This usually means that your system is not well equilibrated.

What can be the reason?

Thanks,
Chaitali

On Tue, Mar 12, 2013 at 3:50 PM, Mark Abraham mark.j.abra...@gmail.com wrote:

 It could be anything. But until we see some GROMACS diagnostic messages,
 nobody can tell.

 Mark

 On Tue, Mar 12, 2013 at 10:08 AM, Chaitali Chandratre 
 chaitujo...@gmail.com
  wrote:

  Sir,
 
  Thanks for your reply
  But the same script runs on some other cluster with approximately the same
  configuration, but not on the cluster on which I am doing the setup.
 
  Also, the job hangs after some 16000 steps rather than failing immediately.
  Might it be a problem with the configuration?
 
  Thanks...
 
  Chaitali
 
  On Tue, Mar 12, 2013 at 2:18 PM, Mark Abraham mark.j.abra...@gmail.com
  wrote:
 
   They're just MPI error messages and don't provide any useful GROMACS
   diagnostics. Look in the end of the .log file, stderr and stdout for
  clues.
  
   One possibility is that your user's system is too small to scale
   effectively. Below about 1000 atoms/core you're wasting your time
 unless
   you've balanced the load really well. There is a
   simulation-system-dependent point below which fatal GROMACS errors are
   assured.
  
   Mark
  
   On Tue, Mar 12, 2013 at 6:17 AM, Chaitali Chandratre
    chaitujo...@gmail.com wrote:
  
Hello Sir,
   
 Actually I have been given the task of setting up gromacs-4.5.4 on our
 cluster, which is being used by other users. I am not a gromacs user and am
 not aware of its internal details.
 I have got only a .tpr file from a user, and I need to test my installation
 using that .tpr file.
   
 It works fine for 2 nodes 8 processes, and for 1 node 8 processes.
 But when the number of processes is 16, it gives a segmentation fault,
 and when the number of processes is 32, it gives an error message like:
 HYD_pmcd_pmiserv_send_signal (./pm/pmiserv/pmiserv_cb.c:221):
 assert
(!closed) failed
 ui_cmd_cb (./pm/pmiserv/pmiserv_pmci.c:128): unable to send SIGUSR1
downstream
 HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77):
  callback
returned error status
 HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:388):
 error
waiting for event
[ main (./ui/mpich/mpiexec.c:718): process manager error waiting for
completion
   
 I am not clear whether the problem is in my installation or elsewhere.
   
Thanks and Regards,
   Chaitalij
   
On Wed, Mar 6, 2013 at 5:41 PM, Justin Lemkul jalem...@vt.edu
 wrote:
   


 On 3/6/13 4:20 AM, Chaitali Chandratre wrote:

 Dear Sir ,

  I am new to this installation and setup area. I need some information
  on the -stepout option for


 What more information do you need?


  mdrun_mpi, and also probable causes of a segmentation fault in
  gromacs-4.5.4 (my node has 64 GB of memory, running 16 processes, nsteps =
  2000)


 There are too many causes to name.  Please consult
 http://www.gromacs.org/Documentation/Terminology/Blowing_Up.
  If you need further help, please be more specific, including a
description
 of the system, steps taken to minimize and/or equilibrate it, and
 any
 complete .mdp file(s) that you are using.

 -Justin

 --

 Justin A. Lemkul, Ph.D.
 Research Scientist
 Department of Biochemistry
 Virginia Tech
 Blacksburg, VA
 jalemkul[at]vt.edu | (540) 231-9080
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

[gmx-users] Re: Gromacs-4.6 installation on cygwin problem

2013-03-14 Thread neshathaq
Mr M., thanks a lot for your help.
I will contact you if I run into any problems.



--
View this message in context: 
http://gromacs.5086.n6.nabble.com/Gromacs-4-6-installation-on-cygwin-problem-tp5006297p5006319.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


Re: [gmx-users] query for gromacs-4.5.4

2013-03-14 Thread Mark Abraham
One cannot run an arbitrary simulation on an arbitrary number of
processors. The particles in the simulation have to get mapped to
processors that handle them. This is done in small parcels called domains.
The atoms move between domains, and so the responsibility moves too. If the
velocities are high with respect to the size of the domains, the
responsibility might shift too far for the implementation to handle. This
is happening here. GROMACS does not handle arbitrary redistribution of
particles to processors because the algorithm to do that would be too
expensive for the normal case where the redistribution is more orderly.

So either the velocities are too high (the system is not well
equilibrated), and you are getting lucky on small numbers of nodes that
the domains are big enough to cope with the equilibration process, or you
are trying to use domains that are just too small for normal velocities.

We'd need to know what's in the .tpr to have an opinion on what a
reasonable maximum number of nodes might be, but a simulation with fewer
than a thousand atoms per processor starts having to pay attention to
these issues.
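The domain picture above can be illustrated with a toy sketch in plain Python; the positions, box size and domain counts are made up, and this shows only the failure mode, not how GROMACS actually implements domain decomposition:

```python
# Toy 1-D domain decomposition: each particle is owned by the domain
# whose slab of the box contains it. If a particle moves more than one
# domain width between redistributions, a code that only exchanges
# particles with neighboring domains loses track of it.
def assign_domains(positions, box_length, n_domains):
    width = box_length / n_domains
    return [min(int(x / width), n_domains - 1) for x in positions]

def max_domain_shift(old_positions, new_positions, box_length, n_domains):
    old = assign_domains(old_positions, box_length, n_domains)
    new = assign_domains(new_positions, box_length, n_domains)
    return max(abs(a - b) for a, b in zip(old, new))

old = [1.0, 4.0, 7.5]
new = [1.9, 4.9, 8.4]   # each particle moved 0.9 length units
print(max_domain_shift(old, new, box_length=8.0, n_domains=4))   # -> 0
print(max_domain_shift(old, new, box_length=8.0, n_domains=32))  # -> 3
```

The same displacement stays within one domain when the domains are large (few ranks) but spans several domains when they are small (many ranks), which is why the run survives on 8 processes but fails on 32.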

Mark

On Thu, Mar 14, 2013 at 1:25 PM, Chaitali Chandratre
chaitujo...@gmail.com wrote:

 Hello Sir,

  The job runs with 8 processes given 1, 2 or 8 nodes, but not with more than
  that.
  16 processes: segmentation fault.
  For 32 processes it gives:
  Fatal error: 467 particles communicated to PME node 4 are more than 2/3
  times the cut-off out of the domain decomposition cell of their charge group
  in dimension x.
  This usually means that your system is not well equilibrated.

  What can be the reason?

 Thanks,
 Chaitali


[gmx-users] template parallelization

2013-03-14 Thread subhadipdas
First, here is my system:
CentOS 64-bit
gcc: 4.4.6
fftw: 3.3.2 including SSE2

I am writing a code to find the number of five- and six-membered rings in
structure I of a hydrate. My code takes a long time to compile for my 500 ps
trajectory. So my query is: can the template file be made to run in parallel
fashion? If so, how?





[gmx-users] Re: TMD using free energy code

2013-03-14 Thread Landraille
I have done a test for a TMD between two conformations of a POPC in water
with this mdp file:

define = -DPOSRES_TMD   ; POSRES_TMD is defined in the top file and includes
                        ; only the heavy atoms of POPC
refcoord-scaling = all
cpp= /usr/bin/cpp
constraints =  hbonds
constraint_algorithm = LINCS
integrator  =  md
dt  =  0.002
nsteps  =  5
nstxout =  5
nstvout =  5
nstfout =  0
nstlog  =  500
nstenergy   =  500
nstxtcout   =  500
nstlist =  5
ns_type =  grid
coulombtype =  PME
rlist   =  1
rcoulomb=  1
rvdw=  1
; Berendsen temperature coupling 
Tcoupl  =  v-rescale
tau_t   =  0.1 0.1
tc_grps =  POPC SOL
ref_t   =  300 300
; Pressure coupling 
Pcoupl  =  Parrinello-Rahman
pcoupltype  =  isotropic
tau_p   =  0.5
ref_p   =  1
compressibility =  4.5e-05
unconstrained_start =  yes
gen_vel =  no
gen_temp=  300
gen_seed=  173529
optimize_fft=  yes
; free-energy
free_energy  = yes
init_lambda  = 0.0
delta_lambda   = 0

and these commands :
grompp -f TMD.mdp -c begin.gro -p topol.top -o tmd.tpr -n index.ndx -r
begin.gro -rb end.gro
mdrun -v -deffnm tmd

Nevertheless, it doesn't work, even with a non-zero value for delta_lambda. I
don't know which free-energy options I have to modify. Indeed, according to
the manual, the other options are tuned for FEP or TI.

Any help would be greatly appreciated.

Landraille 

PS: I use v4.5.5. Indeed, with v4.6, mdrun doesn't start and I get the
error message "Function type U-B not implemented in ip_pert".





--
View this message in context: 
http://gromacs.5086.n6.nabble.com/TMD-using-free-energy-code-tp5006260p5006321.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


Re: [gmx-users] template parallelization

2013-03-14 Thread Erik Marklund
Do you really mean compile time? If so, issue make -j X (where X is
the number of jobs used for building).


If you mean runtime, then the easiest thing is to split your trajectory
into parts, run the processes in parallel, and patch the results
together. That works if the calculations can be done on a per-frame basis
(i.e. an embarrassingly parallel problem). Next in line is to wrap
some performance-critical loop(s) in OpenMP pragmas and link to the
OpenMP libraries on your system. If you want others to benefit from
the code you should adhere to the parallelization framework in
gromacs, e.g. use gromacs' wrappers and preprocessor directives for
OpenMP and MPI magic. The latter may be more trouble than it's worth
if your analysis is only for a limited userbase. There is a framework
underway in gromacs for parallel analysis tools, but as far as I know
it's not finalized yet.
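The split-and-patch route can be sketched in plain Python; the "frames" and the per-frame analysis below are made-up stand-ins for real trajectory frames and the ring count:

```python
from multiprocessing import Pool

def analyze_frame(frame):
    """Stand-in per-frame analysis; a real version would count rings."""
    return sum(frame)  # toy per-frame result

if __name__ == "__main__":
    # Stand-in trajectory: one list of numbers per frame.
    frames = [[1, 2], [3, 4], [5, 6], [7, 8]]
    with Pool(processes=2) as pool:
        # Each frame is independent, so the work is embarrassingly parallel;
        # pool.map patches the per-frame results back together in order.
        results = pool.map(analyze_frame, frames)
    print(results)  # -> [3, 7, 11, 15]
```

The same pattern works with trajectory chunks instead of single frames, which keeps the per-task overhead low.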


Erik





[gmx-users] Re: template parallelization

2013-03-14 Thread subhadipdas
Sorry, but I can't understand the part about the parallelization framework in
gromacs. Can you please elaborate on this?





Re: [gmx-users] Re: template parallelization

2013-03-14 Thread Erik Marklund
Gromacs currently uses OpenMP and/or MPI, with its own wrappers, for
parallel computation. There is a general framework being developed for
parallelizing analysis tools, but I don't know the specifics and I
believe it's not ready for use yet.

Does that help you?

Erik





[gmx-users] Re: template parallelization

2013-03-14 Thread subhadipdas
Thanks for the reply . Looks like it is not a layman's problem but the reply
u sent me has given me the idea where to start . 





[gmx-users] bonding energy

2013-03-14 Thread fatemeh ramezani

I would like to check the interaction energy between an amino acid and a
ligand. After the simulation, I ran the following command:

g_dist_mpi -f x.trr -s x.tpr -n index.ndx -o pro.xvg

Then I chose the bond option (option 1) from the proposed options.
Is this the right way to find the bonding energy between two molecules?


thank you

Fatemeh Ramezani


Re: [gmx-users] bonding energy

2013-03-14 Thread Justin Lemkul






g_dist has nothing to do with bonds, so what you've posted doesn't make any
sense.

There will be no bonding energy between two species that do not have actual
chemical bonds defined in the topology, and even then, an .edr file (analyzed
using g_energy) does not have decomposed bonded terms for individual
interactions.  Gromacs just doesn't work like that.  You can get the sum of
nonbonded interactions by setting appropriate energygrps in the .mdp file, and
there are some rather complex methods for getting bonded energies (i.e. subset
the .tpr and .trr/.xtc, then use mdrun -rerun on only those atoms of interest),
but often such quantities are not particularly useful.
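For the nonbonded-sum route, a minimal sketch of the .mdp addition; the group name LIG is hypothetical, and both groups must exist (e.g. in an index file supplied to grompp):

```
; decompose short-range nonbonded energies between the two groups
energygrps = Protein LIG
```

After regenerating the .tpr, mdrun -rerun over the existing trajectory recomputes the energies, and g_energy then offers pairwise terms such as Coul-SR:Protein-LIG and LJ-SR:Protein-LIG.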


-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin




[gmx-users] polymer duplicate atoms

2013-03-14 Thread cqgzc
Hi
I used pdb2gmx to generate a topology file for the polymer f2311 (a
fluororubber), but the resulting GROMACS structure file (conf.gro) is
incorrect. There are 680 atoms in my input .pdb file, but 660 atoms are
deleted in the conf.gro file as duplicate atoms. The printed message is as
follows:
  http://gromacs.5086.n6.nabble.com/file/n5006328/info.jpg 
Residues Fre, Fbg and Fen denote the repeat unit, head and tail parts of the
polymer, respectively. How can I obtain a correct structure file (conf.gro)
and avoid the deletion of duplicate atoms? Any help is appreciated. The
attachment is my pdb file:
f2311.pdb http://gromacs.5086.n6.nabble.com/file/n5006328/f2311.pdb





RE: [gmx-users] polymer duplicate atoms

2013-03-14 Thread Dallas Warren
The residue number is not incrementing in your coordinate file (it stays at 1), 
so pdb2gmx treats the whole chain as a single residue; atoms whose names repeat 
within that one residue are then flagged as duplicates and removed.
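As an illustration, the residue numbers can be incremented programmatically 
before running pdb2gmx. The sketch below is hypothetical: it assumes each 
repeat unit of the polymer begins with an atom named C1 (substitute the actual 
first atom name of your monomer), and it rewrites only the residue-sequence 
column of ATOM/HETATM records:

```python
# Sketch: renumber PDB residues so each polymer repeat unit gets its own
# residue number.  FIRST_ATOM is a hypothetical placeholder for the first
# atom name of the repeat unit; change it to match your monomer.

FIRST_ATOM = "C1"

def renumber(lines, first_atom=FIRST_ATOM):
    out, resid = [], 0
    for line in lines:
        if line.startswith(("ATOM", "HETATM")):
            name = line[12:16].strip()   # atom name, PDB columns 13-16
            if name == first_atom:
                resid += 1               # a new repeat unit starts here
            # residue sequence number lives in PDB columns 23-26
            line = line[:22] + f"{resid:>4}" + line[26:]
        out.append(line)
    return out
```

Applied to f2311.pdb (read the file, pass its lines through renumber(), write 
the result), each Fre/Fbg/Fen unit would then carry a distinct residue number 
and pdb2gmx should no longer see duplicate atoms.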

Catch ya,

Dr. Dallas Warren
Drug Discovery Biology
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
+61 3 9903 9304
-
When the only tool you own is a hammer, every problem begins to resemble a 
nail. 


 -Original Message-
 From: gmx-users-boun...@gromacs.org [mailto:gmx-users-
 boun...@gromacs.org] On Behalf Of cqgzc
 Sent: Friday, 15 March 2013 12:17 PM
 To: gmx-users@gromacs.org
 Subject: [gmx-users] polymer duplicate atoms
 
 Hi
 I use the pdb2gmx to generate topology file of polymer
 f2311(fluororubber)
 with the incorrect information of GROMACS structural file (conf.gro).
 There
 are 680 atoms in my input (.pdb) file. But, 660 atoms are deleted in
 conf.gro file because of duplicate atoms. The printed message as
 follow:
   http://gromacs.5086.n6.nabble.com/file/n5006328/info.jpg
 Residues Fre, Fbg and Fen denote repeat unit., head and tail part of
 polymer, respectively. How can I obtain the correct file structure
 (conf.gro) to avoid deleting duplicate atoms. Any help with my best
 appreciation. Attachment is my pdb file.
 f2311.pdb http://gromacs.5086.n6.nabble.com/file/n5006328/f2311.pdb
 
 
 


[gmx-users] Salt bridge analysis tool

2013-03-14 Thread albielyons
Hi,

I am trying to analyse my trajectory for specific salt bridges, and the only
program I can find is g_saltbr with the -sep option. It reports every atom
pair that comes within the cutoff at any point in the simulation, regardless
of whether the atoms are charged hydrogen-bond partners and regardless of
whether the contact persists or is merely a glancing encounter.

As a result I am getting 80,000+ salt bridges in my protein, which is
obviously incorrect. I cannot find any other program to analyse salt
bridges; does anyone know of a better one, or of a way to make g_saltbr's
results more specific and manageable?

Thank you very much for your consideration.
Regards,



--
View this message in context: 
http://gromacs.5086.n6.nabble.com/Salt-bridge-analysis-tool-tp5006330.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.
-- 
gmx-users mailing listgmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] Salt bridge analysis tool

2013-03-14 Thread Justin Lemkul



On 3/14/13 10:14 PM, albielyons wrote:

Hi,

I am trying to analyse my trajectory for specific salt bridges and the only
program I can find is g_saltbr with the -sep command, which gives back all
atom pairs that come within the set cutoff at any point in simulation time,
whether they are charged and H-bond partners or not and regardless of
whether they are together long enough or just glancing past each other.

This means that I am getting 80 000+ salt bridges in my protein, which is
obviously incorrect. I cannot find any other program to analyse salt
bridges, does anyone know if there is a better one? Or if there is a way to
make g_saltbr's results more specific and managable.



tpbconv -zeroq can be used to zero out charges on groups that are not of 
interest, thereby reducing the number of charge pairs that can be detected.


g_dist can also be used to measure distances between specific groups that you 
are interested in.


Either approach requires creation of index groups that specify groups of 
interest.
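Both suggestions can be sketched as command sequences. This is a sketch only, 
assuming GROMACS 4.x tool names and hypothetical file and group names 
(topol.tpr, traj.xtc, an index file pairs.ndx with the residues of interest):

```shell
# Create index groups for the charged residues of interest (interactive).
make_ndx -f topol.tpr -o pairs.ndx

# Approach 1: use tpbconv -zeroq to zero the charges of an index group
# (select the group of atoms NOT of interest), so that g_saltbr can only
# detect pairs among the remaining charged atoms.
tpbconv -s topol.tpr -n pairs.ndx -zeroq -o zeroq.tpr
g_saltbr -f traj.xtc -s zeroq.tpr -sep

# Approach 2: track the distance between two specific charged groups
# directly; a distance that stays below ~0.4 nm (a common heuristic)
# suggests a persistent salt bridge rather than a glancing contact.
g_dist -f traj.xtc -s topol.tpr -n pairs.ndx -o saltbridge_dist.xvg
```

The second approach also gives a time series, so transient encounters can be 
distinguished from stable salt bridges by inspecting the .xvg output.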

-Justin

--


Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

