Re: [gmx-users] REMD stall out

2020-02-21 Thread Daniel Burns
This was not actually the solution; I wanted to follow up in case someone
else is experiencing this problem. We are reinstalling the openmp version.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] REMD stall out

2020-02-20 Thread Daniel Burns
Hi again,

It seems that loading our openmp module was responsible for the issue the
whole time. When I submit the job loading only pmix and gromacs, replica
exchange proceeds.
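For anyone searching the archive later, a minimal sketch of a job script
along those lines (the pmix and gromacs module names come from this thread;
the node counts, replica count, and directory names are placeholders):

```shell
#!/bin/bash
#SBATCH -N 2
#SBATCH -n 8
# Load only what the MPI build of GROMACS needs; do NOT load the site
# openmp module that caused the stalls described above.
module purge
module load pmix gromacs
# One MPI rank per replica; rep0..rep7 are placeholder replica directories.
mpirun -np 8 gmx_mpi mdrun -multidir rep0 rep1 rep2 rep3 rep4 rep5 rep6 rep7 -replex 500
```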

Thank you,

Dan



Re: [gmx-users] REMD stall out

2020-02-17 Thread Daniel Burns
Thanks Mark and Szilard,

I forwarded Mark's suggestion to IT. I'll see what they have to say, then
try the simulation again and open an issue on redmine.

Thank you,

Dan



Re: [gmx-users] REMD stall out

2020-02-17 Thread Mark Abraham
Hi,

That could be caused by configuration of the parallel file system or MPI on
your cluster. If only one file descriptor is available per node to an MPI
job, then your symptoms are explained. Some kinds of compute jobs follow
such a model, so maybe someone optimized something for that.
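A quick way to probe that hypothesis (a sketch; the relevant limit is the
per-process open-file limit, and running it inside a job step, e.g. via
srun, shows what an MPI rank actually gets on a compute node):

```shell
# Print the soft and hard limits on open file descriptors as seen by
# this process; compare against (replicas per node) x (files per replica).
ulimit -Sn   # soft limit
ulimit -Hn   # hard limit (may print "unlimited")
```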

Mark



Re: [gmx-users] REMD stall out

2020-02-17 Thread Szilárd Páll
Hi Dan,

What you describe is not expected behavior, and it is something we should
look into.

What GROMACS version were you using? One thing that may help in diagnosing
the issue: try disabling replica exchange and running -multidir that way.
Does the simulation proceed?
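That diagnostic might look like this as a job script (a sketch; directory
names, rank counts, and thread counts are placeholders):

```shell
#!/bin/bash
#SBATCH -N 1
#SBATCH -n 6
# Same multi-simulation layout as the stalling REMD job, but with no
# -replex flag: if this runs, the per-replica setup is fine and the
# problem lies in the exchange path.
mpirun -np 6 gmx_mpi mdrun -multidir sim0 sim1 sim2 sim3 sim4 sim5 -ntomp 6
```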

Could you please open an issue on redmine.gromacs.org and upload the input
files needed to reproduce the problem, along with logs from runs that
showed the issue?

Cheers,
--
Szilárd



Re: [gmx-users] REMD stall out

2020-02-17 Thread Daniel Burns
Hi Szilard,

I've deleted all my output, but all writing to the log and console stops
around the step noting the domain decomposition (or another preliminary
task). It is the same with or without Plumed; plain TREMD with Gromacs was
the first thing to present this issue.

I've discovered that if each replica is assigned its own node, the
simulations proceed.  If I try to run several replicas on each node
(divided evenly), the simulations stall out before any trajectories get
written.

I have tried many different -np and -ntomp options, as well as several
slurm job submission scripts with different node/thread configurations,
but multiple simulations per node will not work. I need to be able to run
several replicas on the same node to get enough data, since it's hard to
get more than 8 nodes (and, as a result, replicas).
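For what it's worth, the packing arithmetic behind those attempts can be
sketched as follows (the replica, node, and core counts here are
hypothetical; only the relationship matters):

```shell
# Given R replicas packed evenly onto N nodes with C cores each, each
# replica (one MPI rank here) can use at most C / (R / N) OpenMP threads.
replicas=8; nodes=2; cores_per_node=36
per_node=$(( replicas / nodes ))          # replicas sharing one node
ntomp=$(( cores_per_node / per_node ))    # -ntomp per replica
echo "replicas per node: $per_node, threads per replica: $ntomp"
```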

Thanks for your reply.

-Dan



Re: [gmx-users] REMD stall out

2020-02-17 Thread Szilárd Páll
Hi,

If I understand correctly, your jobs stall. What is in the log output?
What about the console? Does this happen without PLUMED?

--
Szilárd



[gmx-users] REMD stall out

2020-02-11 Thread Daniel Burns
Hi,

I continue to have trouble getting an REMD job to run.  It never makes it
to the point that it generates trajectory files but it never gives any
error either.

I have switched from a large TREMD with 72 replicas to the Plumed
Hamiltonian method with only 6 replicas.  Everything is now on one node and
each replica has 6 cores.  I've turned off the dynamic load balancing on
this attempt per the recommendation from the Plumed site.
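The setup described above might look something like this as a job script
(a sketch, not a verified recipe: the -hrex flag assumes a PLUMED-patched
mdrun, file and directory names are placeholders, and -dlb no turns off
dynamic load balancing as the Plumed site recommends):

```shell
#!/bin/bash
#SBATCH -N 1            # everything on one node
#SBATCH -n 6            # one MPI rank per replica
#SBATCH -c 6            # six cores per replica
# Hamiltonian replica exchange with PLUMED; rep0..rep5 each hold a tpr
# and a plumed.dat scaled for that replica.
mpirun -np 6 gmx_mpi mdrun -multidir rep0 rep1 rep2 rep3 rep4 rep5 \
    -plumed plumed.dat -replex 200 -hrex -dlb no -ntomp 6
```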

Any ideas on how to troubleshoot?

Thank you,

Dan


[gmx-users] REMD

2019-09-08 Thread Omkar Singh
Hello gmx users,
I am getting a "load imbalance" message during the REMD NVT equilibration
step. Can anyone help me with this issue?
Thanks


Re: [gmx-users] REMD-error

2019-09-04 Thread Bratin Kumar Das
Thank you for your email, sir.


Re: [gmx-users] REMD-error

2019-09-04 Thread Mark Abraham
Hi,

On Wed, 4 Sep 2019 at 10:47, Bratin Kumar Das <177cy500.bra...@nitk.edu.in>
wrote:

> Respected Mark Abraham,
>   The command-line and the job
> submission script is given below
>
> #!/bin/bash
> #SBATCH -n 130 # Number of cores
>

Per the docs, this is a guide to sbatch about how many (MPI) tasks you want
to run. It's not a core request.

#SBATCH -N 5   # no of nodes
>

This requires a certain number of nodes, so to implement both of your
instructions MPI has to start 26 tasks per node. That would make sense if
your nodes had a multiple of 26 cores; my guess, based on the error
message, is that they have a multiple of 16 cores instead. MPI saw that
you asked to allocate more tasks than there are cores and declined to set
a number of OpenMP threads per MPI task, so that fell back on a default,
which produced 16, and GROMACS can see that doesn't make sense.

If you want to use -N and -n, then you need to make a choice that makes
sense for the number of cores per node. Easier might be to use -n 130 and
-c 2 to express what I assume is your intent to have 2 cores per MPI task.
Now slurm+MPI can pass that message along properly to OpenMP.
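The arithmetic above can be checked mechanically; a small sketch (the
32-core figure is an assumption for illustration):

```shell
# Does the requested Slurm geometry fit? -n 130 tasks over -N 5 nodes
# means 26 tasks per node; with 2 cores per task that needs 52 cores,
# which oversubscribes a (hypothetical) 32-core node.
ntasks=130; nodes=5; cpus_per_task=2; cores_per_node=32
tasks_per_node=$(( (ntasks + nodes - 1) / nodes ))
echo "tasks per node: $tasks_per_node"
if [ $(( tasks_per_node * cpus_per_task )) -gt "$cores_per_node" ]; then
    echo "oversubscribed: ask for more nodes or fewer tasks"
fi
```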

Your other message about -ntomp can only have come from running gmx_mpi_d
-ntmpi, so just a typo we don't need to worry about further.

Mark


Re: [gmx-users] REMD-error

2019-09-04 Thread Bratin Kumar Das
Respected Mark Abraham,
The command line and the job submission script are given below:

#!/bin/bash
#SBATCH -n 130 # Number of cores
#SBATCH -N 5   # no of nodes
#SBATCH -t 0-20:00:00 # Runtime in D-HH:MM
#SBATCH -p cpu # Partition to submit to
#SBATCH -o hostname_%j.out # File to which STDOUT will be written
#SBATCH -e hostname_%j.err # File to which STDERR will be written
#loading gromacs
module load gromacs/2018.4
#specifying work_dir
WORKDIR=/home/chm_bratin/GMX_Projects/REMD/4wbu-REMD-inst-clust_1/stage-1


mpirun -np 130 gmx_mpi_d mdrun -v -s remd_nvt_next2.tpr -multidir equil0
equil1 equil2 equil3 equil4 equil5 equil6 equil7 equil8 equil9 equil10
equil11 equil12 equil13 equil14 equil15 equil16 equil17 equil18 equil19
equil20 equil21 equil22 equil23 equil24 equil25 equil26 equil27 equil28
equil29 equil30 equil31 equil32 equil33 equil34 equil35 equil36 equil37
equil38 equil39 equil40 equil41 equil42 equil43 equil44 equil45 equil46
equil47 equil48 equil49 equil50 equil51 equil52 equil53 equil54 equil55
equil56 equil57 equil58 equil59 equil60 equil61 equil62 equil63 equil64
-deffnm remd_nvt -cpi remd_nvt.cpt -append
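Incidentally, that long -multidir list can be generated with shell brace
expansion, assuming the directories are literally equil0 through equil64
(a convenience sketch, equivalent to the explicit list above):

```shell
# bash expands equil{0..64} to equil0 equil1 ... equil64 (65 entries),
# so the mdrun line shrinks to:
mpirun -np 130 gmx_mpi_d mdrun -v -s remd_nvt_next2.tpr \
    -multidir equil{0..64} -deffnm remd_nvt -cpi remd_nvt.cpt -append
```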

On Wed, Sep 4, 2019 at 2:13 PM Mark Abraham 
wrote:

> Hi,
>
> We need to see your command line in order to have a chance of helping.
>
> Mark
>
> On Wed, 4 Sep 2019 at 05:46, Bratin Kumar Das <177cy500.bra...@nitk.edu.in
> >
> wrote:
>
> > Dear all,
> > I am running one REMD simulation with 65 replicas. I am using
> > 130 cores for the simulation. I am getting the following error.
> >
> > Fatal error:
> > Your choice of number of MPI ranks and amount of resources results in
> using
> > 16
> > OpenMP threads per rank, which is most likely inefficient. The optimum is
> > usually between 1 and 6 threads per rank. If you want to run with this
> > setup,
> > specify the -ntomp option. But we suggest to change the number of MPI
> > ranks.
> >
> > When I use the -ntomp option, it throws another error:
> >
> > Fatal error:
> > Setting the number of thread-MPI ranks is only supported with thread-MPI
> > and
> > GROMACS was compiled without thread-MPI
> >
> >
> > while GROMACS is compiled with thread-MPI...
> >
> > Please help me in this regard.


Re: [gmx-users] REMD-error

2019-09-04 Thread Mark Abraham
Hi,

We need to see your command line in order to have a chance of helping.

Mark

On Wed, 4 Sep 2019 at 05:46, Bratin Kumar Das <177cy500.bra...@nitk.edu.in>
wrote:

> Dear all,
> I am running one REMD simulation with 65 replicas. I am using
> 130 cores for the simulation. I am getting the following error.
>
> Fatal error:
> Your choice of number of MPI ranks and amount of resources results in using
> 16
> OpenMP threads per rank, which is most likely inefficient. The optimum is
> usually between 1 and 6 threads per rank. If you want to run with this
> setup,
> specify the -ntomp option. But we suggest to change the number of MPI
> ranks.
>
> When I use the -ntomp option, it throws another error:
>
> Fatal error:
> Setting the number of thread-MPI ranks is only supported with thread-MPI
> and
> GROMACS was compiled without thread-MPI
>
>
> while GROMACS is compiled with thread-MPI...
>
> Please help me in this regard.


[gmx-users] REMD-error

2019-09-03 Thread Bratin Kumar Das
Dear all,
I am running one REMD simulation with 65 replicas. I am using
130 cores for the simulation. I am getting the following error.

Fatal error:
Your choice of number of MPI ranks and amount of resources results in using
16
OpenMP threads per rank, which is most likely inefficient. The optimum is
usually between 1 and 6 threads per rank. If you want to run with this
setup,
specify the -ntomp option. But we suggest to change the number of MPI ranks.

When I use the -ntomp option, it throws another error:

Fatal error:
Setting the number of thread-MPI ranks is only supported with thread-MPI and
GROMACS was compiled without thread-MPI


while GROMACS is compiled with thread-MPI...

Please help me in this regard.
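For the setup in this message (65 replicas on 130 cores), making the rank/thread arithmetic explicit avoids mdrun auto-choosing 16 OpenMP threads per rank; a sketch under assumed names (the equil0..equil64 directories and the double-precision binary are assumptions carried over from this thread, and the mpirun line is only shown as a comment):

```shell
#!/bin/bash
replicas=65
total_cores=130
ranks_per_replica=$(( total_cores / replicas ))
echo "MPI ranks per replica: $ranks_per_replica"   # 2
# Pin one OpenMP thread per rank so ranks * threads matches the
# allocation (not executed here):
#   mpirun -np $total_cores gmx_mpi_d mdrun -multidir equil{0..64} \
#          -ntomp 1 -replex 1000 -deffnm remd_nvt
```

With a real MPI build, `-ntomp` (not `-ntmpi`) is the flag that controls threads per rank; `-ntmpi` only applies to thread-MPI builds, which matches the second fatal error above.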


Re: [gmx-users] REMD

2019-08-01 Thread Bratin Kumar Das
Thanks for clarification.

On Thu, Aug 1, 2019 at 7:43 PM Justin Lemkul  wrote:

>
>
> On 7/31/19 1:44 AM, Bratin Kumar Das wrote:
> > Hi,
> >  I have some doubts regarding REMD simulation.
> >  1. In the .mdp file of each replica is it necessary to keep the
> > gen-temp constant?
> > For example: 300 K is the lowest temperature of the REMD simulation. Is it
> > necessary to keep gen_temp = 300 in each replica?
>
> No, because each subsystem needs to be equilibrated independently at the
> desired temperature.
>
> >  2. Is it necessary to provide the -replex flag during the equilibration
> > phase of an REMD simulation?
>
> No, because these simulations are independent of one another. Only
> during the actual REMD do you need -replex.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


Re: [gmx-users] REMD

2019-08-01 Thread Justin Lemkul




On 7/31/19 1:44 AM, Bratin Kumar Das wrote:

Hi,
 I have some doubts regarding REMD simulation.
 1. In the .mdp file of each replica is it necessary to keep the
gen-temp constant?
For example: 300 K is the lowest temperature of the REMD simulation. Is it
necessary to keep gen_temp = 300 in each replica?


No, because each subsystem needs to be equilibrated independently at the 
desired temperature.



 2. Is it necessary to provide the -replex flag during the equilibration
phase of an REMD simulation?


No, because these simulations are independent of one another. Only 
during the actual REMD do you need -replex.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] REMD

2019-07-30 Thread Bratin Kumar Das
Hi,
I have some doubts regarding REMD simulation.
1. In the .mdp file of each replica is it necessary to keep the
gen-temp constant?
For example: 300 K is the lowest temperature of the REMD simulation. Is it
necessary to keep gen_temp = 300 in each replica?
2. Is it necessary to provide the -replex flag during the equilibration
phase of an REMD simulation?
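The per-replica .mdp files implied by question 1 can be generated with a short loop; a sketch under assumed names (a template.mdp with a TEMP placeholder and a geometric ladder starting at 300 K with ratio 1.02 are illustrative assumptions; real spacings should come from an exchange-probability calculator):

```shell
#!/bin/bash
# Write one equilibration .mdp per replica, each at its own temperature.
# Minimal template; a real run would use the full equilibration .mdp.
cat > template.mdp <<'EOF'
ref_t    = TEMP
gen_temp = TEMP
EOF
for i in 0 1 2 3 4 5 6 7; do
    # Geometric ladder: T_i = 300 * 1.02^i (assumed spacing).
    T=$(awk -v i="$i" 'BEGIN { printf "%.2f", 300 * 1.02 ^ i }')
    mkdir -p equil$i
    sed "s/TEMP/$T/" template.mdp > equil$i/equil$i.mdp
    echo "equil$i: $T K"
done
```

Setting gen_temp to each replica's own ref_t, as sketched here, is consistent with Justin's point below that each subsystem is equilibrated independently at its target temperature.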


Re: [gmx-users] remd error

2019-07-29 Thread Bratin Kumar Das
Thank you

On Mon 29 Jul, 2019, 6:45 PM Justin Lemkul,  wrote:

>
>
> On 7/29/19 7:55 AM, Bratin Kumar Das wrote:
> > Hi Szilard,
> > Thank you for your reply. I rectified it as you said. For trial
> > purposes I took 8 or 16 nodes (-np 8) to test whether it runs
> > or not. I gave the following command to run REMD:
> > *mpirun -np 8 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd*
> > After giving the command it is giving following error
> > Program: gmx mdrun, version 2018.4
> > Source file: src/gromacs/utility/futil.cpp (line 514)
> > MPI rank:0 (out of 32)
> >
> > File input/output error:
> > remd0.tpr
> >
> > For more information and tips for troubleshooting, please check the
> GROMACS
> > website at http://www.gromacs.org/Documentation/Errors
> > I am not able to understand why this error occurs.
>
> The error means the input file (remd0.tpr) does not exist in the working
> directory.
>
> -Justin
>
> >
> > On Thu 25 Jul, 2019, 2:31 PM Szilárd Páll, 
> wrote:
> >
> >> This is an MPI / job scheduler error: you are requesting 2 nodes with
> >> 20 processes per node (=40 total), but starting 80 ranks.
> >> --
> >> Szilárd
> >>
> >> On Thu, Jul 18, 2019 at 8:33 AM Bratin Kumar Das
> >> <177cy500.bra...@nitk.edu.in> wrote:
> >>> Hi,
> >>> I am running remd simulation in gromacs-2016.5. After generating
> the
> >>> multiple .tpr file in each directory by the following command
> >>> *for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro
> -p
> >>> topol.top -o remd$i.tpr -maxwarn 1; cd ..; done*
> >>> I run *mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000
> >>> -reseed 175320 -deffnm remd_equil*
> >>> It is giving the following error
> >>> There are not enough slots available in the system to satisfy the 40
> >> slots
> >>> that were requested by the application:
> >>>gmx_mpi
> >>>
> >>> Either request fewer slots for your application, or make more slots
> >>> available
> >>> for use.
> >>>
> >>
> --
> >>
> --
> >>> There are not enough slots available in the system to satisfy the 40
> >> slots
> >>> that were requested by the application:
> >>>gmx_mpi
> >>>
> >>> Either request fewer slots for your application, or make more slots
> >>> available
> >>> for use.
> >>>
> >>
> --
> >>> I do not understand the error. Any suggestions will be highly
> >>> appreciated. The .mdp file and the qsub.sh file are attached below.
> >>>
> >>> qsub.sh...
> >>> #! /bin/bash
> >>> #PBS -V
> >>> #PBS -l nodes=2:ppn=20
> >>> #PBS -l walltime=48:00:00
> >>> #PBS -N mdrun-serial
> >>> #PBS -j oe
> >>> #PBS -o output.log
> >>> #PBS -e error.log
> >>> #cd /home/bratin/Downloads/GROMACS/Gromacs_fibril
> >>> cd $PBS_O_WORKDIR
> >>> module load openmpi3.0.0
> >>> module load gromacs-2016.5
> >>> NP=$(wc -l < $PBS_NODEFILE)
> >>> # mpirun --machinefile $PBS_PBS_NODEFILE -np $NP 'which gmx_mpi' mdrun
> -v
> >>> -s nvt.tpr -deffnm nvt
> >>> #/apps/gromacs-2016.5/bin/mpirun -np 8 gmx_mpi mdrun -v -s remd.tpr
> >> -multi
> >>> 8 -replex 1000 -deffnm remd_out
> >>> for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro
> -r
> >>> em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done
> >>>
> >>> for i in {0..7}; do cd equil${i}; mpirun -np 40 gmx_mpi mdrun -v -s
> >>> remd.tpr -multi 8 -replex 1000 -deffnm remd${i}_out ; cd ..; done
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>

Re: [gmx-users] remd error

2019-07-29 Thread Justin Lemkul



On 7/29/19 7:55 AM, Bratin Kumar Das wrote:

Hi Szilard,
Thank you for your reply. I rectified it as you said. For trial
purposes I took 8 or 16 nodes (-np 8) to test whether it runs
or not. I gave the following command to run REMD:
*mpirun -np 8 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd*
After giving the command it is giving following error
Program: gmx mdrun, version 2018.4
Source file: src/gromacs/utility/futil.cpp (line 514)
MPI rank:0 (out of 32)

File input/output error:
remd0.tpr

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
I am not able to understand why this error occurs.


The error means the input file (remd0.tpr) does not exist in the working 
directory.


-Justin



On Thu 25 Jul, 2019, 2:31 PM Szilárd Páll,  wrote:


This is an MPI / job scheduler error: you are requesting 2 nodes with
20 processes per node (=40 total), but starting 80 ranks.
--
Szilárd

On Thu, Jul 18, 2019 at 8:33 AM Bratin Kumar Das
<177cy500.bra...@nitk.edu.in> wrote:

Hi,
I am running remd simulation in gromacs-2016.5. After generating the
multiple .tpr file in each directory by the following command
*for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -p
topol.top -o remd$i.tpr -maxwarn 1; cd ..; done*
I run *mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000
-reseed 175320 -deffnm remd_equil*
It is giving the following error
There are not enough slots available in the system to satisfy the 40

slots

that were requested by the application:
   gmx_mpi

Either request fewer slots for your application, or make more slots
available
for use.


--
--

There are not enough slots available in the system to satisfy the 40

slots

that were requested by the application:
   gmx_mpi

Either request fewer slots for your application, or make more slots
available
for use.


--

I do not understand the error. Any suggestions will be highly
appreciated. The .mdp file and the qsub.sh file are attached below.

qsub.sh...
#! /bin/bash
#PBS -V
#PBS -l nodes=2:ppn=20
#PBS -l walltime=48:00:00
#PBS -N mdrun-serial
#PBS -j oe
#PBS -o output.log
#PBS -e error.log
#cd /home/bratin/Downloads/GROMACS/Gromacs_fibril
cd $PBS_O_WORKDIR
module load openmpi3.0.0
module load gromacs-2016.5
NP=$(wc -l < $PBS_NODEFILE)
# mpirun --machinefile $PBS_PBS_NODEFILE -np $NP 'which gmx_mpi' mdrun -v
-s nvt.tpr -deffnm nvt
#/apps/gromacs-2016.5/bin/mpirun -np 8 gmx_mpi mdrun -v -s remd.tpr

-multi

8 -replex 1000 -deffnm remd_out
for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -r
em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done

for i in {0..7}; do cd equil${i}; mpirun -np 40 gmx_mpi mdrun -v -s
remd.tpr -multi 8 -replex 1000 -deffnm remd${i}_out ; cd ..; done


--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==


Re: [gmx-users] remd error

2019-07-29 Thread Bratin Kumar Das
Hi Szilard,
   Thank you for your reply. I rectified it as you said. For trial
purposes I took 8 or 16 nodes (-np 8) to test whether it runs
or not. I gave the following command to run REMD:
*mpirun -np 8 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd*
After giving the command it is giving following error
Program: gmx mdrun, version 2018.4
Source file: src/gromacs/utility/futil.cpp (line 514)
MPI rank:0 (out of 32)

File input/output error:
remd0.tpr

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
 I am not able to understand why this error occurs.

On Thu 25 Jul, 2019, 2:31 PM Szilárd Páll,  wrote:

> This is an MPI / job scheduler error: you are requesting 2 nodes with
> 20 processes per node (=40 total), but starting 80 ranks.
> --
> Szilárd
>
> On Thu, Jul 18, 2019 at 8:33 AM Bratin Kumar Das
> <177cy500.bra...@nitk.edu.in> wrote:
> >
> > Hi,
> >I am running remd simulation in gromacs-2016.5. After generating the
> > multiple .tpr file in each directory by the following command
> > *for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -p
> > topol.top -o remd$i.tpr -maxwarn 1; cd ..; done*
> > I run *mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000
> > -reseed 175320 -deffnm remd_equil*
> > It is giving the following error
> > There are not enough slots available in the system to satisfy the 40
> slots
> > that were requested by the application:
> >   gmx_mpi
> >
> > Either request fewer slots for your application, or make more slots
> > available
> > for use.
> >
> --
> >
> --
> > There are not enough slots available in the system to satisfy the 40
> slots
> > that were requested by the application:
> >   gmx_mpi
> >
> > Either request fewer slots for your application, or make more slots
> > available
> > for use.
> >
> --
> > I do not understand the error. Any suggestions will be highly
> > appreciated. The .mdp file and the qsub.sh file are attached below.
> >
> > qsub.sh...
> > #! /bin/bash
> > #PBS -V
> > #PBS -l nodes=2:ppn=20
> > #PBS -l walltime=48:00:00
> > #PBS -N mdrun-serial
> > #PBS -j oe
> > #PBS -o output.log
> > #PBS -e error.log
> > #cd /home/bratin/Downloads/GROMACS/Gromacs_fibril
> > cd $PBS_O_WORKDIR
> > module load openmpi3.0.0
> > module load gromacs-2016.5
> > NP=$(wc -l < $PBS_NODEFILE)
> > # mpirun --machinefile $PBS_PBS_NODEFILE -np $NP 'which gmx_mpi' mdrun -v
> > -s nvt.tpr -deffnm nvt
> > #/apps/gromacs-2016.5/bin/mpirun -np 8 gmx_mpi mdrun -v -s remd.tpr
> -multi
> > 8 -replex 1000 -deffnm remd_out
> > for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -r
> > em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done
> >
> > for i in {0..7}; do cd equil${i}; mpirun -np 40 gmx_mpi mdrun -v -s
> > remd.tpr -multi 8 -replex 1000 -deffnm remd${i}_out ; cd ..; done

Re: [gmx-users] remd error

2019-07-25 Thread Szilárd Páll
This is an MPI / job scheduler error: you are requesting 2 nodes with
20 processes per node (=40 total), but starting 80 ranks.
--
Szilárd

On Thu, Jul 18, 2019 at 8:33 AM Bratin Kumar Das
<177cy500.bra...@nitk.edu.in> wrote:
>
> Hi,
>I am running remd simulation in gromacs-2016.5. After generating the
> multiple .tpr file in each directory by the following command
> *for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -p
> topol.top -o remd$i.tpr -maxwarn 1; cd ..; done*
> I run *mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000
> -reseed 175320 -deffnm remd_equil*
> It is giving the following error
> There are not enough slots available in the system to satisfy the 40 slots
> that were requested by the application:
>   gmx_mpi
>
> Either request fewer slots for your application, or make more slots
> available
> for use.
> --
> --
> There are not enough slots available in the system to satisfy the 40 slots
> that were requested by the application:
>   gmx_mpi
>
> Either request fewer slots for your application, or make more slots
> available
> for use.
> --
> I do not understand the error. Any suggestions will be highly
> appreciated. The .mdp file and the qsub.sh file are attached below.
>
> qsub.sh...
> #! /bin/bash
> #PBS -V
> #PBS -l nodes=2:ppn=20
> #PBS -l walltime=48:00:00
> #PBS -N mdrun-serial
> #PBS -j oe
> #PBS -o output.log
> #PBS -e error.log
> #cd /home/bratin/Downloads/GROMACS/Gromacs_fibril
> cd $PBS_O_WORKDIR
> module load openmpi3.0.0
> module load gromacs-2016.5
> NP=$(wc -l < $PBS_NODEFILE)
> # mpirun --machinefile $PBS_PBS_NODEFILE -np $NP 'which gmx_mpi' mdrun -v
> -s nvt.tpr -deffnm nvt
> #/apps/gromacs-2016.5/bin/mpirun -np 8 gmx_mpi mdrun -v -s remd.tpr -multi
> 8 -replex 1000 -deffnm remd_out
> for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -r
> em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done
>
> for i in {0..7}; do cd equil${i}; mpirun -np 40 gmx_mpi mdrun -v -s
> remd.tpr -multi 8 -replex 1000 -deffnm remd${i}_out ; cd ..; done

[gmx-users] remd error

2019-07-18 Thread Bratin Kumar Das
Hi,
   I am running remd simulation in gromacs-2016.5. After generating the
multiple .tpr file in each directory by the following command
*for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -p
topol.top -o remd$i.tpr -maxwarn 1; cd ..; done*
I run *mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000
-reseed 175320 -deffnm remd_equil*
It is giving the following error
There are not enough slots available in the system to satisfy the 40 slots
that were requested by the application:
  gmx_mpi

Either request fewer slots for your application, or make more slots
available
for use.
--
--
There are not enough slots available in the system to satisfy the 40 slots
that were requested by the application:
  gmx_mpi

Either request fewer slots for your application, or make more slots
available
for use.
--
I do not understand the error. Any suggestions will be highly
appreciated. The .mdp file and the qsub.sh file are attached below.

qsub.sh...
#! /bin/bash
#PBS -V
#PBS -l nodes=2:ppn=20
#PBS -l walltime=48:00:00
#PBS -N mdrun-serial
#PBS -j oe
#PBS -o output.log
#PBS -e error.log
#cd /home/bratin/Downloads/GROMACS/Gromacs_fibril
cd $PBS_O_WORKDIR
module load openmpi3.0.0
module load gromacs-2016.5
NP=$(wc -l < $PBS_NODEFILE)
# mpirun --machinefile $PBS_PBS_NODEFILE -np $NP 'which gmx_mpi' mdrun -v
-s nvt.tpr -deffnm nvt
#/apps/gromacs-2016.5/bin/mpirun -np 8 gmx_mpi mdrun -v -s remd.tpr -multi
8 -replex 1000 -deffnm remd_out
for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -r
em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done

for i in {0..7}; do cd equil${i}; mpirun -np 40 gmx_mpi mdrun -v -s
remd.tpr -multi 8 -replex 1000 -deffnm remd${i}_out ; cd ..; done
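Szilárd's diagnosis of the "not enough slots" error in this thread is arithmetic: the PBS request caps the usable MPI slots at nodes × ppn. A sketch of the check, with the numbers from the #PBS lines above (the mpirun line is only shown as a comment):

```shell
#!/bin/bash
nodes=2
ppn=20
slots=$(( nodes * ppn ))
echo "available slots: $slots"   # 40, so -np 80 must fail
# With one rank per replica, 8 ranks fit easily (up to -np 40 with
# 5 ranks per replica would also fit the request), e.g.:
#   mpirun -np 8 gmx_mpi mdrun -v -multi 8 -replex 1000 -deffnm remd_equil
```

In general `-np` must be a multiple of the number of replicas and must not exceed nodes × ppn.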

Re: [gmx-users] REMD - subsystems not compatible

2019-04-24 Thread Per Larsson
Thanks Mark for reminding me about the existence of the log files. 
Problem solved, the difference is clearly indicated (number of atoms); my
stupid mistake.

Cheers
/Per



> On 24 Apr 2019, at 16:51, Mark Abraham  wrote:
> 
> Hi,
> 
> Generally the REMD code has written some analysis to the log file above
> this error message that should provide context.
> 
> More generally, you can use gmx check to compare the .tpr files and observe
> that the differences between them are only what you expect.
> 
> Mark
> 
> On Wed, 24 Apr 2019 at 15:28, Per Larsson  wrote:
> 
>> Hi gmx-users,
>> 
>> I am trying to start a replica exchange simulation of a model peptide in
>> water, but can’t get it to run properly.
>> I have limited experience with REMD, so I thought I’d ask here about all
>> the rookie mistakes it is possible to make.
>> I have also seen the earlier discussions about the error message, but
>> those seemed to be related to restarts and/or continuations, rather than
>> not being able to run at all.
>> 
>> My gromacs version is 2016 (for compatibility reasons), and the exact
>> error message I get is this:
>> 
>> ---
>> Program: gmx mdrun, version 2016.5
>> Source file: src/gromacs/mdlib/main.cpp (line 115)
>> MPI rank:32 (out of 62)
>> 
>> Fatal error:
>> The 62 subsystems are not compatible
>> 
>> I followed Mark's tutorial on the GROMACS website and have a small
>> bash script that loops over all desired temperatures, runs equilibration,
>> etc.
>> I then start the simulation like this:
>> 
>> $MPIRUN $GMX mdrun $ntmpi -ntomp $ntomp -deffnm sim -replex 500 -multidir
>> ~pfs/ferring/gnrh_aa/dipep_remd/sim*
>> 
>> What could be the source of this incompatibility?
>> 
>> Many thanks
>> /Per
>> 
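Mark's `gmx check` suggestion can be scripted across the replica directories; a sketch that only prints the commands, so it runs without GROMACS installed (the per-directory file name sim.tpr is an assumption; in a real run the output should be inspected for lines reporting differences, such as atom counts):

```shell
#!/bin/bash
ref="sim0/sim.tpr"        # hypothetical reference replica
for i in 1 2 3; do        # extend the range to cover all replicas
    echo "gmx check -s1 $ref -s2 sim$i/sim.tpr"
done
```

Piping each printed command through the shell (or dropping the `echo`) runs the actual comparisons; only intended differences (e.g. reference temperature) should show up.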
>> 


[gmx-users] REMD - subsystems not compatible

2019-04-24 Thread Per Larsson
Hi gmx-users, 

I am trying to start a replica exchange simulation of a model peptide in water, 
but can’t get it to run properly. 
I have limited experience with REMD, so I thought I’d ask here about all the
rookie mistakes it is possible to make.
I have also seen the earlier discussions about the error message, but those 
seemed to be related to restarts and/or continuations, rather than not being 
able to run at all. 

My gromacs version is 2016 (for compatibility reasons), and the exact error 
message I get is this:

---
Program: gmx mdrun, version 2016.5
Source file: src/gromacs/mdlib/main.cpp (line 115)
MPI rank:32 (out of 62)

Fatal error:
The 62 subsystems are not compatible

I followed Mark's tutorial on the GROMACS website and have a small bash script
that loops over all desired temperatures, runs equilibration, etc.
I then start the simulation like this:

$MPIRUN $GMX mdrun $ntmpi -ntomp $ntomp -deffnm sim -replex 500 -multidir 
~pfs/ferring/gnrh_aa/dipep_remd/sim* 

What could be the source of this incompatibility?

Many thanks
/Per



Re: [gmx-users] REMD Plots

2019-01-12 Thread Shan Jayasinghe
Hi Joel,

Thank you very much.



On Wed, Jan 9, 2019 at 3:27 PM Joel Awuah  wrote:

> Hi Shan,
> I am not quite sure whether you want to plot replica mobility in
> temperature space for the 30 replicas. If that is the case, then you can
> use the data in the replica_temp.xvg file to plot replica index vs.
> REMD step. The 1st column in the file corresponds to the REMD steps and
> columns 2 to 31 correspond to the mobility of replicas 0 to 29.
>
> Hope this helps!
>
> cheers
> Joel
>
>
> On Wed, 9 Jan 2019 at 13:23, Shan Jayasinghe  >
> wrote:
>
> > Dear Gromacs users,
> >
> > How do we plot a graph for temperature vs swap step number using a REMD
> > simulation with 30 systems. I already generated the replica_temp.xvg and
> > replica_index.xvg files using demux.pl script.
> >
> > Thank you.
> >
> > Best Regards
> > Shan Jayasinghe
>
>
> --
> Joel Baffour Awuah
> PhD Candidate
> *Institute for Frontier Materials*
>
> *Deakin University*
> *Waurn Ponds, 3126 VIC*
> *Australia +61450070635*


-- 
Best Regards
Shan Jayasinghe


Re: [gmx-users] REMD Plots

2019-01-08 Thread Joel Awuah
Hi Shan,
I am not quite sure whether you want to plot the REMD mobility in
temperature space for the 30 replicas. If that is the case, you can use the
data in the replica_temperature.xvg file to plot replica index vs. REMD
steps. The 1st column in the file corresponds to the REMD steps, and the
2nd to 31st columns correspond to the mobility of replicas 0 to 29.

Hope this helps!

cheers
Joel
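The plot Joel describes only needs the columns of the .xvg file. A minimal pure-Python sketch of the parsing step, assuming the usual GROMACS .xvg layout (header lines starting with # or @, then whitespace-separated numeric columns); the file name and column meanings are taken from the reply above:

```python
def read_xvg(path):
    """Parse a GROMACS .xvg file into rows of floats, skipping #/@ header lines."""
    rows = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line[0] in "#@":
                continue
            rows.append([float(x) for x in line.split()])
    return rows

# Column 0 holds the REMD step (or time); columns 1..N hold the
# temperature-space position of replicas 0..N-1, per the reply above.
```

Each column i >= 1 can then be plotted against column 0 (e.g. with matplotlib) to show replica i-1 wandering through temperature space.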


On Wed, 9 Jan 2019 at 13:23, Shan Jayasinghe 
wrote:

> Dear Gromacs users,
>
> How do we plot a graph for temperature vs swap step number using a REMD
> simulation with 30 systems. I already generated the replica_temp.xvg and
> replica_index.xvg files using demux.pl script.
>
> Thank you.
>
> Best Regards
> Shan Jayasinghe


-- 
Joel Baffour Awuah
PhD Candidate
*Institute for Frontier Materials*

*Deakin University*
*Waurn Ponds, 3126 VIC*
*Australia +61450070635*


[gmx-users] REMD Plots

2019-01-08 Thread Shan Jayasinghe
Dear Gromacs users,

How do we plot a graph for temperature vs swap step number using a REMD
simulation with 30 systems. I already generated the replica_temp.xvg and
replica_index.xvg files using demux.pl script.

Thank you.

Best Regards
Shan Jayasinghe


[gmx-users] REMD Simulation for a system at different Concentration

2018-07-25 Thread Ligesh Lichu
Dear all,
   I am planning to run REMD on a system containing protein, urea, an
osmolyte, and water, and I want to generate the temperature replicas. The
REMD temperature generator asks only for the number of water molecules, but
my system contains more than just water. Is it fine to proceed anyway?

My next question: I am going to run REMD at two different concentrations.
The number of water molecules differs between the two systems, so the
temperature ladder generated for them will also differ. Can REMD still be
used to compare the same protein/urea/osmolyte/water system at different
concentrations when the replica temperatures do not match?

Can someone please help me with this? Thanks in advance.

-Ligesh


Re: [gmx-users] REMD Showing Zero Exchange Probability

2018-07-24 Thread Ligesh Lichu
Sorry for the delay. Thank you, Mark.

On Sat, Jul 21, 2018 at 2:09 AM, Mark Abraham 
wrote:

> Hi,
>
> You can't arbitrarily choose both the temperature range and number of
> replicas and get non-zero exchange probability. See
> https://pubs.acs.org/action/showCitFormats?doi=10.1021%2Fct800016r. For a
> given average exchange probability, choose a range and thus the number of
> replicas, or the number of replicas and thus the range.
>
> Mark
>
> On Fri, Jul 20, 2018, 10:39 Ligesh Lichu  wrote:
>
> > Dear all,
> > I have performed REMD for a system containing Protein, Reline, Urea
> and
> > Water in the temperature range 290 to 450 K consist of 16 replicas out of
> > 47 replicas generated by REMD temperature generator. But after the MD
> > simulation the exchange probability is zero. I have used position
> > restraints for reline, urea and protein. Is there any chance that
> position
> > restraints  cause the exchange probability to be zero?  I have one more
> > query that, the REMD temperature generator produced around 45 to 54
> > replicas for my system in the required temperature range. But I have only
> > 80 processors to do the job, So is it necessary to choose the consecutive
> > temperature replicas given by the REMD temperature generator or I can
> skip
> > some temperatures in between?
> >
> > If I am using the equation *Ti = T0 exp (k* i)*, what determines the
> value
> > of 'k'  how it affects the exchange probability? How can I choose the
> value
> > of 'k' for an arbitrary system?
> >
> > Thanks in advance...


Re: [gmx-users] REMD Showing Zero Exchange Probability

2018-07-20 Thread Mark Abraham
Hi,

You can't arbitrarily choose both the temperature range and number of
replicas and get non-zero exchange probability. See
https://pubs.acs.org/action/showCitFormats?doi=10.1021%2Fct800016r. For a
given average exchange probability, choose a range and thus the number of
replicas, or the number of replicas and thus the range.

Mark

On Fri, Jul 20, 2018, 10:39 Ligesh Lichu  wrote:

> Dear all,
> I have performed REMD for a system containing Protein, Reline, Urea and
> Water in the temperature range 290 to 450 K consist of 16 replicas out of
> 47 replicas generated by REMD temperature generator. But after the MD
> simulation the exchange probability is zero. I have used position
> restraints for reline, urea and protein. Is there any chance that position
> restraints  cause the exchange probability to be zero?  I have one more
> query that, the REMD temperature generator produced around 45 to 54
> replicas for my system in the required temperature range. But I have only
> 80 processors to do the job, So is it necessary to choose the consecutive
> temperature replicas given by the REMD temperature generator or I can skip
> some temperatures in between?
>
> If I am using the equation *Ti = T0 exp (k* i)*, what determines the value
> of 'k'  how it affects the exchange probability? How can I choose the value
> of 'k' for an arbitrary system?
>
> Thanks in advance...


Re: [gmx-users] REMD Showing Zero Exchange Probability

2018-07-20 Thread Abhishek Acharya
Hello,

The REMD generator provides an estimate of the number of replicas that may
be necessary (based on the system size) for performing replica exchange
properly. Since you got 47 replicas, you could experiment with replica
counts around that range; 16 replicas may simply be too few for your system.
You can check whether the energy distributions obtained from neighbouring
replicas overlap properly.

If your system is such that you can make do with sampling a smaller
subspace of it, then perhaps REST2 (Replica Exchange with Solute Scaling,
IIRC) may be helpful, although I haven't come across many recent articles
that use it. I would also suggest exploring other sampling methods to see
if they can be adapted to your problem of interest.

Best Regards,
Abhishek


On Fri, Jul 20, 2018 at 5:55 PM, Ligesh Lichu  wrote:

> I have tried an exchange every 2 ps. That is every 1000 steps.
>
> On Fri, Jul 20, 2018 at 5:34 PM, Smith, Micholas D. 
> wrote:
>
> > How frequently are you trying to exchange?
> >
> > ===
> > Micholas Dean Smith, PhD. MRSC
> > Post-doctoral Research Associate
> > University of Tennessee/Oak Ridge National Laboratory
> > Center for Molecular Biophysics
> >
> > 
> > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> > gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Ligesh
> > Lichu 
> > Sent: Friday, July 20, 2018 4:39 AM
> > To: gmx-us...@gromacs.org
> > Subject: [gmx-users] REMD Showing Zero Exchange Probability
> >
> > Dear all,
> > I have performed REMD for a system containing Protein, Reline, Urea
> and
> > Water in the temperature range 290 to 450 K consist of 16 replicas out of
> > 47 replicas generated by REMD temperature generator. But after the MD
> > simulation the exchange probability is zero. I have used position
> > restraints for reline, urea and protein. Is there any chance that
> position
> > restraints  cause the exchange probability to be zero?  I have one more
> > query that, the REMD temperature generator produced around 45 to 54
> > replicas for my system in the required temperature range. But I have only
> > 80 processors to do the job, So is it necessary to choose the consecutive
> > temperature replicas given by the REMD temperature generator or I can
> skip
> > some temperatures in between?
> >
> > If I am using the equation *Ti = T0 exp (k* i)*, what determines the
> value
> > of 'k'  how it affects the exchange probability? How can I choose the
> value
> > of 'k' for an arbitrary system?
> >
> > Thanks in advance...


Re: [gmx-users] REMD Showing Zero Exchange Probability

2018-07-20 Thread Ligesh Lichu
I have tried an exchange every 2 ps. That is every 1000 steps.

On Fri, Jul 20, 2018 at 5:34 PM, Smith, Micholas D. 
wrote:

> How frequently are you trying to exchange?
>
> ===
> Micholas Dean Smith, PhD. MRSC
> Post-doctoral Research Associate
> University of Tennessee/Oak Ridge National Laboratory
> Center for Molecular Biophysics
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Ligesh
> Lichu 
> Sent: Friday, July 20, 2018 4:39 AM
> To: gmx-us...@gromacs.org
> Subject: [gmx-users] REMD Showing Zero Exchange Probability
>
> Dear all,
> I have performed REMD for a system containing Protein, Reline, Urea and
> Water in the temperature range 290 to 450 K consist of 16 replicas out of
> 47 replicas generated by REMD temperature generator. But after the MD
> simulation the exchange probability is zero. I have used position
> restraints for reline, urea and protein. Is there any chance that position
> restraints  cause the exchange probability to be zero?  I have one more
> query that, the REMD temperature generator produced around 45 to 54
> replicas for my system in the required temperature range. But I have only
> 80 processors to do the job, So is it necessary to choose the consecutive
> temperature replicas given by the REMD temperature generator or I can skip
> some temperatures in between?
>
> If I am using the equation *Ti = T0 exp (k* i)*, what determines the value
> of 'k'  how it affects the exchange probability? How can I choose the value
> of 'k' for an arbitrary system?
>
> Thanks in advance...


Re: [gmx-users] REMD Showing Zero Exchange Probability

2018-07-20 Thread Smith, Micholas D.
How frequently are you trying to exchange?

===
Micholas Dean Smith, PhD. MRSC
Post-doctoral Research Associate
University of Tennessee/Oak Ridge National Laboratory
Center for Molecular Biophysics


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Ligesh Lichu 

Sent: Friday, July 20, 2018 4:39 AM
To: gmx-us...@gromacs.org
Subject: [gmx-users] REMD Showing Zero Exchange Probability

Dear all,
I have performed REMD for a system containing Protein, Reline, Urea and
Water in the temperature range 290 to 450 K consist of 16 replicas out of
47 replicas generated by REMD temperature generator. But after the MD
simulation the exchange probability is zero. I have used position
restraints for reline, urea and protein. Is there any chance that position
restraints  cause the exchange probability to be zero?  I have one more
query that, the REMD temperature generator produced around 45 to 54
replicas for my system in the required temperature range. But I have only
80 processors to do the job, So is it necessary to choose the consecutive
temperature replicas given by the REMD temperature generator or I can skip
some temperatures in between?

If I am using the equation *Ti = T0 exp (k* i)*, what determines the value
of 'k'  how it affects the exchange probability? How can I choose the value
of 'k' for an arbitrary system?

Thanks in advance...


[gmx-users] REMD Showing Zero Exchange Probability

2018-07-20 Thread Ligesh Lichu
Dear all,
I have performed REMD on a system containing protein, reline, urea, and
water over the temperature range 290 to 450 K, using 16 of the 47 replicas
generated by the REMD temperature generator. After the MD simulation, the
exchange probability is zero. I used position restraints on the reline,
urea, and protein; is there any chance that the position restraints cause
the exchange probability to be zero? One more question: the REMD
temperature generator produced around 45 to 54 replicas for my system in
the required temperature range, but I only have 80 processors for the job.
Is it necessary to use the consecutive temperatures given by the generator,
or can I skip some temperatures in between?

If I am using the equation *Ti = T0 exp(k * i)*, what determines the value
of 'k', and how does it affect the exchange probability? How can I choose
the value of 'k' for an arbitrary system?

Thanks in advance...
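The equation in the question fixes k once the endpoints and the replica count are chosen: with T_0 = T_min and T_(N-1) = T_max, k = ln(T_max/T_min)/(N-1). A larger k means larger gaps between neighbouring temperatures and hence lower exchange probability, which is why the range and the number of replicas cannot both be chosen freely. A minimal sketch:

```python
import math

def temperature_ladder(t_min, t_max, n_replicas):
    """Geometric temperature ladder T_i = T0 * exp(k * i), with k fixed
    by the endpoints: k = ln(t_max / t_min) / (n_replicas - 1)."""
    k = math.log(t_max / t_min) / (n_replicas - 1)
    return [t_min * math.exp(k * i) for i in range(n_replicas)]
```

For example, temperature_ladder(290.0, 450.0, 16) spans 290-450 K with 16 replicas; note this spacing only fixes the geometry, while the resulting exchange probability still depends on the system size (see the temperature-generator reference cited elsewhere in the thread).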


[gmx-users] REMD

2018-06-08 Thread Eric Smoll
Hello GROMACS users,

As far as I understand, increasing the number of random exchange attempts to
a large number (mdrun suggests N^3, where N is the number of replicas) moves
an REMD simulation from a neighbor-exchange procedure to a Gibbs exchange
procedure. Can anyone provide practical advice or references useful in
deciding which to use? Naively, I would guess that a Gibbs exchange
procedure would converge faster for an REMD equilibration with a large
number of replicas (~100). Is this usually true?

Best,
Eric


[gmx-users] REMD

2018-05-26 Thread Eric Smoll
Hello Gromacs Users,

I am interested in calculating the equilibrium distribution of molecular
structures at the vacuum-liquid interface of several different low vapor
pressure liquids. All of these liquids are very viscous at or near
room-temperature and I suspect that conformational barriers may inhibit
sampling at the vacuum-liquid interface. However, in NVT MD simulations,
these liquids increase fluidity at higher temperatures (400-500K) while
maintaining a fluid state and a reasonably well-defined vacuum-liquid
interface.

Can I use NVT REMD to efficiently overcome any kinetic trapping that might
be going on and obtain a true equilibrium distribution of molecular
structures at the vacuum-liquid interface? A superficial literature search
does not yield examples of NVT REMD on a liquid interface. I am curious if
there are issues or complications with this approach. Is there a better
alternative?

The manual states that "all possible pairs are tested for exchange" in
Gibbs REMD. Looking through the mdrun help output, it seems this option is
selected with the "-nex" flag. However, the help for this flag suggests
using N^3. Isn't something like N*(N-1)/2 more appropriate (where N is the
number of replicas)?

Thanks for the guidance!

Best,
Eric
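For scale, the two counts in the question differ enormously: N*(N-1)/2 counts the distinct unordered pairs, while N^3 counts random swap attempts. One plausible reading (my interpretation, not an authoritative statement of the mdrun implementation) is that attempts sample pairs with replacement, so far more attempts than distinct pairs are needed for the composed swaps to approach well-mixed, Gibbs-like sampling. The arithmetic:

```python
def distinct_pairs(n):
    """Number of distinct unordered replica pairs, n * (n - 1) / 2."""
    return n * (n - 1) // 2

def suggested_attempts(n):
    """mdrun's suggested number of random exchange attempts (-nex ~ N^3)."""
    return n ** 3

# For N = 100 replicas: 4950 distinct pairs vs. 1,000,000 suggested attempts.
```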


Re: [gmx-users] REMD temperature_space

2018-05-04 Thread Mark Abraham
Hi,

Unfortunately nobody has implemented demux for the energy files. You could
consider contributing a modification of demux.pl :-)

Mark
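For the trajectories (as opposed to the energy files), the usual workflow in this thread is to run demux.pl on one replica's md log to produce replica_temp.xvg and replica_index.xvg, then pass the index file to gmx trjcat -demux. The underlying computation is a per-frame permutation inversion; a pure-Python sketch of it, assuming each replica_temp row lists the temperature slot occupied by each replica (the two .xvg files are then permutation inverses of each other):

```python
def invert_demux(replica_temp_rows):
    """Invert replica->temperature rows into temperature->replica rows.

    Each input row is [time, slot_of_replica_0, slot_of_replica_1, ...];
    each output row is [time, replica_in_slot_0, replica_in_slot_1, ...].
    """
    out = []
    for row in replica_temp_rows:
        time, slots = row[0], row[1:]
        inv = [0] * len(slots)
        for replica, slot in enumerate(slots):
            inv[int(slot)] = replica
        out.append([time] + inv)
    return out
```

Extending demux to the energy files, as Mark notes, would mean applying the same per-frame permutation to the per-replica .edr data, which no shipped script currently does.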

On Fri, May 4, 2018 at 8:42 AM Sundari  wrote:

> Hello Guys,
>
> Kindly suggest me something about my doubt.
>
> On Thu, May 3, 2018 at 5:19 PM, Sundari  wrote:
>
> > Hello,
> >
> > I got the continuous trajectories by using demux. But now I am confused
> in
> > getting potential energy distribution of a single replica (similarly time
> > evolution of a replica (say replica_1) in temperature space).
> > I used edr file of original production data files, but I am not getting
> > what I want. I am attaching the temp.xvg file of one replica (say T= 315K
> > replica)
> >
> > Thank You..
> >
> > On Thu, May 3, 2018 at 5:02 PM, Mark Abraham 
> > wrote:
> >
> >> Hi,
> >>
> >> It sounds like you just want to use the original data, which you had
> >> before
> >> you ran the demux script.
> >>
> >> Mark
> >>
> >> On Thu, May 3, 2018 at 1:28 PM Sundari  wrote:
> >>
> >> > Dear gromacs users,
> >> >
> >> > can anyone please suggest me that how we  get the time evolution of a
> >> > replica (say replica_1) in temperature space and time courses of
> >> potential
> >> > energy of each replica(  one way is md.edr file??)
> >> > As according to GROMACS tutorial, I used demux.pl script and got two
> >> files
> >> > replica_index.xvg and replica_temp.xvg.  But I want to analyse a
> single
> >> > replica trajectory in all temperatures ( temp. on y-axis)
> >> >
> >> >
> >> > Thank you in advance..
> >> >
> >> > Sundari
> >
> >


Re: [gmx-users] REMD temperature_space

2018-05-04 Thread Sundari
Hello all,

Could anyone please advise on my question below?

On Thu, May 3, 2018 at 5:19 PM, Sundari  wrote:

> Hello,
>
> I got the continuous trajectories by using demux. But now I am confused in
> getting potential energy distribution of a single replica (similarly time
> evolution of a replica (say replica_1) in temperature space).
> I used edr file of original production data files, but I am not getting
> what I want. I am attaching the temp.xvg file of one replica (say T= 315K
> replica)
>
> Thank You..
>
> On Thu, May 3, 2018 at 5:02 PM, Mark Abraham 
> wrote:
>
>> Hi,
>>
>> It sounds like you just want to use the original data, which you had
>> before
>> you ran the demux script.
>>
>> Mark
>>
>> On Thu, May 3, 2018 at 1:28 PM Sundari  wrote:
>>
>> > Dear gromacs users,
>> >
>> > can anyone please suggest me that how we  get the time evolution of a
>> > replica (say replica_1) in temperature space and time courses of
>> potential
>> > energy of each replica(  one way is md.edr file??)
>> > As according to GROMACS tutorial, I used demux.pl script and got two
>> files
>> > replica_index.xvg and replica_temp.xvg.  But I want to analyse a single
>> > replica trajectory in all temperatures ( temp. on y-axis)
>> >
>> >
>> > Thank you in advance..
>> >
>> > Sundari
>
>


Re: [gmx-users] REMD temperature_space

2018-05-03 Thread Sundari
Hello,

I got the continuous trajectories by using demux, but now I am confused
about how to get the potential-energy distribution of a single replica (and
similarly the time evolution of a replica, say replica_1, in temperature
space). I used the .edr files from the original production runs, but I am
not getting what I want. I am attaching the temp.xvg file of one replica
(say the T = 315 K replica).

Thank You..

On Thu, May 3, 2018 at 5:02 PM, Mark Abraham 
wrote:

> Hi,
>
> It sounds like you just want to use the original data, which you had before
> you ran the demux script.
>
> Mark
>
> On Thu, May 3, 2018 at 1:28 PM Sundari  wrote:
>
> > Dear gromacs users,
> >
> > can anyone please suggest me that how we  get the time evolution of a
> > replica (say replica_1) in temperature space and time courses of
> potential
> > energy of each replica(  one way is md.edr file??)
> > As according to GROMACS tutorial, I used demux.pl script and got two
> files
> > replica_index.xvg and replica_temp.xvg.  But I want to analyse a single
> > replica trajectory in all temperatures ( temp. on y-axis)
> >
> >
> > Thank you in advance..
> >
> > Sundari

Re: [gmx-users] REMD temperature_space

2018-05-03 Thread Mark Abraham
Hi,

It sounds like you just want to use the original data, which you had before
you ran the demux script.

Mark

On Thu, May 3, 2018 at 1:28 PM Sundari  wrote:

> Dear gromacs users,
>
> could anyone please suggest how to get the time evolution of a replica
> (say replica_1) in temperature space, and the time course of the potential
> energy of each replica (is the md.edr file one way?)
> Following the GROMACS tutorial, I used the demux.pl script and got two
> files, replica_index.xvg and replica_temp.xvg. But I want to analyse a
> single replica's trajectory across all temperatures (temperature on the
> y-axis).
>
>
> Thank you in advance..
>
> Sundari


[gmx-users] REMD temperature_space

2018-05-03 Thread Sundari
Dear gromacs users,

Could anyone please suggest how to get the time evolution of a replica
(say replica_1) in temperature space, and the time course of the potential
energy of each replica (is the md.edr file one way?)
Following the GROMACS tutorial, I used the demux.pl script and got two files,
replica_index.xvg and replica_temp.xvg. But I want to analyse a single
replica's trajectory across all temperatures (temperature on the y-axis).


Thank you in advance..

Sundari
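
For what the poster asks (a single replica's temperature on the y-axis over time), one approach is to parse replica_temp.xvg, which demux.pl writes with time in the first column and, in the usual layout, one column per replica holding the index of the temperature that replica occupies. This is a minimal sketch under that assumed layout; verify the column convention against your own file, and the `ladder` list here is a hypothetical set of temperatures, not anything read from GROMACS output.

```python
# Sketch: trace one replica through temperature space from replica_temp.xvg.
# Assumption: time in column 0, then one column per replica giving the index
# of the temperature that replica occupies at that time (check your file!).

def read_xvg(text):
    """Parse .xvg content into rows of floats, skipping @/# header lines."""
    rows = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", "@")):
            continue
        rows.append([float(x) for x in line.split()])
    return rows

def replica_trace(rows, replica, ladder):
    """Return (time, temperature) pairs for one replica, mapping its
    temperature index through a user-supplied ladder of temperatures (K)."""
    return [(row[0], ladder[int(row[1 + replica])]) for row in rows]

if __name__ == "__main__":
    sample = "0.0 0 1 2\n2.0 1 0 2\n4.0 1 2 0\n"
    ladder = [300.0, 310.5, 321.3]  # hypothetical 3-temperature ladder
    print(replica_trace(read_xvg(sample), replica=0, ladder=ladder))
```

Plotting the second element against the first (e.g. with matplotlib) gives the temperature-versus-time picture described above; the potential energy per replica can be pulled separately from each replica's .edr file with gmx energy.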


Re: [gmx-users] REMD Simulation

2018-04-17 Thread Mark Abraham
Hi,

On Mon, Apr 16, 2018 at 10:21 AM ISHRAT JAHAN  wrote:

> Dear all,
> I am trying to do REMD simulations in different cosolvents. I have generated
> temperatures using a temperature-generating tool, but it gives a different
> number of temperatures in different solvents for an exchange probability of
> 0.25. Is it fair to do REMD with different numbers of replicas?


Sure. But first you should understand why the number of degrees of freedom
in the system is relevant to the temperature spacing required for constant
exchange probability. See, among other references,
https://pubs.acs.org/doi/abs/10.1021/ct800016r (shameless self-plug...)
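
To make the dependence concrete: if the heat capacity were roughly constant over the range, a roughly constant exchange probability would follow from geometrically spaced temperatures, with the common ratio shrinking (hence more replicas needed) as the system's degrees of freedom grow. The sketch below is only that back-of-the-envelope approximation, not the method of the generator referenced above.

```python
def geometric_ladder(t_min, t_max, n):
    """Geometric temperature ladder: constant ratio r between neighbours.
    Under a constant-heat-capacity assumption this gives roughly uniform
    exchange probability; bigger systems need r closer to 1 (more replicas)."""
    r = (t_max / t_min) ** (1.0 / (n - 1))
    return [t_min * r**i for i in range(n)]

# Hypothetical range: 8 replicas spanning 300-450 K.
ladder = geometric_ladder(300.0, 450.0, 8)
print([round(t, 1) for t in ladder])
```

A proper generator instead targets a chosen exchange probability from the system's composition, which is why it returns different replica counts for different solvents.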


> In what way will it
> affect the results?
>

What results are you seeking? Why would the number of replicas be a
relevant parameter determining the result?

Mark


> Thank you
> --
> Ishrat Jahan
> Research Scholar
> Department Of Chemistry
> A.M.U Aligarh


[gmx-users] REMD Simulation

2018-04-16 Thread ISHRAT JAHAN
Dear all,
I am trying to do REMD simulations in different cosolvents. I have generated
temperatures using a temperature-generating tool, but it gives a different
number of temperatures in different solvents for an exchange probability of
0.25. Is it fair to do REMD with different numbers of replicas? In what way
will it affect the results?
Thank you
-- 
Ishrat Jahan
Research Scholar
Department Of Chemistry
A.M.U Aligarh


Re: [gmx-users] REMD DLB bug

2018-02-12 Thread Szilárd Páll
Hi,

The fix will be released in the upcoming 2016.5 patch release (which
you can see in the Redmine issue page's "Target version" field, BTW).

Cheers,
--
Szilárd


On Mon, Feb 12, 2018 at 2:49 PM, Akshay  wrote:
> Hello All,
>
> I was running REMD simulations on Gromacs 2016.1 when my simulation crashed
> with the error
>
> Assertion failed:
> Condition: comm->cycl_n[ddCyclStep] > 0
> When we turned on DLB, we should have measured cycles
>
> I saw that there was a bug #2298 reported about this recently at
> https://redmine.gromacs.org/issues/2298. I wanted to know if this fix has
> been implemented in the latest 2018 or 2016.4 versions?
>
> Thanks,
> Akshay

[gmx-users] REMD DLB bug

2018-02-12 Thread Akshay
Hello All,

I was running REMD simulations on Gromacs 2016.1 when my simulation crashed
with the error

Assertion failed:
Condition: comm->cycl_n[ddCyclStep] > 0
When we turned on DLB, we should have measured cycles

I saw that there was a bug #2298 reported about this recently at
https://redmine.gromacs.org/issues/2298. I wanted to know if this fix has
been implemented in the latest 2018 or 2016.4 versions?

Thanks,
Akshay


Re: [gmx-users] REMD implicit solvent

2018-01-05 Thread Urszula Uciechowska

Hi,

Should I run it using mdrun_mpi?

best
Urszula

> Hello,
>
> In my experience, domain decomposition is not compatible with implicit
> solvent; you have to switch to particle decomposition for these simulations.
>
>
> All the best,
> Qinghua
>
> On 01/05/2018 12:40 PM, Urszula Uciechowska wrote:
>> Hi,
>>
>> I just ran a normal single-replica simulation. Now the error I get is:
>>
>> Program mdrun_mpi, VERSION 4.5.5
>> Source code file: domdec.c, line: 3266
>>
>> Software inconsistency error:
>> Inconsistent DD boundary staggering limits!
>> For more information and tips for troubleshooting, please check the
>> GROMACS
>> website at http://www.gromacs.org/Documentation/Errors
>>
>>
>> Any suggestions? What can I do to run it?
>>
>> Thanks
>> Ula
>>
>>> Hi,
>>>
>>> Did you try to debug your setup by running a normal single-replica
>>> simulation first?
>>>
>>> Mark
>>>
>>> On Fri, Jan 5, 2018 at 12:12 PM Urszula Uciechowska <
>>> urszula.uciechow...@biotech.ug.edu.pl> wrote:
>>>

 Dear gromacs users,

 I am trying to run REMD simulations using 4.5.5 version (implicit
 solvent). The MD procedure:

 pdb2gmx -f  prot.pdb -o prot.gro -q prot.pdb -ignh -ss.

 The input for minimization step:

 ; Run control parameters
 integrator   = cg
 nsteps   = 800
 vdwtype  = cut-off
 coulombtype  = cut-off
 ;cutoff-scheme= group
 pbc  = no
 periodic_molecules   = no
 nstlist  = 10
 ns_type  = grid
 rlist= 1.0
 rcoulomb = 1.6
 rvdw = 1.6
 comm-mode= Angular
 nstcomm  = 10
 ;
 ;Energy minimizing stuff
 ;
 emtol= 100.0
 nstcgsteep   = 2
 emstep   = 0.01
 ;
 ;Relative dielectric constant for the medium and the reaction field
 epsilon_r= 1
 epsilon_rf   = 1
 ;
 ; Implicit solvent
 ;
 implicit_solvent = GBSA
 gb_algorithm = OBC  ;Still  HCT   OBC
 nstgbradii   = 1.0
 rgbradii = 1.0  ; [nm] Cut-off for the calculation
 of
 the Born radii. Currently must be equal to rlist
 gb_epsilon_solvent   = 80   ; Dielectric constant for the
 implicit
 solvent
 gb_saltconc  = 0; Salt concentration for implicit
 solvent models, currently not used
 sa_algorithm = Ace-approximation
 sa_surface_tension   = 2.05016  ; Surface tension (kJ/mol/nm^2)
 for
 the SA (nonpolar surface) part of GBSA. The value -1 will set default
 value for Still/HCT/OBC GB-models.

 and it finished without errors.

 The problem is with equilibration step. The input file that I used is:

 ; MD CONTROL OPTIONS
 integrator  = md
 dt  = 0.002
 nsteps  = 5 ; 10 ns
 init_step   = 0; For exact run continuation or
 redoing part of a run
 comm-mode   = Angular  ; mode for center of mass
 motion
 removal
 nstcomm = 10   ; number of steps for center of
 mass motion removal

 ; OUTPUT CONTROL OPTIONS
 ; Output frequency for coords (x), velocities (v) and forces (f)
 nstxout  = 1000
 nstvout  = 1000
 nstfout  = 1000

 ; Output frequency for energies to log file and energy file
 nstlog   = 1000
 nstcalcenergy= 10
 nstenergy= 1000

 ; Neighbor searching and Electrostatics
 vdwtype  = cut-off
 coulombtype  = cut-off
 ;cutoff-scheme= group
 pbc  = no
 periodic_molecules   = no
 nstlist  = 5
 ns_type  = grid
 rlist= 1.0
 rcoulomb = 1.6
 rvdw = 1.0
 ; Selection of energy groups
 energygrps   = fixed not_fixed
 freezegrps   = fixed not_fixed
 freezedim= Y Y Y N N N

 ;Relative dielectric constant for the medium and the reaction field
 epsilon_r= 1
 epsilon_rf   = 1

 ; Temperature coupling
 tcoupl   = v-rescale
 tc_grps  = fixed not_fixed
 tau_t= 0.01 0.01
 ;nst_couple   = 5
 ref_t= 300.00 300.00

 ; Pressure coupling
 pcoupl   = no
 ;pcoupltype  = isotropic
 tau_p= 1.0
 ;compressibility  = 4.5e-5
 ref_p= 1.0
 gen_vel  = yes
 gen_temp = 

Re: [gmx-users] REMD implicit solvent

2018-01-05 Thread Qinghua Liao

Hello,

In my experience, domain decomposition is not compatible with implicit
solvent; you have to switch to particle decomposition for these simulations.


All the best,
Qinghua

On 01/05/2018 12:40 PM, Urszula Uciechowska wrote:

Hi,

I just ran a normal single-replica simulation. Now the error I get is:

Program mdrun_mpi, VERSION 4.5.5
Source code file: domdec.c, line: 3266

Software inconsistency error:
Inconsistent DD boundary staggering limits!
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


Any suggestions? What can I do to run it?

Thanks
Ula


Hi,

Did you try to debug your setup by running a normal single-replica
simulation first?

Mark

On Fri, Jan 5, 2018 at 12:12 PM Urszula Uciechowska <
urszula.uciechow...@biotech.ug.edu.pl> wrote:



Dear gromacs users,

I am trying to run REMD simulations using 4.5.5 version (implicit
solvent). The MD procedure:

pdb2gmx -f  prot.pdb -o prot.gro -q prot.pdb -ignh -ss.

The input for minimization step:

; Run control parameters
integrator   = cg
nsteps   = 800
vdwtype  = cut-off
coulombtype  = cut-off
;cutoff-scheme= group
pbc  = no
periodic_molecules   = no
nstlist  = 10
ns_type  = grid
rlist= 1.0
rcoulomb = 1.6
rvdw = 1.6
comm-mode= Angular
nstcomm  = 10
;
;Energy minimizing stuff
;
emtol= 100.0
nstcgsteep   = 2
emstep   = 0.01
;
;Relative dielectric constant for the medium and the reaction field
epsilon_r= 1
epsilon_rf   = 1
;
; Implicit solvent
;
implicit_solvent = GBSA
gb_algorithm = OBC  ;Still  HCT   OBC
nstgbradii   = 1.0
rgbradii = 1.0  ; [nm] Cut-off for the calculation
of
the Born radii. Currently must be equal to rlist
gb_epsilon_solvent   = 80   ; Dielectric constant for the
implicit
solvent
gb_saltconc  = 0; Salt concentration for implicit
solvent models, currently not used
sa_algorithm = Ace-approximation
sa_surface_tension   = 2.05016  ; Surface tension (kJ/mol/nm^2) for
the SA (nonpolar surface) part of GBSA. The value -1 will set default
value for Still/HCT/OBC GB-models.

and it finished without errors.

The problem is with equilibration step. The input file that I used is:

; MD CONTROL OPTIONS
integrator  = md
dt  = 0.002
nsteps  = 5 ; 10 ns
init_step   = 0; For exact run continuation or
redoing part of a run
comm-mode   = Angular  ; mode for center of mass motion
removal
nstcomm = 10   ; number of steps for center of
mass motion removal

; OUTPUT CONTROL OPTIONS
; Output frequency for coords (x), velocities (v) and forces (f)
nstxout  = 1000
nstvout  = 1000
nstfout  = 1000

; Output frequency for energies to log file and energy file
nstlog   = 1000
nstcalcenergy= 10
nstenergy= 1000

; Neighbor searching and Electrostatics
vdwtype  = cut-off
coulombtype  = cut-off
;cutoff-scheme= group
pbc  = no
periodic_molecules   = no
nstlist  = 5
ns_type  = grid
rlist= 1.0
rcoulomb = 1.6
rvdw = 1.0
; Selection of energy groups
energygrps   = fixed not_fixed
freezegrps   = fixed not_fixed
freezedim= Y Y Y N N N

;Relative dielectric constant for the medium and the reaction field
epsilon_r= 1
epsilon_rf   = 1

; Temperature coupling
tcoupl   = v-rescale
tc_grps  = fixed not_fixed
tau_t= 0.01 0.01
;nst_couple   = 5
ref_t= 300.00 300.00

; Pressure coupling
pcoupl   = no
;pcoupltype  = isotropic
tau_p= 1.0
;compressibility  = 4.5e-5
ref_p= 1.0
gen_vel  = yes
gen_temp = 300.00 300.00
gen_seed = -1
constraints  = h-bonds


; Implicit solvent
implicit_solvent = GBSA
gb_algorithm = Still ; HCT  ; OBC
nstgbradii   = 1.0
rgbradii = 1.0  ; [nm] Cut-off for the
calculation
of the Born radii. Currently must be equal to rlist
gb_epsilon_solvent   = 80   ; Dielectric constant for the
implicit solvent
gb_saltconc  = 0; Salt concentration for
implicit
solvent models, currently not used
sa_algorithm = Ace-approximation
sa_surface_tension   = 2.05016  ; Surface tension (kJ/mol/nm^2)
for the SA (nonpolar surface) part of GBSA. The value -1 will set
default

Re: [gmx-users] REMD implicit solvent

2018-01-05 Thread Urszula Uciechowska

Hi,

I just ran a normal single-replica simulation. Now the error I get is:

Program mdrun_mpi, VERSION 4.5.5
Source code file: domdec.c, line: 3266

Software inconsistency error:
Inconsistent DD boundary staggering limits!
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


Any suggestions? What can I do to run it?

Thanks
Ula

> Hi,
>
> Did you try to debug your setup by running a normal single-replica
> simulation first?
>
> Mark
>
> On Fri, Jan 5, 2018 at 12:12 PM Urszula Uciechowska <
> urszula.uciechow...@biotech.ug.edu.pl> wrote:
>
>>
>>
>> Dear gromacs users,
>>
>> I am trying to run REMD simulations using 4.5.5 version (implicit
>> solvent). The MD procedure:
>>
>> pdb2gmx -f  prot.pdb -o prot.gro -q prot.pdb -ignh -ss.
>>
>> The input for minimization step:
>>
>> ; Run control parameters
>> integrator   = cg
>> nsteps   = 800
>> vdwtype  = cut-off
>> coulombtype  = cut-off
>> ;cutoff-scheme= group
>> pbc  = no
>> periodic_molecules   = no
>> nstlist  = 10
>> ns_type  = grid
>> rlist= 1.0
>> rcoulomb = 1.6
>> rvdw = 1.6
>> comm-mode= Angular
>> nstcomm  = 10
>> ;
>> ;Energy minimizing stuff
>> ;
>> emtol= 100.0
>> nstcgsteep   = 2
>> emstep   = 0.01
>> ;
>> ;Relative dielectric constant for the medium and the reaction field
>> epsilon_r= 1
>> epsilon_rf   = 1
>> ;
>> ; Implicit solvent
>> ;
>> implicit_solvent = GBSA
>> gb_algorithm = OBC  ;Still  HCT   OBC
>> nstgbradii   = 1.0
>> rgbradii = 1.0  ; [nm] Cut-off for the calculation
>> of
>> the Born radii. Currently must be equal to rlist
>> gb_epsilon_solvent   = 80   ; Dielectric constant for the
>> implicit
>> solvent
>> gb_saltconc  = 0; Salt concentration for implicit
>> solvent models, currently not used
>> sa_algorithm = Ace-approximation
>> sa_surface_tension   = 2.05016  ; Surface tension (kJ/mol/nm^2) for
>> the SA (nonpolar surface) part of GBSA. The value -1 will set default
>> value for Still/HCT/OBC GB-models.
>>
>> and it finished without errors.
>>
>> The problem is with equilibration step. The input file that I used is:
>>
>> ; MD CONTROL OPTIONS
>> integrator  = md
>> dt  = 0.002
>> nsteps  = 5 ; 10 ns
>> init_step   = 0; For exact run continuation or
>> redoing part of a run
>> comm-mode   = Angular  ; mode for center of mass motion
>> removal
>> nstcomm = 10   ; number of steps for center of
>> mass motion removal
>>
>> ; OUTPUT CONTROL OPTIONS
>> ; Output frequency for coords (x), velocities (v) and forces (f)
>> nstxout  = 1000
>> nstvout  = 1000
>> nstfout  = 1000
>>
>> ; Output frequency for energies to log file and energy file
>> nstlog   = 1000
>> nstcalcenergy= 10
>> nstenergy= 1000
>>
>> ; Neighbor searching and Electrostatics
>> vdwtype  = cut-off
>> coulombtype  = cut-off
>> ;cutoff-scheme= group
>> pbc  = no
>> periodic_molecules   = no
>> nstlist  = 5
>> ns_type  = grid
>> rlist= 1.0
>> rcoulomb = 1.6
>> rvdw = 1.0
>> ; Selection of energy groups
>> energygrps   = fixed not_fixed
>> freezegrps   = fixed not_fixed
>> freezedim= Y Y Y N N N
>>
>> ;Relative dielectric constant for the medium and the reaction field
>> epsilon_r= 1
>> epsilon_rf   = 1
>>
>> ; Temperature coupling
>> tcoupl   = v-rescale
>> tc_grps  = fixed not_fixed
>> tau_t= 0.01 0.01
>> ;nst_couple   = 5
>> ref_t= 300.00 300.00
>>
>> ; Pressure coupling
>> pcoupl   = no
>> ;pcoupltype  = isotropic
>> tau_p= 1.0
>> ;compressibility  = 4.5e-5
>> ref_p= 1.0
>> gen_vel  = yes
>> gen_temp = 300.00 300.00
>> gen_seed = -1
>> constraints  = h-bonds
>>
>>
>> ; Implicit solvent
>> implicit_solvent = GBSA
>> gb_algorithm = Still ; HCT  ; OBC
>> nstgbradii   = 1.0
>> rgbradii = 1.0  ; [nm] Cut-off for the
>> calculation
>> of the Born radii. Currently must be equal to rlist
>> gb_epsilon_solvent   = 80   ; Dielectric constant for the
>> implicit solvent
>> gb_saltconc  = 0; Salt concentration for
>> implicit
>> solvent models, currently not used
>> sa_algorithm = 

Re: [gmx-users] REMD implicit solvent

2018-01-05 Thread Mark Abraham
Hi,

Did you try to debug your setup by running a normal single-replica
simulation first?

Mark

On Fri, Jan 5, 2018 at 12:12 PM Urszula Uciechowska <
urszula.uciechow...@biotech.ug.edu.pl> wrote:

>
>
> Dear gromacs users,
>
> I am trying to run REMD simulations using 4.5.5 version (implicit
> solvent). The MD procedure:
>
> pdb2gmx -f  prot.pdb -o prot.gro -q prot.pdb -ignh -ss.
>
> The input for minimization step:
>
> ; Run control parameters
> integrator   = cg
> nsteps   = 800
> vdwtype  = cut-off
> coulombtype  = cut-off
> ;cutoff-scheme= group
> pbc  = no
> periodic_molecules   = no
> nstlist  = 10
> ns_type  = grid
> rlist= 1.0
> rcoulomb = 1.6
> rvdw = 1.6
> comm-mode= Angular
> nstcomm  = 10
> ;
> ;Energy minimizing stuff
> ;
> emtol= 100.0
> nstcgsteep   = 2
> emstep   = 0.01
> ;
> ;Relative dielectric constant for the medium and the reaction field
> epsilon_r= 1
> epsilon_rf   = 1
> ;
> ; Implicit solvent
> ;
> implicit_solvent = GBSA
> gb_algorithm = OBC  ;Still  HCT   OBC
> nstgbradii   = 1.0
> rgbradii = 1.0  ; [nm] Cut-off for the calculation of
> the Born radii. Currently must be equal to rlist
> gb_epsilon_solvent   = 80   ; Dielectric constant for the implicit
> solvent
> gb_saltconc  = 0; Salt concentration for implicit
> solvent models, currently not used
> sa_algorithm = Ace-approximation
> sa_surface_tension   = 2.05016  ; Surface tension (kJ/mol/nm^2) for
> the SA (nonpolar surface) part of GBSA. The value -1 will set default
> value for Still/HCT/OBC GB-models.
>
> and it finished without errors.
>
> The problem is with equilibration step. The input file that I used is:
>
> ; MD CONTROL OPTIONS
> integrator  = md
> dt  = 0.002
> nsteps  = 5 ; 10 ns
> init_step   = 0; For exact run continuation or
> redoing part of a run
> comm-mode   = Angular  ; mode for center of mass motion
> removal
> nstcomm = 10   ; number of steps for center of
> mass motion removal
>
> ; OUTPUT CONTROL OPTIONS
> ; Output frequency for coords (x), velocities (v) and forces (f)
> nstxout  = 1000
> nstvout  = 1000
> nstfout  = 1000
>
> ; Output frequency for energies to log file and energy file
> nstlog   = 1000
> nstcalcenergy= 10
> nstenergy= 1000
>
> ; Neighbor searching and Electrostatics
> vdwtype  = cut-off
> coulombtype  = cut-off
> ;cutoff-scheme= group
> pbc  = no
> periodic_molecules   = no
> nstlist  = 5
> ns_type  = grid
> rlist= 1.0
> rcoulomb = 1.6
> rvdw = 1.0
> ; Selection of energy groups
> energygrps   = fixed not_fixed
> freezegrps   = fixed not_fixed
> freezedim= Y Y Y N N N
>
> ;Relative dielectric constant for the medium and the reaction field
> epsilon_r= 1
> epsilon_rf   = 1
>
> ; Temperature coupling
> tcoupl   = v-rescale
> tc_grps  = fixed not_fixed
> tau_t= 0.01 0.01
> ;nst_couple   = 5
> ref_t= 300.00 300.00
>
> ; Pressure coupling
> pcoupl   = no
> ;pcoupltype  = isotropic
> tau_p= 1.0
> ;compressibility  = 4.5e-5
> ref_p= 1.0
> gen_vel  = yes
> gen_temp = 300.00 300.00
> gen_seed = -1
> constraints  = h-bonds
>
>
> ; Implicit solvent
> implicit_solvent = GBSA
> gb_algorithm = Still ; HCT  ; OBC
> nstgbradii   = 1.0
> rgbradii = 1.0  ; [nm] Cut-off for the calculation
> of the Born radii. Currently must be equal to rlist
> gb_epsilon_solvent   = 80   ; Dielectric constant for the
> implicit solvent
> gb_saltconc  = 0; Salt concentration for implicit
> solvent models, currently not used
> sa_algorithm = Ace-approximation
> sa_surface_tension   = 2.05016  ; Surface tension (kJ/mol/nm^2)
> for the SA (nonpolar surface) part of GBSA. The value -1 will set default
> value for Still/HCT/OBC GB-models.
>
>
> mdrun -v -multidir eq_[12345678]
>
> The error that I obtained is:
>
> Fatal error:
> A charge group moved too far between two domain decomposition steps
> This usually means that your system is not well equilibrated
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
>
>
> I do 

[gmx-users] REMD implicit solvent

2018-01-05 Thread Urszula Uciechowska


Dear gromacs users,

I am trying to run REMD simulations using 4.5.5 version (implicit
solvent). The MD procedure:

pdb2gmx -f  prot.pdb -o prot.gro -q prot.pdb -ignh -ss.

The input for minimization step:

; Run control parameters
integrator   = cg
nsteps   = 800
vdwtype  = cut-off
coulombtype  = cut-off
;cutoff-scheme= group
pbc  = no
periodic_molecules   = no
nstlist  = 10
ns_type  = grid
rlist= 1.0
rcoulomb = 1.6
rvdw = 1.6
comm-mode= Angular
nstcomm  = 10
;
;Energy minimizing stuff
;
emtol= 100.0
nstcgsteep   = 2
emstep   = 0.01
;
;Relative dielectric constant for the medium and the reaction field
epsilon_r= 1
epsilon_rf   = 1
;
; Implicit solvent
;
implicit_solvent = GBSA
gb_algorithm = OBC  ;Still  HCT   OBC
nstgbradii   = 1.0
rgbradii = 1.0  ; [nm] Cut-off for the calculation of
the Born radii. Currently must be equal to rlist
gb_epsilon_solvent   = 80   ; Dielectric constant for the implicit
solvent
gb_saltconc  = 0; Salt concentration for implicit
solvent models, currently not used
sa_algorithm = Ace-approximation
sa_surface_tension   = 2.05016  ; Surface tension (kJ/mol/nm^2) for
the SA (nonpolar surface) part of GBSA. The value -1 will set default
value for Still/HCT/OBC GB-models.

and it finished without errors.

The problem is with equilibration step. The input file that I used is:

; MD CONTROL OPTIONS
integrator  = md
dt  = 0.002
nsteps  = 5 ; 10 ns
init_step   = 0; For exact run continuation or
redoing part of a run
comm-mode   = Angular  ; mode for center of mass motion
removal
nstcomm = 10   ; number of steps for center of
mass motion removal

; OUTPUT CONTROL OPTIONS
; Output frequency for coords (x), velocities (v) and forces (f)
nstxout  = 1000
nstvout  = 1000
nstfout  = 1000

; Output frequency for energies to log file and energy file
nstlog   = 1000
nstcalcenergy= 10
nstenergy= 1000

; Neighbor searching and Electrostatics
vdwtype  = cut-off
coulombtype  = cut-off
;cutoff-scheme= group
pbc  = no
periodic_molecules   = no
nstlist  = 5
ns_type  = grid
rlist= 1.0
rcoulomb = 1.6
rvdw = 1.0
; Selection of energy groups
energygrps   = fixed not_fixed
freezegrps   = fixed not_fixed
freezedim= Y Y Y N N N

;Relative dielectric constant for the medium and the reaction field
epsilon_r= 1
epsilon_rf   = 1

; Temperature coupling
tcoupl   = v-rescale
tc_grps  = fixed not_fixed
tau_t= 0.01 0.01
;nst_couple   = 5
ref_t= 300.00 300.00

; Pressure coupling
pcoupl   = no
;pcoupltype  = isotropic
tau_p= 1.0
;compressibility  = 4.5e-5
ref_p= 1.0
gen_vel  = yes
gen_temp = 300.00 300.00
gen_seed = -1
constraints  = h-bonds


; Implicit solvent
implicit_solvent = GBSA
gb_algorithm = Still ; HCT  ; OBC
nstgbradii   = 1.0
rgbradii = 1.0  ; [nm] Cut-off for the calculation
of the Born radii. Currently must be equal to rlist
gb_epsilon_solvent   = 80   ; Dielectric constant for the
implicit solvent
gb_saltconc  = 0; Salt concentration for implicit
solvent models, currently not used
sa_algorithm = Ace-approximation
sa_surface_tension   = 2.05016  ; Surface tension (kJ/mol/nm^2)
for the SA (nonpolar surface) part of GBSA. The value -1 will set default
value for Still/HCT/OBC GB-models.


mdrun -v -multidir eq_[12345678]

The error that I obtained is:

Fatal error:
A charge group moved too far between two domain decomposition steps
This usually means that your system is not well equilibrated
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


I do not know what is wrong. I checked the fatal error at
www.gromacs.org/Documentation/Errors. My system is OK; I tried increasing
the number of minimization steps, but it did not help. I have also checked
http://www.gromacs.org/Documentation/How-tos/REMD but cannot move forward
because of the equilibration step.

I appreciate any recommendation.

Thanks

Urszula



Urszula Uciechowska PhD
University of Gdansk and Medical University of Gdansk

Re: [gmx-users] REMD analysis of trajectories

2017-06-01 Thread Smith, Micholas D.
Ouyang,

Each replica corresponds to one temperature in GROMACS (unlike some other
software packages). If you want continuous trajectories (i.e. to follow the
motion of one replica through temperature exchanges), then you have to demux.
But in my experience the demuxed output is really only useful with the retired
g_kinetics tool.
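
The demux step mentioned above is normally demux.pl on the replica-0 log followed by trjcat -demux with the resulting replica_index.xvg; conceptually, that index file records, for each continuous replica, which temperature-ordered trajectory file holds its frame at each time. A minimal sketch of that bookkeeping, assuming the usual time-plus-one-column-per-replica layout and using hypothetical data:

```python
def demux_frames(index_rows, replica):
    """For one continuous replica, return (time, source trajectory index)
    pairs, i.e. which temperature-ordered .xtc its frame lives in at each
    time. Rows follow replica_index.xvg: time, then one column per replica."""
    return [(row[0], int(row[1 + replica])) for row in index_rows]

# Hypothetical 3-replica record: replica 0 starts in trajectory 0,
# then swaps into trajectory 1 and stays there.
rows = [[0.0, 0, 1, 2], [2.0, 1, 0, 2], [4.0, 1, 2, 0]]
print(demux_frames(rows, 0))  # [(0.0, 0), (2.0, 1), (4.0, 1)]
```

trjcat's -demux option does this frame stitching on real trajectories; the sketch is only meant to show what the index file encodes.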


===
Micholas Dean Smith, PhD.
Post-doctoral Research Associate
University of Tennessee/Oak Ridge National Laboratory
Center for Molecular Biophysics


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
<gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark Abraham 
<mark.j.abra...@gmail.com>
Sent: Thursday, June 01, 2017 10:53 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] REMD analysis of trajectories

Hi,

What did you learn from the first sentence of the link I gave you?

Mark

On Thu, Jun 1, 2017 at 3:20 PM YanhuaOuyang <15901283...@163.com> wrote:

> Do you mean that the original trajectories REMD generated belong to "one
> trajectory per temperature" (i.e. md2.xtc is a trajectory at 298 K)?
>
>
>
> Ouyang
>
>
>
>
> At 2017-06-01 21:00:52, "Mark Abraham" <mark.j.abra...@gmail.com> wrote:
> >Hi,
> >
> >That's what you already have. See
> >http://www.gromacs.org/Documentation/How-tos/REMD#Post-Processing
> >
> >Mark
> >
> >On Thu, Jun 1, 2017 at 5:37 AM YanhuaOuyang <15901283...@163.com> wrote:
> >
> >> Hi,
> >>I have run a 100ns-REMD of protein, which has 20 replicas (i.e.
> >> remd1.xtc, remd2.xtc, ..., remd20.xtc).  I want to analyze a trajectory
> at
> >> specific temperature  such as a trajectory at experiment temperature
> 298K
> >> rather than analyzing the continuous trajectory. I know GROMACS
> >> exchanges coordinates during an REMD run. Do I just analyze remd2.xtc of
> >> replica 2(T=298K) if I want to analyze a trajectory at 298K? Do I need
> to
> >> do something else on the trajectories to get a trajectory at specific
> >> temperature(i.e. 298K)?
> >>
> >> Best regards,
> >> Ouyang


Re: [gmx-users] REMD analysis of trajectories

2017-06-01 Thread Mark Abraham
Hi,

What did you learn from the first sentence of the link I gave you?

Mark

On Thu, Jun 1, 2017 at 3:20 PM YanhuaOuyang <15901283...@163.com> wrote:

> Do you mean that the original trajectories REMD generated belong to "one
> trajectory per temperature" (i.e. md2.xtc is a trajectory at 298 K)?
>
>
>
> Ouyang
>
>
>
>
> At 2017-06-01 21:00:52, "Mark Abraham"  wrote:
> >Hi,
> >
> >That's what you already have. See
> >http://www.gromacs.org/Documentation/How-tos/REMD#Post-Processing
> >
> >Mark
> >
> >On Thu, Jun 1, 2017 at 5:37 AM YanhuaOuyang <15901283...@163.com> wrote:
> >
> >> Hi,
> >>I have run a 100ns-REMD of protein, which has 20 replicas (i.e.
> >> remd1.xtc, remd2.xtc, ..., remd20.xtc).  I want to analyze a trajectory
> at
> >> specific temperature  such as a trajectory at experiment temperature
> 298K
> >> rather than analyzing the continuous trajectory. I know GROMACS
> >> exchanges coordinates during an REMD run. Do I just analyze remd2.xtc of
> >> replica 2(T=298K) if I want to analyze a trajectory at 298K? Do I need
> to
> >> do something else on the trajectories to get a trajectory at specific
> >> temperature(i.e. 298K)?
> >>
> >> Best regards,
> >> Ouyang


Re: [gmx-users] REMD analysis of trajectories

2017-06-01 Thread YanhuaOuyang
Do you mean that the original trajectories generated by REMD already correspond to "one trajectory per temperature" (i.e. that md2.xtc is a trajectory at 298 K)?



Ouyang




At 2017-06-01 21:00:52, "Mark Abraham"  wrote:
>Hi,
>
>That's what you already have. See
>http://www.gromacs.org/Documentation/How-tos/REMD#Post-Processing
>
>Mark
>
>On Thu, Jun 1, 2017 at 5:37 AM YanhuaOuyang <15901283...@163.com> wrote:
>
>> Hi,
>>I have run a 100ns-REMD of protein, which has 20 replicas (i.e.
>> remd1.xtc, remd2.xtc, ..., remd20.xtc).  I want to analyze a trajectory at
>> specific temperature  such as a trajectory at experiment temperature 298K
>> rather than analyzing the continuous trajectory. I have known GROMACS
>> exchange coordinate when REMD running. Do I just analyze remd2.xtc of
>> replica 2(T=298K) if I want to analyze a trajectory at 298K? Do I need to
>> do something else on the trajectories to get a trajectory at specific
>> temperature(i.e. 298K)?
>>
>> Best regards,
>> Ouyang


Re: [gmx-users] REMD analysis of trajectories

2017-06-01 Thread Mark Abraham
Hi,

That's what you already have. See
http://www.gromacs.org/Documentation/How-tos/REMD#Post-Processing

Mark

On Thu, Jun 1, 2017 at 5:37 AM YanhuaOuyang <15901283...@163.com> wrote:

> Hi,
>I have run a 100ns-REMD of protein, which has 20 replicas (i.e.
> remd1.xtc, remd2.xtc, ..., remd20.xtc).  I want to analyze a trajectory at
> specific temperature  such as a trajectory at experiment temperature 298K
> rather than analyzing the continuous trajectory. I have known GROMACS
> exchange coordinate when REMD running. Do I just analyze remd2.xtc of
> replica 2(T=298K) if I want to analyze a trajectory at 298K? Do I need to
> do something else on the trajectories to get a trajectory at specific
> temperature(i.e. 298K)?
>
> Best regards,
> Ouyang


[gmx-users] REMD analysis of trajectories

2017-05-31 Thread YanhuaOuyang
Hi,
   I have run a 100 ns REMD simulation of a protein with 20 replicas (i.e. remd1.xtc, 
remd2.xtc, ..., remd20.xtc). I want to analyze the trajectory at a specific 
temperature, such as the experimental temperature of 298 K, rather than the 
continuous (demuxed) trajectory. I know that GROMACS exchanges coordinates while 
REMD is running. If I want to analyze a trajectory at 298 K, do I just analyze 
remd2.xtc of replica 2 (T = 298 K), or do I need to do something else to the 
trajectories to obtain a trajectory at a specific temperature (i.e. 298 K)?

Best regards,
Ouyang
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] REMD analysis

2016-11-21 Thread Kalyanashis Jana
Dear all,

I have performed an REMD simulation of a protein-drug system (8350 + 32500
solvent atoms) using the gromacs-4.4.5 package, but I do not understand how to
analyze the REMD results. I used 10 replicas (298 K to 308.31 K, with r = 1.0038
as the common ratio of the geometric progression) and carried out a 5 ns
simulation. I would like to compare the thermodynamics of two drug molecules
using REMD. Can you please suggest how I can plot potential energy vs.
probability, or how I can obtain a free energy profile? What types of analysis
do I need to understand REMD?

Looking forward to hearing from you.

Thanks in advance,

Kalyanashis Jana
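The two quantities asked about above can be prototyped without GROMACS at all. Below is a minimal Python sketch (the function names and the toy Gaussian energies are illustrative assumptions, not GROMACS output) that (a) rebuilds a geometric temperature ladder like the 298 K / r = 1.0038 one described, and (b) turns a set of sampled potential energies into a probability histogram and the corresponding free-energy estimate F(E) = -kB*T*ln P(E):

```python
import math
import random

KB = 0.0083144621  # Boltzmann constant, kJ/(mol K)

def temperature_ladder(t_min, ratio, n):
    """Geometric temperature ladder: T_i = t_min * ratio**i."""
    return [t_min * ratio ** i for i in range(n)]

def free_energy_profile(energies, temperature, n_bins=20):
    """Estimate F(E) = -kB*T*ln P(E) from a list of potential energies.

    Returns (bin_center, free_energy) pairs for the non-empty bins.
    """
    e_min, e_max = min(energies), max(energies)
    width = (e_max - e_min) / n_bins or 1.0
    counts = [0] * n_bins
    for e in energies:
        i = min(int((e - e_min) / width), n_bins - 1)
        counts[i] += 1
    total = len(energies)
    profile = []
    for i, c in enumerate(counts):
        if c == 0:
            continue  # log(0) is undefined; skip unsampled bins
        center = e_min + (i + 0.5) * width
        profile.append((center, -KB * temperature * math.log(c / total)))
    return profile

if __name__ == "__main__":
    # 10 replicas, geometric spacing: 298.00 K up to ~308.35 K
    print([round(t, 2) for t in temperature_ladder(298.0, 1.0038, 10)])
    # toy energies standing in for per-frame potential energies
    random.seed(0)
    energies = [random.gauss(-5.0e4, 200.0) for _ in range(10000)]
    for e, f in free_energy_profile(energies, 298.0):
        print(f"{e:10.1f}  {f:8.3f}")
```

In practice the energies would come from the energy tool (g_energy in that GROMACS generation) applied to the fixed-temperature trajectory of interest; comparing two drug molecules then amounts to comparing these histograms and profiles.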


Re: [gmx-users] REMD ensemble of states

2016-11-15 Thread Abramyan, Tigran
Hi Mark,

I understand that at each replica the coordinates of the accepted states are saved, 
and that I can calculate different properties of 0.xtc in different programs, e.g., 
GROMACS, MDTraj, etc. But when it comes to visualization in VMD, for example, I 
cannot find a way in GROMACS to remove the jumps and superpose the coordinates 
saved in 0.xtc.

Tigran





From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
<gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark Abraham 
<mark.j.abra...@gmail.com>
Sent: Monday, November 14, 2016 1:20 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] REMD ensemble of states

Hi,

The ensemble at each temperature is intrinsically discontinuous. You can't
make it look continuous. What are you trying to do?

Mark

On Mon, 14 Nov 2016 05:26 Abramyan, Tigran <tig...@email.unc.edu> wrote:

> Thank you Mark,
>
> One more question regarding the centering of the frames at 300 replica
> (0.xtc) using trjconv. I have used a few trjconv options, however do not
> seem to be removing jumps from the original trajectory. For example, the
> command below works for me when applied to the *xtc file produced in
> regular MD, however, with REMD it produces a trajectory which won't be of
> use for example in VMD:
>
>  echo 1 | trjconv -s 0.tpr -f 0.xtc -o 300.xtc -pbc nojump -dt 40
>
> I am assuming I may need to use a combination of tpr files to produce the
> nojump 300.xtc file?
>
> Please advise,
> Thank you very much.
> Tigran
>
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> Abraham <mark.j.abra...@gmail.com>
> Sent: Tuesday, November 8, 2016 1:15 PM
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] REMD ensemble of states
>
> Yes
>
> On Tue, 8 Nov 2016 18:43 Abramyan, Tigran <tig...@email.unc.edu> wrote:
>
> > Hi Mark,
> >
> > Thanks a lot for your prompt response. So  demux.pl creates continuous
> > trajectories, *_trajout.xtc, but the ensemble of states (lowest energy
> > ensemble, typically of interest in the analysis of REMD results) is saved
> > in the original  0.xtc file produced during REMD before using demux.pl?
> >
> > Thank you,
> > Tigran
> >
> > 
> > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> > gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> > Abraham <mark.j.abra...@gmail.com>
> > Sent: Tuesday, November 8, 2016 5:53 AM
> > To: gmx-us...@gromacs.org
> > Subject: Re: [gmx-users] REMD ensemble of states
> >
> > Hi,
> >
> > Mdrun wrote that. You made the trajectories contiguous with the demux.
> >
> > Mark
> >
> > On Tue, 8 Nov 2016 04:55 Abramyan, Tigran <tig...@email.unc.edu> wrote:
> >
> > > Hi,
> > >
> > >
> > > I conducted REMD, and extracted the trajectories via
> > > trjcat -f *.trr -demux replica_index.xvg
> > > And now I was wondering which *.xtc file is the ensemble of states at
> the
> > > baseline replica (lowest temperature replica). Intuitively my guess is
> > that
> > > the numbers in the names of *_trajout.xtc files correspond to the
> replica
> > > numbers starting from the baseline, and hence 0_trajout.xtc is the
> > ensemble
> > > of states at the baseline replica, but I may be wrong.
> > >
> > >
> > > Please suggest.
> > >
> > >
> > > Thank you,
> > >
> > > Tigran
> > >
> > >
> > > --
> > > Tigran M. Abramyan, Ph.D.
> > > Postdoctoral Fellow, Computational Biophysics & Molecular Design
> > > Center for Integrative Chemical Biology and Drug Discovery
> > > Eshelman School of Pharmacy
> > > University of North Carolina at Chapel Hill
> > > Chapel Hill, NC 27599-7363
> > >

Re: [gmx-users] REMD ensemble of states

2016-11-13 Thread Abramyan, Tigran
Thank you Mark,

One more question regarding centering the frames of the 300 K replica (0.xtc) using 
trjconv. I have tried a few trjconv options, but none seem to remove the jumps from 
the original trajectory. For example, the command below works when applied to an 
.xtc file produced by regular MD, but with REMD it produces a trajectory that is 
not usable in VMD, for example:

 echo 1 | trjconv -s 0.tpr -f 0.xtc -o 300.xtc -pbc nojump -dt 40

I am assuming I may need to use a combination of tpr files to produce the 
nojump 300.xtc file?

Please advise,
Thank you very much.
Tigran



From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
<gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark Abraham 
<mark.j.abra...@gmail.com>
Sent: Tuesday, November 8, 2016 1:15 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] REMD ensemble of states

Yes

On Tue, 8 Nov 2016 18:43 Abramyan, Tigran <tig...@email.unc.edu> wrote:

> Hi Mark,
>
> Thanks a lot for your prompt response. So  demux.pl creates continuous
> trajectories, *_trajout.xtc, but the ensemble of states (lowest energy
> ensemble, typically of interest in the analysis of REMD results) is saved
> in the original  0.xtc file produced during REMD before using demux.pl?
>
> Thank you,
> Tigran
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> Abraham <mark.j.abra...@gmail.com>
> Sent: Tuesday, November 8, 2016 5:53 AM
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] REMD ensemble of states
>
> Hi,
>
> Mdrun wrote that. You made the trajectories contiguous with the demux.
>
> Mark
>
> On Tue, 8 Nov 2016 04:55 Abramyan, Tigran <tig...@email.unc.edu> wrote:
>
> > Hi,
> >
> >
> > I conducted REMD, and extracted the trajectories via
> > trjcat -f *.trr -demux replica_index.xvg
> > And now I was wondering which *.xtc file is the ensemble of states at the
> > baseline replica (lowest temperature replica). Intuitively my guess is
> that
> > the numbers in the names of *_trajout.xtc files correspond to the replica
> > numbers starting from the baseline, and hence 0_trajout.xtc is the
> ensemble
> > of states at the baseline replica, but I may be wrong.
> >
> >
> > Please suggest.
> >
> >
> > Thank you,
> >
> > Tigran
> >
> >
> > --
> > Tigran M. Abramyan, Ph.D.
> > Postdoctoral Fellow, Computational Biophysics & Molecular Design
> > Center for Integrative Chemical Biology and Drug Discovery
> > Eshelman School of Pharmacy
> > University of North Carolina at Chapel Hill
> > Chapel Hill, NC 27599-7363
> >
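The `-pbc nojump` logic itself is easy to reason about, which may help with the failure mode above: nojump unwraps each atom relative to the previous frame (with a reference structure for the first frame, which is why a matching .tpr matters). A minimal 1-D sketch of the idea (hypothetical helper, not GROMACS code):

```python
def remove_jumps(series, box):
    """Unwrap a 1-D periodic coordinate time series so it varies continuously.

    Whenever a frame-to-frame displacement exceeds half the box length,
    shift by a whole box length -- the same idea trjconv's -pbc nojump
    applies per atom in 3-D.
    """
    out = [series[0]]
    for x in series[1:]:
        prev = out[-1]
        dx = x - prev
        dx -= box * round(dx / box)  # minimum-image displacement
        out.append(prev + dx)
    return out
```

For example, `remove_jumps([0.9, 0.05, 0.15], 1.0)` continues past the box edge to `[0.9, 1.05, 1.15]` instead of jumping back to 0.05. Since the first output frame anchors everything that follows, a reference that does not match the first frame of the trajectory derails the whole unwrapping.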


Re: [gmx-users] REMD ensemble of states

2016-11-08 Thread Mark Abraham
Yes

On Tue, 8 Nov 2016 18:43 Abramyan, Tigran <tig...@email.unc.edu> wrote:

> Hi Mark,
>
> Thanks a lot for your prompt response. So  demux.pl creates continuous
> trajectories, *_trajout.xtc, but the ensemble of states (lowest energy
> ensemble, typically of interest in the analysis of REMD results) is saved
> in the original  0.xtc file produced during REMD before using demux.pl?
>
> Thank you,
> Tigran
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> Abraham <mark.j.abra...@gmail.com>
> Sent: Tuesday, November 8, 2016 5:53 AM
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] REMD ensemble of states
>
> Hi,
>
> Mdrun wrote that. You made the trajectories contiguous with the demux.
>
> Mark
>
> On Tue, 8 Nov 2016 04:55 Abramyan, Tigran <tig...@email.unc.edu> wrote:
>
> > Hi,
> >
> >
> > I conducted REMD, and extracted the trajectories via
> > trjcat -f *.trr -demux replica_index.xvg
> > And now I was wondering which *.xtc file is the ensemble of states at the
> > baseline replica (lowest temperature replica). Intuitively my guess is
> that
> > the numbers in the names of *_trajout.xtc files correspond to the replica
> > numbers starting from the baseline, and hence 0_trajout.xtc is the
> ensemble
> > of states at the baseline replica, but I may be wrong.
> >
> >
> > Please suggest.
> >
> >
> > Thank you,
> >
> > Tigran
> >
> >
> > --
> > Tigran M. Abramyan, Ph.D.
> > Postdoctoral Fellow, Computational Biophysics & Molecular Design
> > Center for Integrative Chemical Biology and Drug Discovery
> > Eshelman School of Pharmacy
> > University of North Carolina at Chapel Hill
> > Chapel Hill, NC 27599-7363
> >


Re: [gmx-users] REMD ensemble of states

2016-11-08 Thread Abramyan, Tigran
Hi Mark,

Thanks a lot for your prompt response. So  demux.pl creates continuous 
trajectories, *_trajout.xtc, but the ensemble of states (lowest energy 
ensemble, typically of interest in the analysis of REMD results) is saved in 
the original  0.xtc file produced during REMD before using demux.pl?

Thank you,
Tigran


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
<gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark Abraham 
<mark.j.abra...@gmail.com>
Sent: Tuesday, November 8, 2016 5:53 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] REMD ensemble of states

Hi,

Mdrun wrote that. You made the trajectories contiguous with the demux.

Mark

On Tue, 8 Nov 2016 04:55 Abramyan, Tigran <tig...@email.unc.edu> wrote:

> Hi,
>
>
> I conducted REMD, and extracted the trajectories via
> trjcat -f *.trr -demux replica_index.xvg
> And now I was wondering which *.xtc file is the ensemble of states at the
> baseline replica (lowest temperature replica). Intuitively my guess is that
> the numbers in the names of *_trajout.xtc files correspond to the replica
> numbers starting from the baseline, and hence 0_trajout.xtc is the ensemble
> of states at the baseline replica, but I may be wrong.
>
>
> Please suggest.
>
>
> Thank you,
>
> Tigran
>
>
> --
> Tigran M. Abramyan, Ph.D.
> Postdoctoral Fellow, Computational Biophysics & Molecular Design
> Center for Integrative Chemical Biology and Drug Discovery
> Eshelman School of Pharmacy
> University of North Carolina at Chapel Hill
> Chapel Hill, NC 27599-7363
>


Re: [gmx-users] REMD ensemble of states

2016-11-08 Thread Mark Abraham
Hi,

Mdrun wrote that. You made the trajectories contiguous with the demux.

Mark

On Tue, 8 Nov 2016 04:55 Abramyan, Tigran  wrote:

> Hi,
>
>
> I conducted REMD, and extracted the trajectories via
> trjcat -f *.trr -demux replica_index.xvg
> And now I was wondering which *.xtc file is the ensemble of states at the
> baseline replica (lowest temperature replica). Intuitively my guess is that
> the numbers in the names of *_trajout.xtc files correspond to the replica
> numbers starting from the baseline, and hence 0_trajout.xtc is the ensemble
> of states at the baseline replica, but I may be wrong.
>
>
> Please suggest.
>
>
> Thank you,
>
> Tigran
>
>
> --
> Tigran M. Abramyan, Ph.D.
> Postdoctoral Fellow, Computational Biophysics & Molecular Design
> Center for Integrative Chemical Biology and Drug Discovery
> Eshelman School of Pharmacy
> University of North Carolina at Chapel Hill
> Chapel Hill, NC 27599-7363
>


[gmx-users] REMD ensemble of states

2016-11-07 Thread Abramyan, Tigran
Hi,


I conducted REMD, and extracted the trajectories via
trjcat -f *.trr -demux replica_index.xvg
And now I was wondering which *.xtc file is the ensemble of states at the 
baseline replica (lowest temperature replica). Intuitively my guess is that the 
numbers in the names of *_trajout.xtc files correspond to the replica numbers 
starting from the baseline, and hence 0_trajout.xtc is the ensemble of states 
at the baseline replica, but I may be wrong.


Please suggest.


Thank you,

Tigran


--
Tigran M. Abramyan, Ph.D.
Postdoctoral Fellow, Computational Biophysics & Molecular Design
Center for Integrative Chemical Biology and Drug Discovery
Eshelman School of Pharmacy
University of North Carolina at Chapel Hill
Chapel Hill, NC 27599-7363

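For intuition, the demultiplexing that `trjcat -demux replica_index.xvg` performs is just a per-frame reshuffle driven by the index file. A toy Python sketch (assuming each index row lists, for every output slot, the replica whose frame belongs there at that time; `demux` is a hypothetical helper, not the GROMACS implementation):

```python
def demux(replica_frames, index_rows):
    """Reorder per-replica frames into demultiplexed trajectories.

    replica_frames[r][t] is the frame written by replica r at time t;
    index_rows[t][j] names the replica whose frame goes into output
    trajectory j at time t (one row of replica_index.xvg, time column
    dropped). Returns out[j][t], one list per *_trajout trajectory.
    """
    n_out = len(index_rows[0])
    out = [[] for _ in range(n_out)]
    for t, row in enumerate(index_rows):
        for j, r in enumerate(row):
            out[j].append(replica_frames[r][t])
    return out
```

Either direction of the reshuffle is the same operation; per the thread's conclusion, mdrun's raw files (0.xtc, ...) already hold the fixed-temperature ensembles, while the demuxed *_trajout.xtc files follow a single continuous replica.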


Re: [gmx-users] REMD - replicas sampling in temperatures beyond the assigned range

2016-06-30 Thread Mark Abraham
Hi,

Best practice is to read and learn others practice from publications that
are similar to what you want to do, rather than making ad-hoc changes. In
this case, the GROMACS defaults are pretty close to the de facto standard,
and supported by analysis work done by other members of the community.

Mark
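
For reference, whether a configuration travels up or down the ladder is governed by the standard Metropolis acceptance criterion for temperature REMD; a small sketch (function name and unit choices are illustrative):

```python
import math

KB = 0.0083144621  # Boltzmann constant, kJ/(mol K)

def exchange_probability(t_i, t_j, u_i, u_j):
    """Metropolis acceptance probability for swapping the configurations
    of two temperature-REMD replicas:

        p = min(1, exp((beta_i - beta_j) * (U_i - U_j)))

    with beta = 1/(kB*T), U the potential energies, in kJ/mol here.
    """
    beta_i = 1.0 / (KB * t_i)
    beta_j = 1.0 / (KB * t_j)
    return min(1.0, math.exp((beta_i - beta_j) * (u_i - u_j)))
```

With a sensibly spaced ladder the energy distributions of neighbouring replicas overlap, so these probabilities stay in a useful range; a replica can only "sample beyond its range" by legitimately exchanging its way there.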

On Thu, Jun 30, 2016 at 4:16 PM NISHA Prakash 
wrote:

> Dear Justin,
>
> Thanks a lot for pointing out the issues. I now understand why there were
> such high oscillations.
>
> Could you please also tell me if there are any ideal values for pme_order
> and fourier spacing with respect to the cut offs' value of 1.4.
>
> Does the following Note imply I can raise the fourier grid spacing to 0.25?
>
> NOTE 2 [file sim-new.mdp]:
>   The optimal PME mesh load for parallel simulations is below 0.5
>   and for highly parallel simulations between 0.25 and 0.33,
>   for higher performance, increase the cut-off and the PME grid spacing
>
> Thank you again,
>
> Nisha
>
>
> On Thu, Jun 30, 2016 at 6:55 PM, Justin Lemkul  wrote:
>
> >
> >
> > On 6/30/16 9:16 AM, NISHA Prakash wrote:
> >
> >> Dear Justin,
> >>
> >> Thank you for your reply.
> >> It is a protein carbohydrate system.  Including the solvent, the number
> of
> >> atoms is 43499.
> >> I have minimized the system for 200 ps followed by NPT and NVT
> simulations
> >> for 200 ps respectively
> >>
> >>
> > Given that your temperature output started from 0 K, then you did not
> > continue from the equilibration properly by supplying the checkpoint file
> > to grompp -t. This is important to get right, otherwise you're basically
> > starting over from some random point (an equilibrated structure without
> any
> > velocities likely isn't a physically realistic state).
> >
> > Below is the .mdp file.
> >>
> >>
> >> ; VARIOUS PREPROCESSING OPTIONS
> >> title= REMD Simulation
> >> define   = -DPOSRES
> >>
> >>
> >> ; RUN CONTROL PARAMETERS
> >> integrator   = md-vv  ; velocity verlet algorithm -
> >> tinit= 0 ;
> >> dt   = 0.002; timestep in ps
> >> nsteps  = 500;
> >> simulation-part  = 1 ; Part index is updated automatically on
> >> checkpointing
> >> comm-mode= Linear ; mode for center of mass motion
> removal
> >> nstcomm  = 100 ; number of steps for center of mass
> motion
> >> removal
> >> comm-grps= Protein_Carb  Water_and_Ions ; group(s)
> for
> >> center of mass motion removal
> >>
> >>
> > In a solvated system, you should not be separating these groups.  This
> > could explain the sudden jump in temperature - you could have things
> > clashing badly over the course of the simulation.
> >
> >
> >
> >> ; ENERGY MINIMIZATION OPTIONS
> >> emtol= 10 ; Force tolerance
> >> emstep   = 0.01 ; initial step-size
> >> niter= 20 ; Max number of iterations in relax-shells
> >> fcstep   = 0 ; Step size (ps^2) for minimization of
> >> flexible constraints
> >> nstcgsteep   = 1000 ; Frequency of steepest descents steps
> >> when
> >> doing CG
> >> nbfgscorr= 10
> >>
> >>
> >> ; OUTPUT CONTROL OPTIONS
> >> nstxout  = 5 ; Writing full precision coordinates
> >> every
> >> ns
> >> nstvout  = 5 ; Writing velocities every nanosecond
> >> nstfout  = 0 ; Not writing forces
> >> nstlog   = 5000  ; Writing to the log file every step
> 10ps
> >> nstcalcenergy= 100
> >> nstenergy= 5000  ; Writing out energy information every
> >> step 10ps
> >> nstxtcout= 2500  ; Writing coordinates every 5 ps
> >> xtc-precision= 1000
> >> xtc-grps = Protein_Carb  Water_and_Ions ; subset of
> >> atoms for the .xtc file.
> >> energygrps   = Protein_Carb  Water_and_Ions ; Selection
> of
> >> energy groups
> >>
> >>
> >> ; NEIGHBORSEARCHING PARAMETERS
> >> nstlist  = 10 ; nblist update frequency-
> >> ns-type  = Grid ; ns algorithm (simple or grid)
> >> pbc  = xyz ; Periodic boundary conditions: xyz,
> >> no,
> >> xy
> >> periodic-molecules   = no
> >> rlist= 1.4 ;  nblist cut-off
> >> rlistlong= -1 ; long-range cut-off for switched
> >> potentials
> >>
> >>
> >> ; OPTIONS FOR ELECTROSTATICS
> >> coulombtype  = PME ; Method for doing electrostatics
> >> rcoulomb = 1.4 ;
> >> epsilon-r= 1 ; Relative dielectric constant for the
> >> medium
> >> pme_order= 10;
> >>
> >>
> >> ; OPTIONS FOR VDW
> >> vdw-type = Cut-off  ; Method for doing Van der Waals
> >> rvdw-switch  = 0 ; cut-off lengths
> >> rvdw = 1.4 ;

Re: [gmx-users] REMD - replicas sampling in temperatures beyond the assigned range

2016-06-30 Thread NISHA Prakash
Dear Justin,

Thanks a lot for pointing out the issues. I now understand why there were
such high oscillations.

Could you please also tell me whether there are ideal values for pme_order
and fourierspacing given the cutoff value of 1.4?

Does the following Note imply I can raise the fourier grid spacing to 0.25?

NOTE 2 [file sim-new.mdp]:
  The optimal PME mesh load for parallel simulations is below 0.5
  and for highly parallel simulations between 0.25 and 0.33,
  for higher performance, increase the cut-off and the PME grid spacing

Thank you again,

Nisha


On Thu, Jun 30, 2016 at 6:55 PM, Justin Lemkul  wrote:

>
>
> On 6/30/16 9:16 AM, NISHA Prakash wrote:
>
Re: [gmx-users] REMD - replicas sampling in temperatures beyond the assigned range

2016-06-30 Thread Justin Lemkul



On 6/30/16 9:16 AM, NISHA Prakash wrote:

Dear Justin,

Thank you for your reply.
It is a protein carbohydrate system.  Including the solvent, the number of
atoms is 43499.
I have minimized the system for 200 ps followed by NPT and NVT simulations
for 200 ps respectively



Given that your temperature output started from 0 K, then you did not continue 
from the equilibration properly by supplying the checkpoint file to grompp -t. 
This is important to get right, otherwise you're basically starting over from 
some random point (an equilibrated structure without any velocities likely isn't 
a physically realistic state).
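For example, a continuation workflow might look like the sketch below. The file names (nvt_equil_*.gro/.cpt, remd_*.mdp, topol.top) are hypothetical; the point is only the -t flag carrying the equilibration checkpoint into grompp:

```shell
#!/bin/sh
# Sketch: build each replica's run input from the equilibrated state.
# Passing the equilibration checkpoint via -t preserves velocities and
# thermostat state, so replicas do not start from 0 K.
for i in 0 1 2 3; do
  cmd="gmx grompp -f remd_${i}.mdp -c nvt_equil_${i}.gro -t nvt_equil_${i}.cpt -p topol.top -o remd_${i}.tpr"
  echo "$cmd"   # on a machine with GROMACS, replace echo with: eval "$cmd"
done
```

Here the loop only prints the commands, since GROMACS lives on the cluster; adapt the replica count and names to your own setup.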



Below is the .mdp file.


; VARIOUS PREPROCESSING OPTIONS
title= REMD Simulation
define   = -DPOSRES


; RUN CONTROL PARAMETERS
integrator   = md-vv  ; velocity verlet algorithm -
tinit= 0 ;
dt   = 0.002; timestep in ps
nsteps  = 500;
simulation-part  = 1 ; Part index is updated automatically on
checkpointing
comm-mode= Linear ; mode for center of mass motion removal
nstcomm  = 100 ; number of steps for center of mass motion
removal
comm-grps= Protein_Carb  Water_and_Ions ; group(s) for
center of mass motion removal



In a solvated system, you should not be separating these groups.  This could 
explain the sudden jump in temperature - you could have things clashing badly 
over the course of the simulation.
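If the intent is plain global COM-motion removal, a single group avoids the problem. A sketch of the relevant .mdp lines (adapt to your system):

```
; COM motion removal over the whole system (sketch)
comm-mode = Linear
comm-grps = System
```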




; ENERGY MINIMIZATION OPTIONS
emtol= 10 ; Force tolerance
emstep   = 0.01 ; initial step-size
niter= 20 ; Max number of iterations in relax-shells
fcstep   = 0 ; Step size (ps^2) for minimization of
flexible constraints
nstcgsteep   = 1000 ; Frequency of steepest descents steps when
doing CG
nbfgscorr= 10


; OUTPUT CONTROL OPTIONS
nstxout  = 5 ; Writing full precision coordinates every
ns
nstvout  = 5 ; Writing velocities every nanosecond
nstfout  = 0 ; Not writing forces
nstlog   = 5000  ; Writing to the log file every step 10ps
nstcalcenergy= 100
nstenergy= 5000  ; Writing out energy information every
step 10ps
nstxtcout= 2500  ; Writing coordinates every 5 ps
xtc-precision= 1000
xtc-grps = Protein_Carb  Water_and_Ions ; subset of
atoms for the .xtc file.
energygrps   = Protein_Carb  Water_and_Ions ; Selection of
energy groups


; NEIGHBORSEARCHING PARAMETERS
nstlist  = 10 ; nblist update frequency-
ns-type  = Grid ; ns algorithm (simple or grid)
pbc  = xyz ; Periodic boundary conditions: xyz, no,
xy
periodic-molecules   = no
rlist= 1.4 ;  nblist cut-off
rlistlong= -1 ; long-range cut-off for switched
potentials


; OPTIONS FOR ELECTROSTATICS
coulombtype  = PME ; Method for doing electrostatics
rcoulomb = 1.4 ;
epsilon-r= 1 ; Relative dielectric constant for the
medium
pme_order= 10;


; OPTIONS FOR VDW
vdw-type = Cut-off  ; Method for doing Van der Waals
rvdw-switch  = 0 ; cut-off lengths
rvdw = 1.4 ;
DispCorr = EnerPres; Apply long range dispersion
corrections for Energy and Pressure
table-extension  = 1; Extension of the potential lookup tables
beyond the cut-off
fourierspacing   = 0.08;  Spacing for the PME/PPPM FFT grid



This small Fourier spacing, coupled with the very high PME order above, is going 
to unnecessarily slow your system down.  Is there some reason you have set these 
this way?
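For reference, a more conventional PME setup looks like the fragment below; these are the defaults in recent GROMACS versions, shown as a sketch rather than a prescription for this particular system:

```
; Typical PME settings (GROMACS defaults)
pme_order      = 4     ; cubic interpolation
fourierspacing = 0.12  ; nm; 0.08 with pme_order 10 is far more expensive
```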




; GENERALIZED BORN ELECTROSTATICS
gb-algorithm = Still; Algorithm for calculating Born radii
nstgbradii   = 1; Frequency of calculating the Born radii
inside rlist
rgbradii = 1; Cutoff for Born radii calculation
gb-epsilon-solvent   = 80; Dielectric coefficient of the implicit
solvent
gb-saltconc  = 0; Salt concentration in M for Generalized
Born models


; Scaling factors used in the OBC GB model. Default values are OBC(II)
gb-obc-alpha = 1
gb-obc-beta  = 0.8
gb-obc-gamma = 4.85
gb-dielectric-offset = 0.009
sa-algorithm = Ace-approximation
sa-surface-tension   = -1; Surface tension (kJ/mol/nm^2) for the SA
(nonpolar surface) part of GBSA - default -1



Implicit solvent should not be used if you have explicit solvent, though it 
looks like these options are probably off since the default for the 
implicit-solvent keyword is "no," but be aware that these are extraneous.





; Temperature coupling
tcoupl = nose-hoover
nsttcouple  

Re: [gmx-users] REMD - replicas sampling in temperatures beyond the assigned range

2016-06-30 Thread NISHA Prakash
Dear Justin,

Thank you for your reply.
It is a protein carbohydrate system.  Including the solvent, the number of
atoms is 43499.
I have minimized the system for 200 ps followed by NPT and NVT simulations
for 200 ps respectively

Below is the .mdp file.


; VARIOUS PREPROCESSING OPTIONS
title= REMD Simulation
define   = -DPOSRES


; RUN CONTROL PARAMETERS
integrator   = md-vv  ; velocity verlet algorithm -
tinit= 0 ;
dt   = 0.002; timestep in ps
nsteps  = 500;
simulation-part  = 1 ; Part index is updated automatically on
checkpointing
comm-mode= Linear ; mode for center of mass motion removal
nstcomm  = 100 ; number of steps for center of mass motion
removal
comm-grps= Protein_Carb  Water_and_Ions ; group(s) for
center of mass motion removal


; ENERGY MINIMIZATION OPTIONS
emtol= 10 ; Force tolerance
emstep   = 0.01 ; initial step-size
niter= 20 ; Max number of iterations in relax-shells
fcstep   = 0 ; Step size (ps^2) for minimization of
flexible constraints
nstcgsteep   = 1000 ; Frequency of steepest descents steps when
doing CG
nbfgscorr= 10


; OUTPUT CONTROL OPTIONS
nstxout  = 5 ; Writing full precision coordinates every
ns
nstvout  = 5 ; Writing velocities every nanosecond
nstfout  = 0 ; Not writing forces
nstlog   = 5000  ; Writing to the log file every step 10ps
nstcalcenergy= 100
nstenergy= 5000  ; Writing out energy information every
step 10ps
nstxtcout= 2500  ; Writing coordinates every 5 ps
xtc-precision= 1000
xtc-grps = Protein_Carb  Water_and_Ions ; subset of
atoms for the .xtc file.
energygrps   = Protein_Carb  Water_and_Ions ; Selection of
energy groups


; NEIGHBORSEARCHING PARAMETERS
nstlist  = 10 ; nblist update frequency-
ns-type  = Grid ; ns algorithm (simple or grid)
pbc  = xyz ; Periodic boundary conditions: xyz, no,
xy
periodic-molecules   = no
rlist= 1.4 ;  nblist cut-off
rlistlong= -1 ; long-range cut-off for switched
potentials


; OPTIONS FOR ELECTROSTATICS
coulombtype  = PME ; Method for doing electrostatics
rcoulomb = 1.4 ;
epsilon-r= 1 ; Relative dielectric constant for the
medium
pme_order= 10;


; OPTIONS FOR VDW
vdw-type = Cut-off  ; Method for doing Van der Waals
rvdw-switch  = 0 ; cut-off lengths
rvdw = 1.4 ;
DispCorr = EnerPres; Apply long range dispersion
corrections for Energy and Pressure
table-extension  = 1; Extension of the potential lookup tables
beyond the cut-off
fourierspacing   = 0.08;  Spacing for the PME/PPPM FFT grid


; GENERALIZED BORN ELECTROSTATICS
gb-algorithm = Still; Algorithm for calculating Born radii
nstgbradii   = 1; Frequency of calculating the Born radii
inside rlist
rgbradii = 1; Cutoff for Born radii calculation
gb-epsilon-solvent   = 80; Dielectric coefficient of the implicit
solvent
gb-saltconc  = 0; Salt concentration in M for Generalized
Born models


; Scaling factors used in the OBC GB model. Default values are OBC(II)
gb-obc-alpha = 1
gb-obc-beta  = 0.8
gb-obc-gamma = 4.85
gb-dielectric-offset = 0.009
sa-algorithm = Ace-approximation
sa-surface-tension   = -1; Surface tension (kJ/mol/nm^2) for the SA
(nonpolar surface) part of GBSA - default -1



; Temperature coupling
tcoupl = nose-hoover
nsttcouple   = 10 ;
nh-chain-length  = 10
tc-grps  = Protein_Carb  Water_and_Ions ; Groups to
couple separately
tau-t= 1.0  1.0 ; Time constant (ps)-
ref-t  = 270.0 270.0; reference temperature (K)


; pressure coupling
pcoupl   = no  ;-


; GENERATE VELOCITIES FOR STARTUP RUN
gen-vel  = no
gen-temp  = 270.0
gen-seed = 173529


; OPTIONS FOR BONDS
continuation = yes ;  do not re-apply constraints to the start configuration

constraints  = all-bonds
constraint-algorithm = lincs ; Type of constraint algorithm-
lincs-order  = 4
lincs-iter   = 1
lincs-warnangle  = 30


Thank you for your help.

Nisha



On Thu, Jun 30, 2016 at 6:21 PM, Justin Lemkul  wrote:

>
>
> On 6/30/16 8:46 AM, NISHA Prakash wrote:
>
>> Dear all,
>>
>> I have conducted a 10ns REMD simulation for a protein ligand complex with
>> the temperature range - 270 

Re: [gmx-users] REMD - replicas sampling in temperatures beyond the assigned range

2016-06-30 Thread Justin Lemkul



On 6/30/16 8:46 AM, NISHA Prakash wrote:

Dear all,

I have conducted a 10 ns REMD simulation for a protein-ligand complex with
the temperature range 270 to 350 K; however, the temperature distribution
plot of the replicas shows that sampling has also occurred at higher
temperatures, beyond 350 K.
Below is an excerpt from the temperature .xvg file:


@title "Gromacs Energies"
@xaxis  label "Time (ps)"
@yaxis  label "(K)"
@TYPE xy
@ view 0.15, 0.15, 0.75, 0.85
@ legend on
@ legend box on
@ legend loctype view
@ legend 0.78, 0.8
@ legend length 2
@ s0 legend "Temperature"
    0.00    0.00
   10.00  350.997864
   20.00  353.618927
   30.00  350.068481
   40.00  353.921753
   50.00  359.485565
   60.00  353.463654
   70.00  352.015778
   80.00  350.657898
   90.00  351.927155
  100.00  354.539429
  110.00  354.287720
  120.00  349.436096
  130.00  352.960541
  140.00  351.631317
  150.00  354.217407
  160.00  350.185852
  170.00  350.294434
  180.00  350.980194
  190.00  350.914429
   
   
 470.00  349.224060
  480.00  350.819458
  490.00  348.541748
  500.00  350.393127
  510.00  398.775208
  520.00  444.802856
  530.00  470.899323
  540.00  466.652740
  550.00  465.600677
  560.00  469.22
  570.00  470.548370
  580.00  470.011566
  590.00  470.643951
  600.00  472.433197
  610.00  470.451172
  620.00  469.991699
  630.00  469.073090
  640.00  467.259521
  650.00  464.561798
  660.00  468.416901
  670.00  468.754913
  680.00  469.259613
  690.00  467.641144
  700.00  468.542328


Temperature coupling was done using the Nosé-Hoover algorithm.

Does this imply the sampling is wrong or insufficient?
Any help / suggestions would be appreciated.



How large is your system, and what is it?  What were your (full) .mdp settings? 
The fact that your temperature started at 0 K and ramped up suggests that you 
did not equilibrate prior to the run, did not generate appropriate velocities, 
or did not continue properly.  The sudden jump in temperature later suggests 
instability, and could be due to incorrect settings.  N-H allows for large
oscillations, but I wouldn't expect that degree of deviation in a stable system.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] REMD - replicas sampling in temperatures beyond the assigned range

2016-06-30 Thread NISHA Prakash
Dear all,

I have conducted a 10 ns REMD simulation for a protein-ligand complex with
the temperature range 270 to 350 K; however, the temperature distribution
plot of the replicas shows that sampling has also occurred at higher
temperatures, beyond 350 K.
Below is an excerpt from the temperature .xvg file:


@title "Gromacs Energies"
@xaxis  label "Time (ps)"
@yaxis  label "(K)"
@TYPE xy
@ view 0.15, 0.15, 0.75, 0.85
@ legend on
@ legend box on
@ legend loctype view
@ legend 0.78, 0.8
@ legend length 2
@ s0 legend "Temperature"
    0.00    0.00
   10.00  350.997864
   20.00  353.618927
   30.00  350.068481
   40.00  353.921753
   50.00  359.485565
   60.00  353.463654
   70.00  352.015778
   80.00  350.657898
   90.00  351.927155
  100.00  354.539429
  110.00  354.287720
  120.00  349.436096
  130.00  352.960541
  140.00  351.631317
  150.00  354.217407
  160.00  350.185852
  170.00  350.294434
  180.00  350.980194
  190.00  350.914429
   
   
 470.00  349.224060
  480.00  350.819458
  490.00  348.541748
  500.00  350.393127
  510.00  398.775208
  520.00  444.802856
  530.00  470.899323
  540.00  466.652740
  550.00  465.600677
  560.00  469.22
  570.00  470.548370
  580.00  470.011566
  590.00  470.643951
  600.00  472.433197
  610.00  470.451172
  620.00  469.991699
  630.00  469.073090
  640.00  467.259521
  650.00  464.561798
  660.00  468.416901
  670.00  468.754913
  680.00  469.259613
  690.00  467.641144
  700.00  468.542328


Temperature coupling was done using the Nosé-Hoover algorithm.

Does this imply the sampling is wrong or insufficient?
Any help / suggestions would be appreciated.

Thanking you in anticipation.

Nisha


Re: [gmx-users] REMD error

2016-05-13 Thread Mark Abraham
Hi,

If you've configured with GMX_MPI, then the resulting GROMACS binary is
called gmx_mpi, so mpirun -np X gmx_mpi mdrun -multi ...

Mark
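Concretely, for the 46-replica case in this thread, a single launch might look like the sketch below. It assumes tpr files named md_0_0.tpr through md_0_45.tpr (with -multi, mdrun appends the replica index to the -s prefix) and a rank count that is a multiple of 46:

```shell
#!/bin/sh
# Sketch only: echoes the command instead of running it, since GROMACS/MPI
# live on the cluster. One mpirun launch drives all replicas together.
NREP=46
cmd="mpirun -np ${NREP} gmx_mpi mdrun -s md_0_.tpr -multi ${NREP} -replex 1000 -reseed -1"
echo "$cmd"
```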

On Fri, May 13, 2016 at 10:09 AM YanhuaOuyang <15901283...@163.com> wrote:

> Hi,
> I have installed openmpi 1.10, and I can run mpirun. When I installed
> gromacs 5.1, I configured -DGMX_MPI=on.
> The error still happens.
> > On 13 May 2016, at 15:59, Mark Abraham wrote:
> >
> > Hi,
> >
> > Yes. Exactly as the error message says, you need to compile GROMACS
> > differently, with real MPI support. See
> >
> http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-features.html#running-multi-simulations
> >
> > Mark
> >
> > On Fri, May 13, 2016 at 9:47 AM YanhuaOuyang <15901283...@163.com>
> wrote:
> >
> >> Hi,
> >> I am running REMD of a protein; when I submit "gmx mdrun -s
> >> md_0_${i}.tpr -multi 46 -replex 1000 -reseed -1", it fails with the error below:
> >> Fatal error:
> >> mdrun -multi or -multidir are not supported with the thread-MPI library.
> >> Please compile GROMACS with a proper external MPI library.
> >> I have installed openmpi and gromacs 5.1.
> >> Does anyone know the problem?
> >>
> >> Yours sincerely,
> >> Ouyang

Re: [gmx-users] REMD error

2016-05-13 Thread YanhuaOuyang
Hi,
I have installed openmpi 1.10, and I can run mpirun. When I installed
gromacs 5.1, I configured -DGMX_MPI=on.
The error still happens.
> On 13 May 2016, at 15:59, Mark Abraham wrote:
> 
> Hi,
> 
> Yes. Exactly as the error message says, you need to compile GROMACS
> differently, with real MPI support. See
> http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-features.html#running-multi-simulations
> 
> Mark
> 
> On Fri, May 13, 2016 at 9:47 AM YanhuaOuyang <15901283...@163.com> wrote:
> 
>> Hi,
>> I am running REMD of a protein; when I submit "gmx mdrun -s
>> md_0_${i}.tpr -multi 46 -replex 1000 -reseed -1", it fails with the error below:
>> Fatal error:
>> mdrun -multi or -multidir are not supported with the thread-MPI library.
>> Please compile GROMACS with a proper external MPI library.
>> I have installed openmpi and gromacs 5.1.
>> Does anyone know the problem?
>> 
>> Yours sincerely,
>> Ouyang

Re: [gmx-users] REMD error

2016-05-13 Thread Mark Abraham
Hi,

Yes. Exactly as the error message says, you need to compile GROMACS
differently, with real MPI support. See
http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-features.html#running-multi-simulations

Mark

On Fri, May 13, 2016 at 9:47 AM YanhuaOuyang <15901283...@163.com> wrote:

> Hi,
> I am running REMD of a protein; when I submit "gmx mdrun -s
> md_0_${i}.tpr -multi 46 -replex 1000 -reseed -1", it fails with the error below:
> Fatal error:
> mdrun -multi or -multidir are not supported with the thread-MPI library.
> Please compile GROMACS with a proper external MPI library.
> I have installed openmpi and gromacs 5.1.
> Does anyone know the problem?
>
> Yours sincerely,
> Ouyang


[gmx-users] REMD error

2016-05-13 Thread YanhuaOuyang
Hi,
I am running REMD of a protein; when I submit "gmx mdrun -s md_0_${i}.tpr
-multi 46 -replex 1000 -reseed -1", it fails with the error below:
Fatal error:
mdrun -multi or -multidir are not supported with the thread-MPI library. Please
compile GROMACS with a proper external MPI library.
I have installed openmpi and gromacs 5.1.
Does anyone know the problem?

Yours sincerely,
Ouyang


Re: [gmx-users] REMD on more than one node

2016-05-13 Thread Mark Abraham
Hi,

You'll need to choose a replica setup that naturally fits on your available
hardware. Number of nodes * number of cores per node must equal number of
replicas * number of cores per replica. See also
http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-features.html#running-multi-simulations

Mark

On Fri, May 13, 2016 at 4:41 AM YanhuaOuyang <15901283...@163.com> wrote:

> Hi,
> I am running REMD with gromacs 5.0; I have 46 replicas, 4 nodes, and 16 cores
> per node. How can I use my compute resources, and what is the right "gmx
> mdrun" command?
> The commands are below; I am not sure whether they are right:
> mpirun -np 4 -npme gmx mdrun -s md_01.tpr  -multi 46 -replex 500 -reseed
> -1.
> mpirun -np 4 -npme gmx mdrun -s md_02.tpr  -multi 46 -replex 500 -reseed
> -1.
> mpirun -np 4 -npme gmx mdrun -s md_03.tpr  -multi 46 -replex 500 -reseed
> -1.
> …
> mpirun -np 4 -npme gmx mdrun -s md_46.tpr  -multi 46 -replex 500 -reseed
> -1.
>
>
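The arithmetic behind Mark's constraint can be checked in a few lines. The helper below is hypothetical (not part of GROMACS); it shows why 4 nodes × 16 cores = 64 cores cannot host 46 replicas evenly:

```python
def cores_per_replica(nodes, cores_per_node, replicas):
    """Cores each replica gets if the layout divides evenly, else None."""
    total = nodes * cores_per_node
    return total // replicas if total % replicas == 0 else None

print(cores_per_replica(4, 16, 46))  # None: 64 cores / 46 replicas is uneven
print(cores_per_replica(4, 16, 32))  # 2: dropping to 32 replicas fits cleanly
```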

[gmx-users] REMD on more than one node

2016-05-12 Thread YanhuaOuyang
Hi,
I am running REMD with gromacs 5.0; I have 46 replicas, 4 nodes, and 16 cores per
node. How can I use my compute resources, and what is the right "gmx mdrun" command?
The commands are below; I am not sure whether they are right:
mpirun -np 4 -npme gmx mdrun -s md_01.tpr  -multi 46 -replex 500 -reseed -1.
mpirun -np 4 -npme gmx mdrun -s md_02.tpr  -multi 46 -replex 500 -reseed -1.
mpirun -np 4 -npme gmx mdrun -s md_03.tpr  -multi 46 -replex 500 -reseed -1.
…
mpirun -np 4 -npme gmx mdrun -s md_46.tpr  -multi 46 -replex 500 -reseed -1.



Re: [gmx-users] REMD--how to determine the temperature distribution

2016-04-26 Thread YanhuaOuyang
When I choose NVT, it reports: "ERROR: Can not do constant volume yet!" Do you
have other ways to determine the temperatures besides the two websites?
> On 26 April 2016, at 20:16, Mark Abraham <mark.j.abra...@gmail.com> wrote:
> 
> No, just choose NVT.
> 
> Mark
> 
> On Tue, 26 Apr 2016 13:42 YanhuaOuyang <15901283...@163.com> wrote:
> 
>> Thank you so much, but the latter one is only suitable for REMD in NPT
>> ensemble.
>>> On 26 April 2016, at 01:20, Christopher Neale <chris.ne...@alum.utoronto.ca> wrote:
>>> 
>>> There are many published approaches. Here is the one that I use:
>> http://origami.phys.rpi.edu/racc/rate_of_acceptance.php
>>> Another example is here: http://folding.bmc.uu.se/remd/
>>> 
>>> 
>>> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
>> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of
>> YanhuaOuyang <15901283...@163.com>
>>> Sent: 25 April 2016 10:36
>>> To: gmx-us...@gromacs.org
>>> Subject: [gmx-users] REMD--how to determine the temperature distribution
>>> 
>>> Dear all,
>>>   I am going to run REMD of a protein (explicit solvent) in the NVT
>> ensemble with gromacs, but I am having trouble determining an optimum
>> temperature distribution. Does anybody know how to determine the
>> temperatures?

Re: [gmx-users] REMD--how to determine the temperature distribution

2016-04-26 Thread Mark Abraham
No, just choose NVT.

Mark

On Tue, 26 Apr 2016 13:42 YanhuaOuyang <15901283...@163.com> wrote:

> Thank you so much, but the latter one is only suitable for REMD in NPT
> ensemble.
> > On 26 April 2016, at 01:20, Christopher Neale <chris.ne...@alum.utoronto.ca> wrote:
> >
> > There are many published approaches. Here is the one that I use:
> http://origami.phys.rpi.edu/racc/rate_of_acceptance.php
> > Another example is here: http://folding.bmc.uu.se/remd/
> >
> > 
> > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of
> YanhuaOuyang <15901283...@163.com>
> > Sent: 25 April 2016 10:36
> > To: gmx-us...@gromacs.org
> > Subject: [gmx-users] REMD--how to determine the temperature distribution
> >
> > Dear all,
> >    I am going to run REMD of a protein (explicit solvent) in the NVT
> ensemble with gromacs, but I am having trouble determining an optimum
> temperature distribution. Does anybody know how to determine the
> temperatures?

Re: [gmx-users] REMD--how to determine the temperature distribution

2016-04-26 Thread YanhuaOuyang
Thank you so much, but the latter one is only suitable for REMD in the NPT ensemble.
> On 26 April 2016, at 01:20, Christopher Neale <chris.ne...@alum.utoronto.ca> wrote:
> 
> There are many published approaches. Here is the one that I use: 
> http://origami.phys.rpi.edu/racc/rate_of_acceptance.php
> Another example is here: http://folding.bmc.uu.se/remd/
> 
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
> <gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of YanhuaOuyang 
> <15901283...@163.com>
> Sent: 25 April 2016 10:36
> To: gmx-us...@gromacs.org
> Subject: [gmx-users] REMD--how to determine the temperature distribution
> 
> Dear all,
> I am going to run REMD of a protein (explicit solvent) in the NVT
> ensemble with gromacs, but I am having trouble determining an optimum
> temperature distribution. Does anybody know how to determine the
> temperatures?



Re: [gmx-users] REMD--how to determine the temperature distribution

2016-04-25 Thread Christopher Neale
There are many published approaches. Here is the one that I use: 
http://origami.phys.rpi.edu/racc/rate_of_acceptance.php
Another example is here: http://folding.bmc.uu.se/remd/
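
A common baseline behind such generators (not the exact algorithm of either page) is geometric spacing, T_i = Tmin * (Tmax/Tmin)^(i/(N-1)), which gives roughly constant overlap between neighbouring replicas. A Python sketch under that assumption, with an illustrative function name:

```python
def temperature_ladder(t_min, t_max, n):
    """Return n geometrically spaced temperatures (K) from t_min to t_max."""
    ratio = (t_max / t_min) ** (1.0 / (n - 1))
    return [round(t_min * ratio**i, 2) for i in range(n)]

# e.g. 8 replicas spanning 300-450 K
ladder = temperature_ladder(300.0, 450.0, 8)
```

The dedicated web tools above go further than this by accounting for system size and a target exchange probability.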


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
<gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of YanhuaOuyang 
<15901283...@163.com>
Sent: 25 April 2016 10:36
To: gmx-us...@gromacs.org
Subject: [gmx-users] REMD--how to determine the temperature distribution

Dear all,
I am going to run REMD of a protein (explicit solvent) in the NVT ensemble
with GROMACS, but I have trouble determining an optimum temperature
distribution. Does anybody know how to determine the temperatures?


[gmx-users] REMD--how to determine the temperature distribution

2016-04-25 Thread YanhuaOuyang
Dear all,
I am going to run REMD of a protein (explicit solvent) in the NVT ensemble
with GROMACS, but I have trouble determining an optimum temperature
distribution. Does anybody know how to determine the temperatures?


Re: [gmx-users] REMD of IDPs

2016-04-08 Thread Smith, Micholas D.
Very good point from João. Always remember to check that your box length is big 
enough!

===
Micholas Dean Smith, PhD.
Post-doctoral Research Associate
University of Tennessee/Oak Ridge National Laboratory
Center for Molecular Biophysics


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
<gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of João Henriques 
<joao.henriques.32...@gmail.com>
Sent: Friday, April 08, 2016 8:24 AM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] REMD of IDPs

One small remark to Micholas' email:

- Make sure the simulation box is big enough to allow the IDP to fully
stretch without interacting with its periodic image(s). This is non-trivial
if you build your system from a random coil. That's why I start from a
fully stretched conformation instead of a more representative conformation
of the system. Much easier to control and the time it takes to get to a
"meaningful" conformation is minimal.

/J



On Fri, Apr 8, 2016 at 2:10 PM, Smith, Micholas D. <smit...@ornl.gov> wrote:

> Dear Yanhua,
>
> Converting a sequence into a structure is itself an "open" problem in
> computational biology/biophysics. There are ways to generate potential
> structures if you also happen to have some restraints from NMR or other
> experiments (small-angle scattering or CD-Spectra) noted in the literature,
> but getting to the "native" fold is very challenging. One program that
> tries to address the sequence to structure problem is Rosetta (
> http://robetta.bakerlab.org/ ).
>
> If you have a short IDP fragment (less than 20 residues), one thing you
> can do is use something like Schrodinger's Maestro program (it's free from
> their webpage www.schrodinger.com) and use the molecule builder to "grow"
> the chain as a random coil (random phi-psi placement), save the PDB from it
> and then run MD at high temp to relax the structure into a potential
> starting structure. If it is longer, the IDP may have small structural
> segments (the chain is dominated by disorder but may have short-lived,
> meta-stable, secondary structure regions) in which case you can either try
> to build the molecule with a corresponding secondary structure distribution
> (using Maestro) or try using Rosetta and refine with energy minimization.
>
> Good Luck!
>
> ===
> Micholas Dean Smith, PhD.
> Post-doctoral Research Associate
> University of Tennessee/Oak Ridge National Laboratory
> Center for Molecular Biophysics
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of João
> Henriques <joao.henriques.32...@gmail.com>
> Sent: Friday, April 08, 2016 3:51 AM
> To: Discussion list for GROMACS users
> Subject: Re: [gmx-users] REMD of IDPs
>
> Dear Yanhua,
>
> To my knowledge (prior to gromacs 5.X at least), there are no gromacs
> tools able to turn a sequence into a PDB. The user must take care of that
> pre-processing on his/her own. I work with IDPs quite a lot, so what I can
> tell you is what I usually do. I take my fasta sequence and use PyMOL to
> construct the PDB. Then I'm able to feed the PDB to pdb2gmx.
>
> *I'm sure there are a million different ways of doing this, given that
> there are so many different protein modelling tools out there.*
>
> Here's one example using Histatin 5.
>
> - On PyMOL's command line type the following (without the quotation marks):
> "for aa in "AKRHHGYKRKFH": cmd._alt(string.lower(aa))"
>
> - This builds a fully stretched Histatin 5 3D model which can be exported
> as PDB.
>
> - Make sure to use "-ignh" on pdb2gmx, as the resulting hydrogen atom names
> are usually incompatible with the force fields I routinely use.
>
> - It's also a good idea to use "-renum" on pdb2gmx as for some reason PyMOL
> exports the PDB with residue numberings starting from no. 2.
>
> Cheers,
> João
>
>
> On Fri, Apr 8, 2016 at 4:14 AM, YanhuaOuyang <15901283...@163.com> wrote:
>
> > Hi, I have a sequence of an intrinsically disordered protein, I have no
> > idea how to start my REMD with gromacs. e.g. how to convert my sequence
> > into a pdb file

Re: [gmx-users] REMD of IDPs

2016-04-08 Thread João Henriques
One small remark to Micholas' email:

- Make sure the simulation box is big enough to allow the IDP to fully
stretch without interacting with its periodic image(s). This is non-trivial
if you build your system from a random coil. That's why I start from a
fully stretched conformation instead of a more representative conformation
of the system. Much easier to control and the time it takes to get to a
"meaningful" conformation is minimal.

/J
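
To put a rough number on "big enough": a fully stretched chain approaches its contour length, so a back-of-the-envelope estimate of the cubic box edge is the contour length plus twice the nonbonded cutoff. The 0.38 nm per residue (approximate Calpha-Calpha spacing) and the cutoff below are illustrative assumptions, not values from this thread:

```python
def min_box_edge(n_residues, nm_per_residue=0.38, cutoff_nm=1.0):
    """Rough cubic box edge (nm) so a stretched chain cannot see its image."""
    contour = n_residues * nm_per_residue  # approximate contour length
    return contour + 2.0 * cutoff_nm       # cutoff padding on both sides

edge = min_box_edge(24)  # hypothetical 24-residue IDP -> ~11.1 nm
```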



On Fri, Apr 8, 2016 at 2:10 PM, Smith, Micholas D. <smit...@ornl.gov> wrote:

> Dear Yanhua,
>
> Converting a sequence into a structure is itself an "open" problem in
> computational biology/biophysics. There are ways to generate potential
> structures if you also happen to have some restraints from NMR or other
> experiments (small-angle scattering or CD-Spectra) noted in the literature,
> but getting to the "native" fold is very challenging. One program that
> tries to address the sequence to structure problem is Rosetta (
> http://robetta.bakerlab.org/ ).
>
> If you have a short IDP fragment (less than 20 residues), one thing you
> can do is use something like Schrodinger's Maestro program (it's free from
> their webpage www.schrodinger.com) and use the molecule builder to "grow"
> the chain as a random coil (random phi-psi placement), save the PDB from it
> and then run MD at high temp to relax the structure into a potential
> starting structure. If it is longer, the IDP may have small structural
> segments (the chain is dominated by disorder but may have short-lived,
> meta-stable, secondary structure regions) in which case you can either try
> to build the molecule with a corresponding secondary structure distribution
> (using Maestro) or try using Rosetta and refine with energy minimization.
>
> Good Luck!
>
> ===
> Micholas Dean Smith, PhD.
> Post-doctoral Research Associate
> University of Tennessee/Oak Ridge National Laboratory
> Center for Molecular Biophysics
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of João
> Henriques <joao.henriques.32...@gmail.com>
> Sent: Friday, April 08, 2016 3:51 AM
> To: Discussion list for GROMACS users
> Subject: Re: [gmx-users] REMD of IDPs
>
> Dear Yanhua,
>
> To my knowledge (prior to gromacs 5.X at least), there are no gromacs
> tools able to turn a sequence into a PDB. The user must take care of that
> pre-processing on his/her own. I work with IDPs quite a lot, so what I can
> tell you is what I usually do. I take my fasta sequence and use PyMOL to
> construct the PDB. Then I'm able to feed the PDB to pdb2gmx.
>
> *I'm sure there are a million different ways of doing this, given that
> there are so many different protein modelling tools out there.*
>
> Here's one example using Histatin 5.
>
> - On PyMOL's command line type the following (without the quotation marks):
> "for aa in "AKRHHGYKRKFH": cmd._alt(string.lower(aa))"
>
> - This builds a fully stretched Histatin 5 3D model which can be exported
> as PDB.
>
> - Make sure to use "-ignh" on pdb2gmx, as the resulting hydrogen atom names
> are usually incompatible with the force fields I routinely use.
>
> - It's also a good idea to use "-renum" on pdb2gmx as for some reason PyMOL
> exports the PDB with residue numberings starting from no. 2.
>
> Cheers,
> João
>
>
> On Fri, Apr 8, 2016 at 4:14 AM, YanhuaOuyang <15901283...@163.com> wrote:
>
> > Hi, I have a sequence of an intrinsically disordered protein, I have no
> > idea how to start my REMD with gromacs. e.g. how to convert my sequence
> > into a pdb file

Re: [gmx-users] REMD of IDPs

2016-04-08 Thread Smith, Micholas D.
Dear Yanhua,

Converting a sequence into a structure is itself an "open" problem in 
computational biology/biophysics. There are ways to generate potential 
structures if you also happen to have some restraints from NMR or other 
experiments (small-angle scattering or CD-Spectra) noted in the literature, but 
getting to the "native" fold is very challenging. One program that tries to 
address the sequence to structure problem is Rosetta ( 
http://robetta.bakerlab.org/ ). 

If you have a short IDP fragment (less than 20 residues), one thing you can do 
is use something like Schrodinger's Maestro program (it's free from their 
webpage www.schrodinger.com) and use the molecule builder to "grow" the chain 
as a random coil (random phi-psi placement), save the PDB from it and then run 
MD at high temp to relax the structure into a potential starting structure. If 
it is longer, the IDP may have small structural segments (the chain is 
dominated by disorder but may have short-lived, meta-stable, secondary 
structure regions) in which case you can either try to build the molecule with 
a corresponding secondary structure distribution (using Maestro) or try using 
Rosetta and refine with energy minimization.

Good Luck! 

===
Micholas Dean Smith, PhD.
Post-doctoral Research Associate
University of Tennessee/Oak Ridge National Laboratory
Center for Molecular Biophysics


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
<gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of João Henriques 
<joao.henriques.32...@gmail.com>
Sent: Friday, April 08, 2016 3:51 AM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] REMD of IDPs

Dear Yanhua,

To my knowledge (prior to gromacs 5.X at least), there are no gromacs
tools able to turn a sequence into a PDB. The user must take care of that
pre-processing on his/her own. I work with IDPs quite a lot, so what I can
tell you is what I usually do. I take my fasta sequence and use PyMOL to
construct the PDB. Then I'm able to feed the PDB to pdb2gmx.

*I'm sure there are a million different ways of doing this, given that
there are so many different protein modelling tools out there.*

Here's one example using Histatin 5.

- On PyMOL's command line type the following (without the quotation marks):
"for aa in "AKRHHGYKRKFH": cmd._alt(string.lower(aa))"

- This builds a fully stretched Histatin 5 3D model which can be exported
as PDB.

- Make sure to use "-ignh" on pdb2gmx, as the resulting hydrogen atom names
are usually incompatible with the force fields I routinely use.

- It's also a good idea to use "-renum" on pdb2gmx as for some reason PyMOL
exports the PDB with residue numberings starting from no. 2.

Cheers,
João


On Fri, Apr 8, 2016 at 4:14 AM, YanhuaOuyang <15901283...@163.com> wrote:

> Hi, I have a sequence of an intrinsically disordered protein, I have no
> idea how to start my REMD with gromacs. e.g. how to convert my sequence
> into a pdb file

Re: [gmx-users] REMD of IDPs

2016-04-08 Thread João Henriques
Dear Yanhua,

To my knowledge (prior to gromacs 5.X at least), there are no gromacs
tools able to turn a sequence into a PDB. The user must take care of that
pre-processing on his/her own. I work with IDPs quite a lot, so what I can
tell you is what I usually do. I take my fasta sequence and use PyMOL to
construct the PDB. Then I'm able to feed the PDB to pdb2gmx.

*I'm sure there are a million different ways of doing this, given that
there are so many different protein modelling tools out there.*

Here's one example using Histatin 5.

- On PyMOL's command line type the following (without the quotation marks):
"for aa in "AKRHHGYKRKFH": cmd._alt(string.lower(aa))"

- This builds a fully stretched Histatin 5 3D model which can be exported
as PDB.

- Make sure to use "-ignh" on pdb2gmx, as the resulting hydrogen atom names
are usually incompatible with the force fields I routinely use.

- It's also a good idea to use "-renum" on pdb2gmx as for some reason PyMOL
exports the PDB with residue numberings starting from no. 2.
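
Since PyMOL will try to build whatever letters it receives, a quick sanity check of the sequence before running the build loop above can save a confusing failure. A sketch (the helper name is made up for illustration):

```python
VALID_AA = set("ACDEFGHIKLMNPQRSTVWY")  # the 20 standard one-letter codes

def check_sequence(seq):
    """Upper-case a one-letter sequence, rejecting non-standard residues."""
    bad = sorted(set(aa for aa in seq.upper() if aa not in VALID_AA))
    if bad:
        raise ValueError("non-standard residues: %s" % ", ".join(bad))
    return seq.upper()

seq = check_sequence("AKRHHGYKRKFH")  # the Histatin 5 fragment from above
```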

Cheers,
João


On Fri, Apr 8, 2016 at 4:14 AM, YanhuaOuyang <15901283...@163.com> wrote:

> Hi, I have a sequence of an intrinsically disordered protein, I have no
> idea how to start my REMD with gromacs. e.g. how to convert my sequence
> into a pdb file

[gmx-users] REMD of IDPs

2016-04-07 Thread YanhuaOuyang
Hi, I have the sequence of an intrinsically disordered protein, but I have no 
idea how to start my REMD with GROMACS, e.g. how to convert my sequence into a 
PDB file.


[gmx-users] REMD system blowing up

2015-09-28 Thread NISHA Prakash
Hi all,

I would like to know if there is a way to figure out which of the replica
is exploding during REMD simulation?
I am running REMD with 54 replicas and the system is exploding, producing
just the step14495b.pdb and step14495c.pdb files.
Does this mean that just one replica is exploding?
Does this also have to do with the temperature?
The equilibration was carried out for 600 ps and the individual replicas
have no issues.

Awaiting response.

Thanks!

Nisha
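
If each replica runs in its own working directory (sim0, sim1, ...; this layout is an assumption for illustration, since the message does not say how the job is organised), the step*.pdb crash files identify the offending replica directly. A sketch of scanning for them:

```python
import glob
import os

def crashed_replicas(base=".", pattern="sim*"):
    """Map replica directory name -> its step*.pdb crash files, if any."""
    hits = {}
    for d in sorted(glob.glob(os.path.join(base, pattern))):
        pdbs = sorted(glob.glob(os.path.join(d, "step*.pdb")))
        if pdbs:
            hits[os.path.basename(d)] = pdbs
    return hits
```

Replicas that appear in the result are the ones whose dynamics blew up; their temperatures can then be read off the corresponding .mdp or .log files.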


Re: [gmx-users] REMD and distance restraints problem in gmx 4.6.7

2015-09-18 Thread Mark Abraham
Hi,

On Fri, Sep 18, 2015 at 6:27 AM Christopher Neale <
chris.ne...@alum.utoronto.ca> wrote:

> Dear Users:
>
> I have a system with many distance restraints, designed to maintain
> helical character, e.g.:
> [ distance_restraints ]
> 90 33 1 1 2 2.745541e-01 3.122595e-01 999 1.0
> 97 57 1 2 2 2.876300e-01 2.892921e-01 999 1.0
> 114 73 1 3 2 2.704403e-01 2.929642e-01 999 1.0
> ...
>
> Distance restraints are properly turned on in the .mdp file with:
> disre=simple
> disre-fc=1000
>
> The run works fine on a single node (gmx 4.6.7 here and for all that
> follows):
> mdrun -nt 24 ...
>
> The run also works fine on two nodes:
> ibrun -np 48 mdrun_mpi ...
>
> However, if I try to do temperature replica exchange (REMD), with two
> replicas and two nodes like this:
> ibrun -np 48 mdrun_mpi -multi 2 -replex 200 ...
>
> then I get the error message:
> Fatal error:
> Time or ensemble averaged or multiple pair distance restraints do not work
> (yet) with domain decomposition, use particle decomposition (mdrun option
> -pd)
>

Right. Unfortunately, the ensemble-restraints code dates from some of the
very early days of GROMACS. It uses mdrun -multi and (IIRC) is hard-coded
to be on if its topology and runtime conditions are satisfied. That is, you
can't run non-ensemble distance restraints with normal mdrun -multi. So
when REMD also uses mdrun -multi, things get confused.

Aside: I tried particle decomposition, but when I do that without the REMD,
> simply running the 48-core job that worked fine with domain decomposition,
> I get LINCS errors and quickly a crash (note that without -pd I have 25 ns
> and counting of run without error):
> Step 0, time 0 (ps)  LINCS WARNING in simulation 0
> relative constraint deviation after LINCS:
> rms 5.774043, max 48.082966 (between atoms 21554 and 21555)
> bonds that rotated more than 30 degrees:
>  atom 1 atom 2  angle  previous, current, constraint length
> ...
>
> So I am stuck with an error message that is not entirely helpful because
> (a) the -pd option does not solve the issue even without REMD and also (b)
> the issue seems to be related to REMD (because without REMD I can run on
> multiple nodes) though that is not mentioned in the error message.
>

Yes, the ensemble restraints code is issuing that message. It doesn't know
that REMD is a thing.


> I note that Mark Abraham mentioned here:
> http://redmine.gromacs.org/issues/1117 that:
> "You can use MPI, you just can't have more than one domain (= MPI rank)
> per simulation. For a multi-simulation with distance restraints and not
> replica-exchange, you thus must have as many MPI ranks as simulations, so
> that each simulation has one rank and thus one domain."
>
> I have trouble interpreting this, as I have always thought that running
> MPI across multiple nodes requires multiple domains (apparently = MPI
> ranks), so I am confused as to why that is possible without REMD but gets
> messy with REMD.
>

I'm not sure why I mentioned REMD, but the topic there is ensemble
restraints.


> Final note: I am not trying to do "Time or ensemble averaged" distance
> restraints, and I think that I am not trying to do "multiple pair distance
> restraints", unless that simply means having more than one  simple distance
> restraint. So at the very least I think that the error message that I get
> is confusing.
>

Unfortunately that's thanks to the magic helpfulness of the feature turning
itself on (IIRC). Your setting of type=2 would probably stop the feature
doing anything, but the check doesn't know that.


> If the solution or source of error is obvious then sorry.. maybe I just
> don't get MPI well enough.
>

No, you understand well enough. Some of the code is not good enough any
more.

It occurs to me now that a one line hack that does "oh, so you're running
mdrun -replex? you clearly don't want ensemble restraints" might work. I'll
see what I can find (probably not before Monday).

Mark

Thank you for your suggestions,
> Chris.
>


Re: [gmx-users] REMD and distance restraints problem in gmx 4.6.7

2015-09-18 Thread Christopher Neale
Dear Mark:

you are correct. If I get rid of the gmx_fatal call then everything seems to 
work just fine.

In the file src/gmxlib/disre.c, I got rid of the following code, which starts 
on line 147 of gmx 4.6.7:

    if (dd->dr_tau != 0 || ir->eDisre == edrEnsemble || cr->ms != NULL ||
        dd->nres != dd->npair)
    {
        gmx_fatal(FARGS, "Time or ensemble averaged or multiple pair distance restraints do not work (yet) with domain decomposition, use particle decomposition (mdrun option -pd)");
    }

I tested by taking the temperature way up and running with and without the 
distance restraints in a 2-replica REMD simulation. It's a short test and I'll 
report back if other issues come up later, but for now things seem to be going 
as desired.

Note 1: it's the "cr->ms != NULL" condition that leads to the gmx_fatal call 
when using -multi -replex and distance restraints.

Note 2: somebody has clearly already thought about this because at line 191 of 
the same file (src/gmxlib/disre.c), REMD is taken into account:
if (cr && cr->ms != NULL && ptr != NULL && !bIsREMD)

Note 3: using -multidir instead of -multi does not on its own solve the issue 
that I originally reported

Thank you for your help
Chris.


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
<gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark Abraham 
<mark.j.abra...@gmail.com>
Sent: 18 September 2015 04:11
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] REMD and distance restraints problem in gmx 4.6.7

Hi,

On Fri, Sep 18, 2015 at 6:27 AM Christopher Neale <
chris.ne...@alum.utoronto.ca> wrote:

> Dear Users:
>
> I have a system with many distance restraints, designed to maintain
> helical character, e.g.:
> [ distance_restraints ]
> 90 33 1 1 2 2.745541e-01 3.122595e-01 999 1.0
> 97 57 1 2 2 2.876300e-01 2.892921e-01 999 1.0
> 114 73 1 3 2 2.704403e-01 2.929642e-01 999 1.0
> ...
>
> Distance restraints are properly turned on in the .mdp file with:
> disre=simple
> disre-fc=1000
>
> The run works fine on a single node (gmx 4.6.7 here and for all that
> follows):
> mdrun -nt 24 ...
>
> The run also works fine on two nodes:
> ibrun -np 48 mdrun_mpi ...
>
> However, if I try to do temperature replica exchange (REMD), with two
> replicas and two nodes like this:
> ibrun -np 48 mdrun_mpi -multi 2 -replex 200 ...
>
> then I get the error message:
> Fatal error:
> Time or ensemble averaged or multiple pair distance restraints do not work
> (yet) with domain decomposition, use particle decomposition (mdrun option
> -pd)
>

Right. Unfortunately, the ensemble-restraints code dates from some of the
very early days of GROMACS. It uses mdrun -multi and (IIRC) is hard-coded
to be on if its topology and runtime conditions are satisfied. That is, you
can't run non-ensemble distance restraints with normal mdrun -multi. So
when REMD also uses mdrun -multi, things get confused.

Aside: I tried particle decomposition, but when I do that without the REMD,
> simply running the 48-core job that worked fine with domain decomposition,
> I get LINCS errors and quickly a crash (note that without -pd I have 25 ns
> and counting of run without error):
> Step 0, time 0 (ps)  LINCS WARNING in simulation 0
> relative constraint deviation after LINCS:
> rms 5.774043, max 48.082966 (between atoms 21554 and 21555)
> bonds that rotated more than 30 degrees:
>  atom 1 atom 2  angle  previous, current, constraint length
> ...
>
> So I am stuck with an error message that is not entirely helpful because
> (a) the -pd option does not solve the issue even without REMD and also (b)
> the issue seems to be related to REMD (because without REMD I can run on
> multiple nodes) though that is not mentioned in the error message.
>

Yes, the ensemble restraints code is issuing that message. It doesn't know
that REMD is a thing.


> I note that Mark Abraham mentioned here:
> http://redmine.gromacs.org/issues/1117 that:
> "You can use MPI, you just can't have more than one domain (= MPI rank)
> per simulation. For a multi-simulation with distance restraints and not
> replica-exchange, you thus must have as many MPI ranks as simulations, so
> that each simulation has one rank and thus one domain."
>
> I have trouble interpreting this, as I have always thought that running
> MPI across multiple nodes requires multiple domains (apparently = MPI
> ranks), so I am confused as to why that is possible without REMD but gets
> messy with REMD.
>

I'm not sure why I mentioned REMD, but the topic there is ensemble
restraints.


> Final note: I am not trying to do "Time or ensemble averaged" distance
> restraints, and I

[gmx-users] REMD and distance restraints problem in gmx 4.6.7

2015-09-17 Thread Christopher Neale
Dear Users:

I have a system with many distance restraints, designed to maintain helical 
character, e.g.:
[ distance_restraints ]
90 33 1 1 2 2.745541e-01 3.122595e-01 999 1.0
97 57 1 2 2 2.876300e-01 2.892921e-01 999 1.0
114 73 1 3 2 2.704403e-01 2.929642e-01 999 1.0
...

Distance restraints are properly turned on in the .mdp file with:
disre=simple
disre-fc=1000

The run works fine on a single node (gmx 4.6.7 here and for all that follows):
mdrun -nt 24 ...

The run also works fine on two nodes:
ibrun -np 48 mdrun_mpi ...

However, if I try to do temperature replica exchange (REMD), with two replicas 
and two nodes like this:
ibrun -np 48 mdrun_mpi -multi 2 -replex 200 ...

then I get the error message:
Fatal error:
Time or ensemble averaged or multiple pair distance restraints do not work 
(yet) with domain decomposition, use particle decomposition (mdrun option -pd)

Aside: I tried particle decomposition, but when I do that without the REMD, 
simply running the 48-core job that worked fine with domain decomposition, I 
get LINCS errors and quickly a crash (note that without -pd I have 25 ns and 
counting of run without error):
Step 0, time 0 (ps)  LINCS WARNING in simulation 0
relative constraint deviation after LINCS:
rms 5.774043, max 48.082966 (between atoms 21554 and 21555)
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle  previous, current, constraint length
...

So I am stuck with an error message that is not entirely helpful because (a) 
the -pd option does not solve the issue even without REMD and also (b) the 
issue seems to be related to REMD (because without REMD I can run on multiple 
nodes) though that is not mentioned in the error message.

I note that Mark Abraham mentioned here: http://redmine.gromacs.org/issues/1117 
that:
"You can use MPI, you just can't have more than one domain (= MPI rank) per 
simulation. For a multi-simulation with distance restraints and not 
replica-exchange, you thus must have as many MPI ranks as simulations, so that 
each simulation has one rank and thus one domain."

I have trouble interpreting this, as I have always thought that running MPI 
across multiple nodes requires multiple domains (apparently = MPI ranks), so I 
am confused as to why that is possible without REMD but gets messy with REMD.

Final note: I am not trying to do "Time or ensemble averaged" distance
restraints, and I think that I am not trying to do "multiple pair distance
restraints", unless that simply means having more than one simple distance
restraint. So at the very least I think that the error message I get is
confusing.

If the solution or source of the error is obvious then sorry; maybe I just
don't understand MPI well enough.

Thank you for your suggestions,
Chris.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] REMD temperature trajectory

2015-08-28 Thread Nawel Mele
Dear Gromacs user,

I performed a REMD simulation and I want to analyse my results per temperature.
I am interested in looking at the trajectories for the lowest and the
highest temperatures.
I am used to performing REMD with Amber, and I realised that Amber
exchanges temperatures during the simulation, whereas Gromacs returns a
discontinuous trajectory for each temperature.
So my question is: do I need to use the demux.pl script to get a
temperature trajectory, or can I just create a trajectory at the
temperature of interest from the log output file?
For example, if I am interested in the lowest temperature, do I just need
to analyse the prod0.log file?
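For what it's worth, the usual route is indeed the demux.pl script; a command
sketch follows (file names like prod0.log and prod*.xtc are placeholders for
your own outputs, and the tool names assume a GROMACS 4.x/5.0-style install):

```shell
# Sketch of the usual demultiplexing workflow (file names are placeholders):
# demux.pl reads the exchange records from one replica's log and writes
# replica_index.xvg and replica_temp.xvg
demux.pl prod0.log
# trjcat then reassembles continuous per-temperature trajectories from the
# discontinuous per-replica ones, one output file per temperature
trjcat -f prod*.xtc -demux replica_index.xvg
```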

Another question: the output replica_temp.xvg from demux.pl looks like
this:

0   0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
2   1  0  2  3  5  4  6  7  8  9 10 11 13 12 14 15 16 17 18 19 21 20 23 22 24 25 27 26 28 29 31 30 33 32 34 35 37 36 39 38 41 40 43 42 44 45 46 47
4   2  0  1  3  6  4  5  8  7 10  9 11 13 12 14 15 16 17 18 20 22 19 24 21 23 26 28 25 27 30 32 29 33 31 34 35 37 36 40 38 41 39 44 42 43 46 45 47
6   3  1  0  2  6  4  5  9  7 11  8 10 12 13 15 14 16 17 19 20 22 18 24 21 23 27 29 25 26 30 32 28 33 31 34 35 37 36 41 39 40 38 45 43 42 47 44 46


Does that mean that, except for the first column, each column corresponds to
a temperature? And so from that we can follow the trajectory of the replicas
for a temperature of interest?
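Assuming the layout described above (first column = time, column i+1 = the
replica currently at temperature index i), a minimal parsing sketch:

```python
def replica_at_temperature(lines, temp_index):
    """Return (time, replica) pairs for one temperature column of a
    demux.pl replica_temp.xvg (layout assumed: column 0 is time,
    column i+1 is the replica at temperature index i)."""
    series = []
    for line in lines:
        if not line.strip() or line.startswith(("#", "@")):
            continue  # skip blanks and xmgrace header lines
        fields = line.split()
        series.append((float(fields[0]), int(fields[1 + temp_index])))
    return series

# Tiny 4-replica example in the same layout:
sample = ["0 0 1 2 3", "2 1 0 2 3", "4 2 0 1 3"]
print(replica_at_temperature(sample, 0))  # -> [(0.0, 0), (2.0, 1), (4.0, 2)]
```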

Many thanks in advance

Nawel


-- 

Nawel Mele, PhD Research Student

Jonathan Essex Group, School of Chemistry

University of Southampton,  Highfield

Southampton, SO17 1BJ


Re: [gmx-users] REMD mdrun_mpi error

2015-06-23 Thread Nawel Mele
Hi Mark,

I tried to run an individual tpr file and it crashed:

Double sids (0, 1) for atom 26
Double sids (0, 1) for atom 27
Double sids (0, 1) for atom 28
Double sids (0, 1) for atom 29
Double sids (0, 1) for atom 30
Double sids (0, 1) for atom 31
Double sids (0, 1) for atom 32
Double sids (0, 1) for atom 33
Double sids (0, 1) for atom 34
Double sids (0, 1) for atom 35
Double sids (0, 1) for atom 36
Double sids (0, 1) for atom 37
Double sids (0, 1) for atom 38
Double sids (0, 1) for atom 39
Double sids (0, 1) for atom 40

---
Program mdrun, VERSION 4.6.5
Source code file:
/local/software/gromacs/4.6.5/source/gromacs-4.6.5/src/gmxlib/invblock.c,
line: 99

Fatal error:
Double entries in block structure. Item 53 is in blocks 1 and 0
 Cannot make an unambiguous inverse block.


To create my tpr files I used a bash script like this:

#!/bin/bash -f
nrep=`wc temperatures.dat | awk '{print $1}'`
echo $nrep
count=0
count2=-1
for TEMP in `cat temperatures.dat`
do
   let count2+=1
   REP=`printf %03d $count2`
   REPBIS=`printf %d $count2`
   echo TEMPERATURE: $TEMP K == FILE: nvt_$REP.mdp
   sed s/X/$TEMP/g nvt.mdp > nvt_$REP.mdp
   grompp -f nvt_$REP.mdp -c min.gro -p topol.top -o eq_$REPBIS.tpr -maxwarn 1
   rm -f temp
done
echo N REPLICAS = $nrep
echo Done.

Nawel


2015-06-23 11:47 GMT+01:00 Mark Abraham mark.j.abra...@gmail.com:

 Hi,

 Do your individual replica .tpr files run correctly on their own?

 Mark

 On Mon, Jun 22, 2015 at 3:35 PM Nawel Mele nawel.m...@gmail.com wrote:

  Dear gromacs users,
 
  I am trying to simulate a ligand using REMD method in explicit solvent
 with
  the charmm force field. When I try to equilibrate my system I get this
  error :
 
  Double sids (0, 1) for atom 26
  Double sids (0, 1) for atom 27
  Double sids (0, 1) for atom 28
  Double sids (0, 1) for atom 29
  Double sids (0, 1) for atom 30
  Double sids (0, 1) for atom 31
  Double sids (0, 1) for atom 32
  Double sids (0, 1) for atom 33
  Double sids (0, 1) for atom 34
  Double sids (0, 1) for atom 35
  Double sids (0, 1) for atom 36
  Double sids (0, 1) for atom 37
  Double sids (0, 1) for atom 38
  Double sids (0, 1) for atom 39
  Double sids (0, 1) for atom 40
 
  ---
  Program mdrun_mpi, VERSION 4.6.5
  Source code file:
  /local/software/gromacs/4.6.5/source/gromacs-4.6.5/src/gmxlib/invblock.c,
  line: 99
 
  Fatal error:
  Double entries in block structure. Item 53 is in blocks 1 and 0
   Cannot make an unambiguous inverse block.
  For more information and tips for troubleshooting, please check the
 GROMACS
  website at http://www.gromacs.org/Documentation/Errors
 
 
 

Re: [gmx-users] REMD mdrun_mpi error

2015-06-23 Thread Mark Abraham
Hi,

Do your individual replica .tpr files run correctly on their own?

Mark

On Mon, Jun 22, 2015 at 3:35 PM Nawel Mele nawel.m...@gmail.com wrote:

 Dear gromacs users,

 I am trying to simulate a ligand using REMD method in explicit solvent with
 the charmm force field. When I try to equilibrate my system I get this
 error :

 Double sids (0, 1) for atom 26
 Double sids (0, 1) for atom 27
 Double sids (0, 1) for atom 28
 Double sids (0, 1) for atom 29
 Double sids (0, 1) for atom 30
 Double sids (0, 1) for atom 31
 Double sids (0, 1) for atom 32
 Double sids (0, 1) for atom 33
 Double sids (0, 1) for atom 34
 Double sids (0, 1) for atom 35
 Double sids (0, 1) for atom 36
 Double sids (0, 1) for atom 37
 Double sids (0, 1) for atom 38
 Double sids (0, 1) for atom 39
 Double sids (0, 1) for atom 40

 ---
 Program mdrun_mpi, VERSION 4.6.5
 Source code file:
 /local/software/gromacs/4.6.5/source/gromacs-4.6.5/src/gmxlib/invblock.c,
 line: 99

 Fatal error:
 Double entries in block structure. Item 53 is in blocks 1 and 0
  Cannot make an unambiguous inverse block.
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors





 Does anyone have seen this issue before??

 Many thanks,
 --

 Nawel Mele, PhD Research Student

 Jonathan Essex Group, School of Chemistry

 University of Southampton,  Highfield

 Southampton, SO17 1BJ



[gmx-users] REMD with different structures

2015-06-23 Thread ruchi lohia
Hi


I am trying to do NVT REMD simulations with gromacs. I have 60 replicas, and
each of them has a different starting structure. The starting structures have
the same number of atoms but slightly different volumes and pressures. I was
able to run these simulations, but I want to know whether having different
volumes and pressures affects the exchange probability and, if it does,
whether gromacs accounts for it in REMD simulations. Please suggest a method
to verify this.
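For orientation, a textbook sketch of the temperature-REMD acceptance
criterion (not code taken from GROMACS): in NVT the swap probability depends
only on the potential energies and temperatures, while in NPT an extra p*dV
term enters, which is exactly where differing volumes and pressures would
show up.

```python
import math

KB = 0.0083144621  # Boltzmann constant in GROMACS units, kJ/(mol K)

def accept_probability(u_i, u_j, t_i, t_j):
    """Metropolis acceptance for swapping configurations between replicas
    at temperatures t_i, t_j with potential energies u_i, u_j (NVT)."""
    delta = (1.0 / (KB * t_i) - 1.0 / (KB * t_j)) * (u_j - u_i)
    return min(1.0, math.exp(-delta))

# Equal energies always swap:
print(accept_probability(-1000.0, -1000.0, 300.0, 310.0))  # -> 1.0
```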

-- 
Regards

Ruchi Lohia
Graduate Student


[gmx-users] REMD mdrun_mpi error

2015-06-22 Thread Nawel Mele
Dear gromacs users,

I am trying to simulate a ligand using REMD method in explicit solvent with
the charmm force field. When I try to equilibrate my system I get this
error :

Double sids (0, 1) for atom 26
Double sids (0, 1) for atom 27
Double sids (0, 1) for atom 28
Double sids (0, 1) for atom 29
Double sids (0, 1) for atom 30
Double sids (0, 1) for atom 31
Double sids (0, 1) for atom 32
Double sids (0, 1) for atom 33
Double sids (0, 1) for atom 34
Double sids (0, 1) for atom 35
Double sids (0, 1) for atom 36
Double sids (0, 1) for atom 37
Double sids (0, 1) for atom 38
Double sids (0, 1) for atom 39
Double sids (0, 1) for atom 40

---
Program mdrun_mpi, VERSION 4.6.5
Source code file:
/local/software/gromacs/4.6.5/source/gromacs-4.6.5/src/gmxlib/invblock.c,
line: 99

Fatal error:
Double entries in block structure. Item 53 is in blocks 1 and 0
 Cannot make an unambiguous inverse block.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors



My mdp input file looks like this:

title       = CHARMM compound NVT equilibration
define      = -DPOSRES        ; position restrain the protein
; Run parameters
integrator  = sd              ; leap-frog stochastic dynamics integrator
nsteps      = 100             ; 2 * 100 = 100 ps
dt          = 0.002           ; 2 fs
; Output control
nstxout     = 500             ; save coordinates every 0.2 ps
nstvout     = 10              ; save velocities every 0.2 ps
nstenergy   = 500             ; save energies every 0.2 ps
nstlog      = 500             ; update log file every 0.2 ps
; Bond parameters
continuation = no             ; first dynamics run
constraint_algorithm = SHAKE  ; holonomic constraints
constraints = h-bonds         ; all bonds (even heavy atom-H bonds) constrained
shake-tol   = 0.1             ; relative tolerance for SHAKE
; Neighborsearching
ns_type     = grid            ; search neighboring grid cells
nstlist     = 5               ; 10 fs
rlist       = 1.0             ; short-range neighborlist cutoff (in nm)
rcoulomb    = 1.0             ; short-range electrostatic cutoff (in nm)
rvdw        = 1.0             ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype = PME             ; Particle Mesh Ewald for long-range electrostatics
pme_order   = 4               ; interpolation order for PME; 4 equals cubic interpolation
fourierspacing = 0.16         ; grid spacing for FFT
; Temperature coupling is on
;tcoupl     = V-rescale       ; modified Berendsen thermostat
tc-grps     = LIG SOL         ; two coupling groups - more accurate
tau_t       = 1.0   1.0       ; time constant, in ps
ref_t       = X X             ; reference temperature, one for each group, in K
; Langevin dynamics
bd-fric     = 0               ; Brownian dynamics friction coefficient
ld-seed     = -1              ; pseudo random seed is used
; Pressure coupling is off
pcoupl      = no              ; no pressure coupling in NVT
; Periodic boundary conditions
pbc         = xyz             ; 3-D PBC
; Dispersion correction
DispCorr    = EnerPres        ; account for cut-off vdW scheme
; Velocity generation
gen_vel     = yes             ; assign velocities from Maxwell distribution
gen_temp    = 0.0             ; temperature for Maxwell distribution
gen_seed    = -1              ; generate a random seed


And my input file to run it in parallel looks like this:

#!/bin/bash
#PBS -l nodes=3:ppn=16
#PBS -l walltime=00:10:00
#PBS -o zzz.qsub.out
#PBS -e zzz.qsub.err
module load openmpi
module load gromacs/4.6.5
mpirun -np 48 mdrun_mpi -s eq_.tpr -multi 48 -replex 10 > faillog-X.log


Has anyone seen this issue before?

Many thanks,
-- 

Nawel Mele, PhD Research Student

Jonathan Essex Group, School of Chemistry

University of Southampton,  Highfield

Southampton, SO17 1BJ


[gmx-users] REMD exchange probabilities

2015-03-08 Thread Neha Gandhi
Dear list,

Using an exchange probability of 0.25 and a temperature range of 293-370 K, I
calculated the number of replicas using the server. However, when I did the
first run, exchanging replicas every 500 steps (1 ps), the exchange
probabilities do not make sense, in particular for replicas 15 and 16:
replica 15 has a low exchange ratio of 0.12 while replica 16 has a high
exchange ratio of 0.55.

Repl  average probabilities:
Repl    0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20   21   22   23   24   25   26   27   28   29   30   31   32   33   34   35   36   37   38   39   40   41   42   43   44   45   46   47
Repl  .28  .28  .28  .28  .29  .28  .29  .29  .28  .29  .28  .28  .29
.29  .29  .12  .55  .29  .29  .30  .30  .29  .29  .26  .32  .31  .30  .30
.30  .30  .30  .31  .31  .31  .31  .31  .31  .31  .31  .31  .31  .31  .32
.32  .32  .32  .33
Repl  number of exchanges:
Repl    0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20   21   22   23   24   25   26   27   28   29   30   31   32   33   34   35   36   37   38   39   40   41   42   43   44   45   46   47
Repl 2901 2954 2873 3017 3038 2910 3009 2993 2934 3002 2981 2999 2927
3038 3059 1229 5757 3056 3100 3136 3054 3053 3109 2743  3166 3097 3185
3161 3189 3133 3226 3261 3242 3229 3205 3249 3227 3221 3222 3326 3303 3309
3320 3373 3346 3474
Repl  average number of exchanges:
Repl    0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20   21   22   23   24   25   26   27   28   29   30   31   32   33   34   35   36   37   38   39   40   41   42   43   44   45   46   47
Repl  .28  .28  .27  .29  .29  .28  .29  .29  .28  .29  .29  .29  .28
.29  .29  .12  .55  .29  .30  .30  .29  .29  .30  .26  .32  .30  .30  .30
.30  .31  .30  .31  .31  .31  .31  .31  .31  .31  .31  .31  .32  .32  .32
.32  .32  .32  .33


Below are the temperatures I have used. How do I manually edit temperatures
to get average exchange probabilities between 0.2-0.3?

ref_t= 293293; reference temperature, one for each
group, in K
ref_t= 294.51 294.51; reference temperature, one for each
group, in K
ref_t= 296.03 296.03; reference temperature, one for each
group, in K
ref_t= 297.56 297.56; reference temperature, one for each
group, in K
ref_t= 299.09 299.09; reference temperature, one for each
group, in K
ref_t= 300.63 300.63; reference temperature, one for each
group, in K
ref_t= 302.18 302.18; reference temperature, one for each
group, in K
ref_t= 303.73 303.73; reference temperature, one for each
group, in K
ref_t= 305.29 305.29; reference temperature, one for each
group, in K
ref_t= 306.86 306.86; reference temperature, one for each
group, in K
ref_t= 308.43 308.43; reference temperature, one for each
group, in K
ref_t= 310.01 310.01; reference temperature, one for each
group, in K
ref_t= 311.60 311.60; reference temperature, one for each
group, in K
ref_t= 313.19 313.19; reference temperature, one for each
group, in K
ref_t= 314.79 314.79; reference temperature, one for each
group, in K
ref_t= 316.40 316.40; reference temperature, one for each
group, in K
ref_t= 318.63 318.63; reference temperature, one for each
group, in K
ref_t= 319.63 319.63; reference temperature, one for each
group, in K
ref_t= 321.26 321.26; reference temperature, one for each
group, in K
ref_t= 322.89 322.89; reference temperature, one for each
group, in K
ref_t= 324.52 324.52; reference temperature, one for each
group, in K
ref_t= 326.17 326.17; reference temperature, one for each
group, in K
ref_t= 327.82 327.82; reference temperature, one for each
group, in K
ref_t= 329.49 329.49; reference temperature, one for each
group, in K
ref_t= 331.26 331.26; reference temperature, one for each
group, in K
ref_t= 332.86 332.86; reference temperature, one for each
group, in K
ref_t= 334.51 334.51; reference temperature, one for each
group, in K
ref_t= 336.20 336.20; reference temperature, one for each
group, in K
ref_t= 337.90 337.90; reference temperature, one for each
group, in K
ref_t= 339.61 339.61; reference temperature, one for each
group, in K
ref_t= 341.32 341.32; reference temperature, one for each
group, in K
ref_t= 343.04 343.04; reference temperature, one for each
group, in K
ref_t= 344.76 344.76; reference temperature, one for each
group, in K
ref_t= 346.49 346.49; reference temperature, one for each
group, in K
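One generic starting point for re-spacing the ladder (my suggestion, not
advice from this thread): rebuild it geometrically, which for roughly
constant heat capacity gives near-uniform acceptance ratios, and then
densify it by hand around the poorly exchanging pair (15/16 above).

```python
def geometric_ladder(t_min, t_max, n):
    """Temperatures with a constant ratio between neighbours; for roughly
    constant heat capacity this yields near-uniform exchange probabilities."""
    ratio = (t_max / t_min) ** (1.0 / (n - 1))
    return [round(t_min * ratio ** i, 2) for i in range(n)]

print(geometric_ladder(293.0, 370.0, 8))
```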

Re: [gmx-users] REMD: mdrun_mpi crash with segmentation fault (but mpi is working)

2015-02-10 Thread Justin Lemkul



On 2/10/15 7:35 AM, Felipe Villanelo wrote:

Absolutely nothing is written in the log file, just the citations



That indicates that the simulation systems are totally unstable and crash 
immediately.  Test by running each job individually (not as part of REMD) and 
see if you can do any diagnosis and troubleshooting based on 
http://www.gromacs.org/Documentation/Terminology/Blowing_Up.


-Justin


Felipe Villanelo Lizana
Bioquímico
Laboratorio de Biología Estructural y Molecular
Universidad de Chile

2015-02-03 10:01 GMT-03:00 Felipe Villanelo el.maest...@gmail.com:


Hi,

I trying to learn REMD following the tutorial on gromacs page
http://www.gromacs.org/Documentation/Tutorials/GROMACS_USA_Workshop_and_Conference_2013/An_introduction_to_replica_exchange_simulations%3A_Mark_Abraham,_Session_1B
 on
a 4-cores computer.
However when I try to use the command:
mpirun -np 4 mdrun_mpi -v -multidir equil[0123] (as the tutorial says)
the program crashed with the following error:
mpirun noticed that process rank 2 with PID 13013 on node debian exited on
signal 11 (Segmentation fault).

The mpi is running fine with the 4 cores if I run a simple gromacs
simulation (NPT equil) in the same machine.
So I think it is not a problem of mpi configuration (as I read in another
thread)

These with gromacs version is 5.0.2

If I try to run the same with an older version of gromacs (4.5.5) the
error is different (previously adjusting the options on the mdp file to
match changes in syntaxis betweeen versions):

[debian:23526] *** An error occurred in MPI_comm_size
[debian:23526] *** on communicator MPI_COMM_WORLD
[debian:23526] *** MPI_ERR_COMM: invalid communicator
[debian:23526] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

But this version also work fine with mpi using the 4 cores on a simple
simulation

Thanks
Bye

Felipe Villanelo Lizana
Bioquímico
Laboratorio de Biología Estructural y Molecular
Universidad de Chile



--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] REMD: mdrun_mpi crash with segmentation fault (but mpi is working)

2015-02-10 Thread Felipe Villanelo
Absolutely nothing is written in the log file, just the citations

Felipe Villanelo Lizana
Bioquímico
Laboratorio de Biología Estructural y Molecular
Universidad de Chile

2015-02-03 10:01 GMT-03:00 Felipe Villanelo el.maest...@gmail.com:

 Hi,

 I trying to learn REMD following the tutorial on gromacs page
 http://www.gromacs.org/Documentation/Tutorials/GROMACS_USA_Workshop_and_Conference_2013/An_introduction_to_replica_exchange_simulations%3A_Mark_Abraham,_Session_1B
  on
 a 4-cores computer.
 However when I try to use the command:
 mpirun -np 4 mdrun_mpi -v -multidir equil[0123] (as the tutorial says)
 the program crashed with the following error:
 mpirun noticed that process rank 2 with PID 13013 on node debian exited on
 signal 11 (Segmentation fault).

 The mpi is running fine with the 4 cores if I run a simple gromacs
 simulation (NPT equil) in the same machine.
 So I think it is not a problem of mpi configuration (as I read in another
 thread)

 These with gromacs version is 5.0.2

 If I try to run the same with an older version of gromacs (4.5.5) the
 error is different (previously adjusting the options on the mdp file to
 match changes in syntaxis betweeen versions):

 [debian:23526] *** An error occurred in MPI_comm_size
 [debian:23526] *** on communicator MPI_COMM_WORLD
 [debian:23526] *** MPI_ERR_COMM: invalid communicator
 [debian:23526] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

 But this version also work fine with mpi using the 4 cores on a simple
 simulation

 Thanks
 Bye

 Felipe Villanelo Lizana
 Bioquímico
 Laboratorio de Biología Estructural y Molecular
 Universidad de Chile



Re: [gmx-users] REMD: mdrun_mpi crash with segmentation fault (but mpi is working)

2015-02-06 Thread Mark Abraham
Hi,

What was the last thing written to the log files?

Mark

On Tue, Feb 3, 2015 at 2:01 PM, Felipe Villanelo el.maest...@gmail.com
wrote:

 Hi,

 I trying to learn REMD following the tutorial on gromacs page
 
 http://www.gromacs.org/Documentation/Tutorials/GROMACS_USA_Workshop_and_Conference_2013/An_introduction_to_replica_exchange_simulations%3A_Mark_Abraham,_Session_1B
 
 on
 a 4-cores computer.
 However when I try to use the command:
 mpirun -np 4 mdrun_mpi -v -multidir equil[0123] (as the tutorial says)
 the program crashed with the following error:
 mpirun noticed that process rank 2 with PID 13013 on node debian exited on
 signal 11 (Segmentation fault).

 The mpi is running fine with the 4 cores if I run a simple gromacs
 simulation (NPT equil) in the same machine.
 So I think it is not a problem of mpi configuration (as I read in another
 thread)

 These with gromacs version is 5.0.2

 If I try to run the same with an older version of gromacs (4.5.5) the error
 is different (previously adjusting the options on the mdp file to match
 changes in syntaxis betweeen versions):

 [debian:23526] *** An error occurred in MPI_comm_size
 [debian:23526] *** on communicator MPI_COMM_WORLD
 [debian:23526] *** MPI_ERR_COMM: invalid communicator
 [debian:23526] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

 But this version also work fine with mpi using the 4 cores on a simple
 simulation

 Thanks
 Bye

 Felipe Villanelo Lizana
 Bioquímico
 Laboratorio de Biología Estructural y Molecular
 Universidad de Chile



[gmx-users] REMD: mdrun_mpi crash with segmentation fault (but mpi is working)

2015-02-03 Thread Felipe Villanelo
Hi,

I am trying to learn REMD following the tutorial on the gromacs page
http://www.gromacs.org/Documentation/Tutorials/GROMACS_USA_Workshop_and_Conference_2013/An_introduction_to_replica_exchange_simulations%3A_Mark_Abraham,_Session_1B
on a 4-core computer.
However when I try to use the command:
mpirun -np 4 mdrun_mpi -v -multidir equil[0123] (as the tutorial says)
the program crashed with the following error:
mpirun noticed that process rank 2 with PID 13013 on node debian exited on
signal 11 (Segmentation fault).

The mpi is running fine with the 4 cores if I run a simple gromacs
simulation (NPT equil) on the same machine, so I think it is not a problem
of mpi configuration (as I read in another thread).

This is with gromacs version 5.0.2.

If I try to run the same with an older version of gromacs (4.5.5), the error
is different (after adjusting the options in the mdp file to match changes
in syntax between versions):

[debian:23526] *** An error occurred in MPI_comm_size
[debian:23526] *** on communicator MPI_COMM_WORLD
[debian:23526] *** MPI_ERR_COMM: invalid communicator
[debian:23526] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

But this version also works fine with mpi using the 4 cores on a simple
simulation.

Thanks
Bye

Felipe Villanelo Lizana
Bioquímico
Laboratorio de Biología Estructural y Molecular
Universidad de Chile


Re: [gmx-users] REMD tutorial

2014-08-21 Thread Mark Abraham
On Thu, Aug 21, 2014 at 8:01 AM, shahab shariati shahab.shari...@gmail.com
wrote:

 Dear Mark

 Before, in following address you said: Google knows about two GROMACS REMD
 tutorials.


 https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2014-January/086563.html

 Unfortunately, I could not find tutorials you mentioned.


You can find them here
https://www.google.se/search?q=gromacs+remd+tutorials&oq=gromacs+remd+tutorials



 

 Also, in following address you said: I've added a section on
 replica-exchange to
 http://wiki.gromacs.org/index.php/Steps_to_Perform_a_Simulation


 https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2007-December/031188.html
 .

 Is this link active, now? I have no access to this link.


The webpage has been changed since then, see link from
http://www.gromacs.org/Documentation/How-tos/Steps_to_Perform_a_Simulation

Mark



 -

 I want to know Is there a tutorial for REMD like what is in
 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/.

 Any help will highly appreciated.



Re: [gmx-users] REMD: A small bug in repl_ex.c and a related question

2014-06-27 Thread Mark Abraham
On Fri, Jun 27, 2014 at 5:27 AM, Suman Chakrabarty 
chakrabarty.su...@gmail.com wrote:

 Hello!

 1. It seems I have encountered a minor bug (?) in repl_ex.c for
 version 4.6.x (for NPT simulations only):

 Line 880 (in version 4.6.5):
 fprintf(fplog, "  dpV = %10.3e  d = %10.3e\nb", dpV, delta + dpV);

 should be:
 fprintf(fplog, "  dpV = %10.3e  d = %10.3e\n", dpV, delta + dpV);

 The extra "b" results in lines like this in the log file:
 bRepl ex  0    1    2    3    4    5    6    7    8    9   10   11
 12 x 13   14 x 15   16 x 17   18   19   20 x 21

 While the REMD itself runs fine, the lines containing bRepl ex are
 never parsed by the demux.pl script. So, only half of the exchanges
 are analyzed by the script.


Yes. This will be fixed in 5.0. You can safely hack out the excess "b" in
the source if you want a fix in 4.6.x.
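To see why the stray "b" breaks the analysis: demux.pl picks out the
exchange-summary lines by their "Repl ex" prefix, so a mangled prefix simply
never matches. A minimal Python re-creation of that filtering (the pattern is
an assumption for illustration, not the actual demux.pl regex):

```python
import re

# Hypothetical stand-in for the anchor demux.pl uses to select
# exchange-summary lines; the real script's pattern may differ.
EXCHANGE_LINE = re.compile(r"^Repl ex\b")

log_lines = [
    "Repl ex  0    1    2 x  3",   # parsed: prefix matches at line start
    "bRepl ex  0 x  1    2    3",  # skipped: stray 'b' defeats the anchor
]

parsed = [ln for ln in log_lines if EXCHANGE_LINE.match(ln)]
# Only the un-mangled line survives, which is why half the exchange
# attempts disappear from the demuxed output.
```

Removing the "b" from the format string (or from the log lines after the
fact) restores the match.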



 2. The related question: In the log file, the dpV corrections are
 reported only at every alternate exchange attempt: (I have kept only a
 relevant trimmed part below)

 Replica exchange at step 2000 time 4
 Repl ex  0    1    2    3    4    5 x  6    7 x  8    9 x 10

 Replica exchange at step 3000 time 6
 Repl 0 <-> 1  dE_term =  6.670e-01 (kT)
   dpV = -3.918e-05  d =  6.669e-01
 bRepl ex  0 x  1    2    3    4    5    6    7    8 x  9   10

 Why is this necessary? I hope it doesn't mean the correction is being
 applied every alternate attempt (I have not checked this part of the
 code yet).


Such output is only done from the lower-numbered replica, and the sets of
replicas that exchange alternate between subsequent exchange attempts. You
can only see the full situation by considering all the .log files.
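A minimal sketch of what "considering all the .log files" can look like in
practice: group each replica's "Repl ..." output by the exchange step it
belongs to, then merge the per-replica results. This is Python for
illustration only (line formats are assumed from the excerpts above; the
actual demux.pl works differently):

```python
import re

# Assumed from the log excerpts above: each attempt is announced by a
# "Replica exchange at step N" line, followed by "Repl ..." detail lines.
STEP_RE = re.compile(r"Replica exchange at step (\d+)")

def collect_exchange_lines(log_text):
    """Group one replica log's 'Repl ...' lines by exchange step."""
    records = {}          # step -> list of "Repl ..." lines
    current_step = None
    for line in log_text.splitlines():
        m = STEP_RE.search(line)
        if m:
            current_step = int(m.group(1))
        elif current_step is not None and line.startswith("Repl"):
            records.setdefault(current_step, []).append(line)
    return records

sample = """Replica exchange at step 2000 time 4
Repl ex  0    1 x  2
Replica exchange at step 3000 time 6
Repl 0 <-> 1  dE_term =  6.670e-01 (kT)
"""
by_step = collect_exchange_lines(sample)
# by_step[2000] holds a summary line; by_step[3000] holds the dE detail
# that only the lower-numbered replica of that pair printed.
```

Running this over every replica's .log and merging the dictionaries shows
the alternating exchange pattern in full, rather than the half view any
single log gives.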


 Is it safe to use NPT REMD with the current version of the code? If I
 just apply the above correction, and use demux.pl to analyze ALL
 exchange attempts, is it still valid? Please confirm.


I expect so. Note that all simulation users should be doing their own
sanity checks, such as confirming that your settings produce statistically
indistinguishable results on (say) some water boxes with and without
-replex on. Nobody else has tested your exact system ;-) See also
http://dx.doi.org/10.1021/ct300688p.

Mark


 Thanks,
 Suman.


