Re: [gmx-users] same initial velocities vs. -reprod

2019-03-22 Thread Mala L Radhakrishnan
Thanks.

It's interesting how such small differences can compound into totally
different conformations within a few ps (though the energies stay close,
which one would expect once the systems are equilibrated).

M


-- 
Mala L. Radhakrishnan
Whitehead Associate Professor of Critical Thought
Associate Professor of Chemistry
Wellesley College
106 Central Street
Wellesley, MA 02481
(781)283-2981

Re: [gmx-users] same initial velocities vs. -reprod

2019-03-22 Thread Benson Muite

Hi,

Yes, this is a finite-precision issue: the order of floating-point
operations may not be preserved by the compiler or the parallel runtime.
In most cases the results should be close, but much analytical work
remains in this area. A relevant paper (though there are many others) is:

Collange, Defour, Graillat and Iakymchuk,
Numerical Reproducibility for the Parallel Reduction on Multi- and
Many-Core Architectures

https://hal.archives-ouvertes.fr/hal-00949355v4/document

It is unclear whether it would be too expensive to implement such methods
in GROMACS, though.

Benson
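
To make the reduction-order point concrete, here is a minimal Python
sketch (not GROMACS code; the values and sizes are arbitrary) showing
that summing the same numbers in a different order, as different
parallel decompositions effectively do, gives a slightly different
result:

import random

random.seed(0)
# values spanning several orders of magnitude, so rounding is visible
values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(0, 12)
          for _ in range(10_000)]

serial = sum(values)            # one fixed summation order
shuffled = values[:]
random.shuffle(shuffled)        # same numbers, different order
reordered = sum(shuffled)

print(serial == reordered)      # typically False
print(abs(serial - reordered))  # small but nonzero difference

Each such difference is tiny, but MD is chaotic, so trajectories that
differ in the last bit diverge visibly within picoseconds.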


Re: [gmx-users] same initial velocities vs. -reprod

2019-03-22 Thread Mala L Radhakrishnan
Hi Mark,

Thanks so much -- good to know that it's basically equivalent to different
starting velocities and I should expect them to be different.

I found this page that sort of explains it:
http://www.gromacs.org/Documentation/Terminology/Reproducibility

Out of curiosity, can someone point me to something that explains why
(a+b)+c != a+(b+c) can hold for computations spread across multiple
processors? Is it a finite precision issue?

thanks again,
M

-- 
Mala L. Radhakrishnan
Whitehead Associate Professor of Critical Thought
Associate Professor of Chemistry
Wellesley College
106 Central Street
Wellesley, MA 02481
(781)283-2981
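
A two-line Python illustration of the non-associativity asked about
above (values chosen only to make the rounding visible):

a, b, c = 0.1, 1e16, -1e16
print((a + b) + c)  # 0.0 -- the 0.1 is rounded away next to 1e16
print(a + (b + c))  # 0.1

Each individual addition is correctly rounded, but the rounding depends
on the grouping, so the order in which processors combine partial sums
changes the final bits.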


[gmx-users] Question about gmx order ...

2019-03-22 Thread Sergio Garay
Hi all

I have a micelle trajectory formed by 30 molecules, and I would like to
obtain the order parameters of the acyl chain moieties. I've tried using
the option -radial, but I'm not sure about the result. What director will
be used in the calculation? It will be a vector from the center of mass
to... what? The tool does not ask for any terminal atom with which to
define the end of the director. Probably I'm wrong, but I'm not sure how
to continue.
Any help would be very much appreciated!

Sergio


Re: [gmx-users] Different models of TMAO

2019-03-22 Thread Erik Marklund
Dear Dilip,

Perhaps you are aware of these publications 
http://dx.doi.org/10.1021/jacs.7b11695, 
https://doi.org/10.1103/PhysRevLett.119.108102.

Unless someone provides parameter files for you, you will need to learn how
the GROMACS topology and force field files work. That is a good thing to
know long-term anyway if you plan on doing MD in the future.

Kind regards,
Erik
__
Erik Marklund, PhD, Associate Professor of Biochemistry
Associate Senior Lecturer in Computational Biochemistry
Department of Chemistry – BMC, Uppsala University
+46 (0)18 471 4562
erik.markl...@kemi.uu.se
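
For orientation, a hedged sketch of where the LJ parameters asked about
below would live in a GROMACS topology; every name and number here is a
hypothetical placeholder, not a real TMAO parameter:

[ atomtypes ]
; name  at.num  mass     charge  ptype  sigma (nm)  epsilon (kJ/mol)
  NTM   7       14.007   0.000   A      0.330       0.8368

[ moleculetype ]
; name  nrexcl
  TMAO  3

[ atoms ]
;  nr  type  resnr  residue  atom  cgnr  charge  mass
   1   NTM   1      TMAO     N1    1     0.000   14.007

The published models differ precisely in these charges and LJ values, so
the actual numbers must come from the papers above (or their supporting
information).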

On 20 Mar 2019, at 16:24, Dilip.H.N <cy16f01.di...@nitk.edu.in> wrote:

Hello all,
I have run a simulation with TMAO using the CHARMM36 FF, and I want to see
whether different models of TMAO, i.e., the Kast, Netz, and Garcia models,
solvate/behave in the same way.
So, can anybody share the relevant parameter files (.rtp, .itp, etc.) for
the three TMAO models mentioned above that are required to run the
simulations...
How can I include the epsilon and LJ parameters in them...?

Any suggestions are appreciated.
Thank you.
---
With Best Regards,

Dilip.H.N
Ph.D. Student.


Re: [gmx-users] same initial velocities vs. -reprod

2019-03-22 Thread Mark Abraham
Hi,

Dynamic load balancing, which is on by default with domain decomposition,
means divergence happens by default. After a few ps it's logically
equivalent to starting from different velocities. See the GROMACS user
guide for more details on reproducibility questions!

Mark
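
For anyone who wants to test this, a rough sketch of mdrun invocations
that reduce run-to-run divergence (the flags are real, the values are
illustrative; -reprod avoids non-reproducible optimizations such as
dynamic load balancing):

gmx mdrun -deffnm md -reprod
gmx mdrun -deffnm md -dlb no -ntmpi 1 -ntomp 8

Both trade performance for reproducibility, and neither helps across
different hardware or binaries.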



[gmx-users] same initial velocities vs. -reprod

2019-03-22 Thread Mala L Radhakrishnan
Hi all,

We set up replicate simulations (same starting mdp files and structures)
that ran across GPUs WITHOUT using the -reprod flag, but we set gen-vel to
no and used the same ld-seed value for both with the v-rescale thermostat.
They also ran on the same machine -- so from a deterministic point of view,
I would expect them to be "exactly" the same.

The simulations, while having similar average energetics throughout, sample
different conformations and they start to differ pretty much right after
the simulation starts.

I understand that I could have gotten results to be more reproducible by
using the -reprod flag, but in the case I describe (and I don't think I
have any other stochastic things going on, unless I'm not understanding
ld-seed or gen-vel = no, or am forgetting something?), what is causing the
difference? Online, I see something about domain decomposition and
optimization, but I'd like to understand that better.

My major question, though, is -- are the differences due to domain
decomposition optimization enough to basically equal what you might get
from "replicates" starting with different starting velocities, especially
once equilibration (as measured by RMSD) is reached? That's what I'm
seeing, so I wanted to make sure that these differences can actually be this big.
Or is there some other source of stochasticity I'm forgetting?

Thanks so much, and I hope my question makes sense.

M

-- 
Mala L. Radhakrishnan
Whitehead Associate Professor of Critical Thought
Associate Professor of Chemistry
Wellesley College
106 Central Street
Wellesley, MA 02481
(781)283-2981
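
For reference, a minimal sketch of the .mdp settings described above
(placeholder values, not a recommendation):

integrator = md
dt         = 0.002
tcoupl     = V-rescale
tc-grps    = System
tau-t      = 0.1
ref-t      = 300
gen-vel    = no      ; take velocities from the input configuration
ld-seed    = 1993    ; fixed seed for the stochastic thermostat

Even with gen-vel = no and a fixed ld-seed, two runs diverge unless mdrun
is also forced onto a reproducible code path (see the -reprod discussion
above).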


[gmx-users] System Blowing up when more than one MPI thread is used

2019-03-22 Thread Mayank Vats
Hi,
I am trying to do a simple simulation in GROMACS 2018.6 of a protein in
TIP3P water with the amber99 force field. It's a 14 nm cubic box with two
protein chains and 87651 water molecules, 269525 atoms total. I am able to
energy minimize to Fmax < 500, and then perform NVT and NPT equilibration
for 100 ps each at 300 K and 1 bar. I am facing issues in the production
run (dt = 2 fs, for 5 ns).
A little about the hardware I'm using: my local workstation has 8 CPU
cores and 1 GPU. I'm also connecting to a Power9 system, where I have
access to 1 node with 160 CPU cores and 4 GPUs. I have built the GROMACS
version for the P9 system with GMX_OPENMP_MAX_THREADS=192, so it can run
using more than the default 64 threads.
I do not have a clear understanding of the way MPI, ranks, etc. work,
which is why I'm here.
Now the issue, with information from the log files:
On my *local workstation*, I run *gmx mdrun* and it uses 1 MPI thread, 8
OpenMP threads, and the one GPU available is assigned two GPU tasks:
Using 1 MPI thread
Using 8 OpenMP threads

1 GPU auto-selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
  PP:0,PME:0

This runs and completes successfully.
On the *P9 system*, where I have one node with the configuration mentioned
above, I have run *gmx_mpi mdrun*, which uses 1 MPI thread, 160 OpenMP
threads, and 1 GPU as follows:

Using 1 MPI thread
Using 160 OpenMP threads

1 GPU auto-selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
  PP:0,PME:0

This also completes successfully.
However, I wanted to be able to use all the available GPUs for the
simulation on the *P9 system*, and tried *gmx mdrun*, which gave this in
the log file:

Using 32 MPI threads
Using 5 OpenMP threads per tMPI thread

On host bgrs02 4 GPUs auto-selected for this run.
Mapping of GPU IDs to the 32 GPU tasks in the 32 ranks on this node:

PP:0,PP:0,PP:0,PP:0,PP:0,PP:0,PP:0,PP:0,PP:1,PP:1,PP:1,PP:1,PP:1,PP:1,PP:1,PP:1,PP:2,PP:2,PP:2,PP:2,PP:2,PP:2,PP:2,PP:2,PP:3,PP:3,PP:3,PP:3,PP:3,PP:3,PP:3,PP:3

This gives me warnings saying:
"One or more water molecules cannot be settled.
Check for bad contacts and/or reduce the timestep if appropriate."
"LINCS warnings.."
and finally exits with this:

Program: gmx mdrun, version 2018.6
Source file: src/gromacs/ewald/pme-redistribute.cpp (line 282)
MPI rank: 12 (out of 32)

Fatal error:
891 particles communicated to PME rank 12 are more than 2/3 times the
cut-off
out of the domain decomposition cell of their charge group in dimension x.
This usually means that your system is not well equilibrated.

I looked up the error in the documentation as suggested, and it says that
this is an indication of the system blowing up. I don't understand how the
same input configuration completes production successfully when using one
MPI thread, but fails and gives a 'blowing up' message when using more MPI
threads.
Are there arguments that I should be using? Should I build the P9 GROMACS
in a different way? Or is it an artifact of my system itself, and if so,
any suggestions on what to change?
I can provide more information if needed. Any help will be appreciated.
Thanks,
Mayank
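
As a debugging comparison, a hedged sketch of pinning the layout
explicitly instead of letting mdrun auto-pick 32 ranks (the counts are
only illustrative for a 160-core, 4-GPU node):

gmx mdrun -deffnm md -ntmpi 4 -ntomp 40 -gpu_id 0123

Fewer, larger domains give bigger domain-decomposition cells, which is
one way to probe whether the error above is tied to the decomposition
rather than to the physics of the system.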


[gmx-users] nonstandard hydrogen names

2019-03-22 Thread MD
Hi Gromacs folks,
I have a quick question on nonstandard H names. After the simulation, the
hydrogens all carry the names adopted from the force field I used
(CHARMM), e.g. HG1 instead of H, which gives me problems displaying the
correct charge surface because the Chimera build I use doesn't recognize
HG1 as hydrogen. I wonder how you fix this type of issue without further
editing the pdb?
Best,
MD
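
If editing a copy of the file is acceptable: Chimera falls back to
guessing the element from the atom name when the element columns (77-78
of a PDB ATOM record) are empty, so filling those columns usually fixes
the display. A rough Python sketch with a deliberately crude heuristic
(assumes fixed-width records; file names are placeholders):

def add_element(line):
    # fill PDB element columns 77-78 from the atom name (columns 13-16)
    if not line.startswith(("ATOM", "HETATM")):
        return line
    name = line[12:16].strip()
    # names like HG1 or 1HB start with (a digit and) H => hydrogen
    element = "H" if name.lstrip("0123456789").startswith("H") else name[0]
    line = line.rstrip("\n").ljust(78)
    return line[:76] + element.rjust(2) + "\n"

with open("protein.pdb") as src, open("protein_elem.pdb", "w") as dst:
    for record in src:
        dst.write(add_element(record))

The one-letter fallback mislabels two-letter elements (e.g. real mercury
HG, or FE), so check any hetero groups by hand.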


[gmx-users] cutoff for tabulated bonded interaction functions?

2019-03-22 Thread Voronin, Arthur (SCC)
Dear Gromacs users,


I'm using tabulated bonded interaction functions (table_b0.xvg) for some
simulations, where the potential has the shape of a sigmoid function.

Since the domain decomposition depends on the largest interaction, my
simulation can't be run in parallel via MPI in some specific cases, where
the distance spans the whole simulation box.

Is there a way to set up a cutoff for table_b0.xvg? If the distances in the
simulation are beyond the r values of the table, mdrun will quit with an
error. Since the potential in the table is a sigmoid, I wouldn't mind
having a cutoff at some point.


Best regards,

Arthur Voronin
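
While waiting for a better answer, one workaround sketched in Python:
since the sigmoid plateaus, the table itself can be extended out to the
largest distance that can occur, with a constant potential and zero
force. This assumes the three-column bonded-table layout (x, f(x),
-f'(x)); please check the table format section of the reference manual
for your version before relying on it:

# extend table_b0.xvg with a flat tail (constant V, zero force)
rows = []
with open("table_b0.xvg") as f:
    for line in f:
        if not line.startswith(("#", "@")):
            rows.append([float(v) for v in line.split()])

dx = rows[-1][0] - rows[-2][0]   # keep the original spacing
x, v_last = rows[-1][0], rows[-1][1]
r_max = 15.0                     # placeholder: longest distance you expect (nm)

with open("table_b0_extended.xvg", "w") as out:
    for row in rows:
        out.write("%.6f %.10e %.10e\n" % tuple(row[:3]))
    while x < r_max:
        x += dx
        out.write("%.6f %.10e %.10e\n" % (x, v_last, 0.0))

The potential stays continuous at the junction and the zero third column
means no force beyond the old table end; it does not, however, change
how the domain decomposition treats the (still long) interaction.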


Re: [gmx-users] dssp error

2019-03-22 Thread Soham Sarkar
Sometimes this happens if there is a residue that is unknown to DSSP; that
can produce this kind of error. I faced it once.
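
For reference, the fixes suggested downthread, collected as a sketch (the
dssp path is a placeholder for wherever the binary actually lives):

export DSSP=/usr/local/bin/dssp    # tell do_dssp where the dssp executable is
gmx do_dssp -f prot_mdnopbc.xtc -s prot_md.tpr -o prot_ssp.xpm -sc protssp.xvg -ver 2

The -ver 2 flag matches the dssp-2.x interface; the unknown-residue
problem mentioned above usually needs the offending residues excluded
from the selected group instead.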

On Fri, Mar 22, 2019 at 1:10 AM Mario Andres Rodriguez Pineda
<mand...@iq.usp.br> wrote:

> I downloaded this from the DSSP FTP page: dssp-2.0.4-linux-amd64; this
> is an executable.
>
> On Thu, 21 Mar 2019 at 16:36, Qinghua Liao wrote:
>
> > Check the installation of DSSP:
> > did you set the DSSP environment variable for do_dssp?
> >
> >
> > All the best,
> > Qinghua
> >
> >
> > On 3/21/19 8:01 PM, Mario Andres Rodriguez Pineda wrote:
> > > gmx do_dssp -f cbd211_mdnopbc.xtc -s cbd211_md.tpr -o cbd211_ssp.xpm -tu ns -sc cbd211ssp.xvg -ver 2
> > >
> > > Program: gmx do_dssp, version 2016.3
> > > Source file: src/gromacs/gmxana/gmx_do_dssp.cpp (line 668)
> > >
> > > Fatal error:
> > > Failed to execute command: Try specifying your dssp version with the -ver option.
> > >
> > >
> > > On Thu, 21 Mar 2019 at 15:53, Qinghua Liao <scorpio.l...@gmail.com>
> > > wrote:
> > >
> > >> Hello,
> > >>
> > >> Just follow the suggestion by adding "-ver 2" to your command.
> > >>
> > >>
> > >> All the best,
> > >> Qinghua
> > >>
> > >> On 3/21/19 7:50 PM, Mario Andres Rodriguez Pineda wrote:
> > >>> Good afternoon.
> > >>> I'm using GROMACS 2016 to run a dynamics simulation. I installed DSSP
> > >>> 2.0.4 for secondary structure analysis. When I try to run it I use
> > >>> this command:
> > >>> gmx do_dssp -f prot_mdnopbc.xtc -s prot_md.tpr -o prot_ssp.xpm -tu ns -sc protssp.xvg
> > >>>
> > >>> but GROMACS sends me this error:
> > >>> Program: gmx do_dssp, version 2016.3
> > >>> Source file: src/gromacs/gmxana/gmx_do_dssp.cpp (line 668)
> > >>>
> > >>> Fatal error:
> > >>> Failed to execute command: Try specifying your dssp version with
> > >>> the -ver option.
> > >>>
> > >>> Can you help me to fix this error?
> > >>> Thanks for your help
> > >>>
>
>
>
> --
> *MSc. MARIO ANDRÉS RODRÍGUEZ PINEDA*
> *PhD Student in Biotechnology*
>
> *UNAL- MEDELLÍN/ IQ- USP*
>
> *Biomolecular NMR Research Group*
> *Av. Prof. Lineu Prestes 748, Sao Paulo SP, 05508-000, Tel: +55 11 3091 1475*