Re: [gmx-users] question on system blow-up

2019-09-20 Thread Lei Qian
Hi John,

Thank you very much for your reply.

After checking my log file, it seems my metal ion cannot form a stable
coordination bond in the simulation. The log reports:

the first 10 missing interactions, except for exclusions:
   LJ-14 atoms 1180 1572   global  1180  1572

Atom 1572 is my metal ion, and atom 1180 is an atom close to it in space.

Although the message announces "the first 10 missing interactions", the file
lists only this single one (LJ-14 atoms 1180 1572), which seems strange.
When I checked the trr file, the metal ion had shifted away from its normal
position.
I will try to redo the MCPB process.
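To confirm this, I will also monitor the metal-ligand distance along the
trajectory. A minimal sketch (assuming 1180 and 1572 are the 1-based global
atom numbers printed in the log, and that my run files are md.tpr and md.trr):

gmx distance -s md.tpr -f md.trr -select 'atomnr 1180 1572' -oav metal_dist.xvg

If the distance in metal_dist.xvg grows steadily instead of just fluctuating,
the coordination site is really falling apart.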


On Fri, Sep 20, 2019 at 6:22 AM John Whittaker <
johnwhitt...@zedat.fu-berlin.de> wrote:

> Hi,
>
> > Dear gmx-users,
> >
> > Could I ask a question on system blow-up? Thanks!
> > I finished em, nvt, and npt. But when I ran production(npt) 10ns, I found
> > system blow up at around 1ns. ("atoms involved moved further apart than
> > the
> > multi-body cut-off distance")
> >
> > When I checked the trajectory files: nvt.trr, npt.trr, I found the
> > distorted water molecules (very very long H-O bond), although protein
> > looks
> > OK in these traj files. I guess perhaps those distorted water molecules
> > are
> > blow-up and they can gradually affect protein?
>
> It doesn't matter which molecule it is; if a molecule is blowing up, it
> will cause the simulation to fail. After visualizing your system, are you
> able to see what causes the elongated H-O bond? Are molecules overlapping?
> Are there other warnings (like LINCS warnings) in your output? You need to
> provide some more information about the system you are simulating or no
> one will really know how to help.
>
> >
> > I tried to change production npt setting to nvt (pcoupl = no), and change
> > integrator md to sd, change tau_t: 2.0 2.0. However, the following error
> > always show up:
> > "1 of the 20067 bonded interactions could not be calculated because some
> > atoms involved moved further apart than the multi-body cut-off distance
> > (1.18213 nm) or the two-body cut-off distance (1.241 nm)"
>
> These settings are likely not the cause of your problem and they probably
> wouldn't change much about the simulation consistently blowing up. When
> you make these changes, are the same molecules distorted?
>
> Best,
>
> John
>
> >
> > Thank you!


Re: [gmx-users] RAM usage of gmx msd

2019-09-20 Thread Martin

Hi,

thanks a lot for your answer. It contained exactly the information that 
I needed.

Here is a summary of my issue and how I resolved it:

Problem:

I wanted to calculate the lateral diffusion of lipids in a cell membrane,
using one key atom per lipid, but gmx msd needed too much RAM.


Solution:

I first created a smaller trajectory file using:
gmx trjconv -f mdrun.xtc -n key_atoms.ndx -o key_atoms.xtc
Then I created a matching reduced tpr file:
gmx convert-tpr -s mdrun.tpr -n key.ndx -o key_atoms.tpr
The trajectory and run-input file are now limited to only the atoms I
actually need, but the original index file no longer matches the new atom
numbering. So I created a new one using:
gmx make_ndx -f key_atoms.tpr -o key_atoms.ndx

Once that was done I was able to calculate gmx msd with barely any RAM 
usage at all.
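Putting the whole workflow together as one sketch (I am assuming the same
index file, key_atoms.ndx, is used throughout, and the final gmx msd call is
just the original one pointed at the reduced files):

# extract only the key atoms from the full trajectory
gmx trjconv -f mdrun.xtc -n key_atoms.ndx -o key_atoms.xtc
# write a matching reduced run-input file
gmx convert-tpr -s mdrun.tpr -n key_atoms.ndx -o key_atoms.tpr
# regenerate the index against the reduced tpr (atom numbers change!)
gmx make_ndx -f key_atoms.tpr -o key_atoms.ndx
# lateral MSD in the membrane plane
gmx msd -s key_atoms.tpr -f key_atoms.xtc -n key_atoms.ndx -lateral z -o lateral_diffusion.xvg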


Thanks again for all your support.
Best regards
Martin Kern

On 18.09.19 at 11:52, Mark Abraham wrote:

Hi,

It's likely that using the same index file is the problem. The numbers it
contains are interpreted relative to the tpr file, so if you make a subset
of the tpr file, then it's on you to understand whether the necessary
indices have changed or not.
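As a toy illustration (hypothetical atom numbers and file names): if the key
atoms were global atoms 812, 1640 and 2468 in mdrun.tpr, then in the reduced
tpr written by convert-tpr they are simply atoms 1, 2 and 3, so an index file
that still lists 812/1640/2468 points at atoms that no longer exist there.
Regenerating the index against the reduced tpr keeps things consistent:

gmx make_ndx -f mdrun_key_atoms.tpr -o key_atoms_reduced.ndx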

Mark

On Tue, 17 Sep 2019 at 14:58, Martin Kern wrote:


Hi John,

thanks for your answer. The mdrun_key_atoms.xtc file is only around 500 MB.
The issue here wasn't RAM usage, though, but a segmentation fault: gmx msd
requires a tpr file, and I suspect my tpr and xtc files didn't match.
Skipping frames might work without the need for a modified tpr file; I'll
try that tomorrow.

On 17.09.2019 at 12:09, John Whittaker wrote:

Hi Martin,



Hello everyone.

I simulated a cell membrane and would like to calculate lateral
diffusion of lipids. I tried this using the gmx msd command.
Unfortunately this uses enormous amounts of RAM. The process runs
without error until it is killed by the operating system. No output file
is created at that time.

The membrane contains around 400 lipids and I simulated for 1100ns which
is 22 frames. The total size of the xtc file is around 150 GB. I use
GROMACS 2016.4 with GPU support. The command I used was:
gmx msd -s mdrun.tpr -f mdrun.xtc -n key_atoms.ndx -lateral z -o
lateraal_diffusion.xvg

I found an old email that also mentions the high RAM usage of gmx msd
but it didn't get a reply.


https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2018-January/118014.html

I also tried reducing the RAM usage by creating a trajectory that only
includes the key atoms. This attempt resulted in a segmentation fault.
Here is what I tried:
gmx trjconv -f mdrun.xtc -n key_atoms.ndx -o mdrun_key_atoms.xtc
gmx convert-tpr -f mdrun.tpr -n key_atoms.ndx -o mdrun_key_atoms.tpr
gmx msd -s mdrun_key_atoms.tpr -f mdrun_key_atoms.xtc -n key_atoms.ndx
-lateral z -o lateraal_diffusion.xvg

How big is the "mdrun_key_atoms.xtc" trajectory? It's possible that this
file is still too large for the amount of RAM available on your machine.

Are you able to break this trajectory down into smaller, more manageable
chunks? You could also use the -skip or -dt options of gmx trjconv to write
out only every nth frame.
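A minimal sketch of that (the values 10 and 100 ps and the output names are
just examples):

# keep only every 10th frame
gmx trjconv -f mdrun.xtc -o mdrun_skip10.xtc -skip 10
# or keep one frame every 100 ps
gmx trjconv -f mdrun.xtc -o mdrun_dt100.xtc -dt 100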

Hope this helps,

John



I'm grateful for any suggestion.

Best regards
Martin Kern



Re: [gmx-users] SIMD options - detection program issue

2019-09-20 Thread Szilárd Páll
Hi,

Good to know your system instability issues were resolved.

(As a side note, you could have tried elrepo, which has newer kernels for
CentOS.)

The SIMD detection should, however, not be failing; could you please file an
issue on redmine.gromacs.org with your cmake invocation, CMakeCache.txt and
CMake logs attached? Also, please make sure you are using the latest patch
release, 2019.3.

Cheers,
--
Szilárd


On Wed, Sep 18, 2019 at 9:17 AM Stefano Guglielmo <
stefano.guglie...@unito.it> wrote:

> Hi all,
> an update, hopefully the last one; I have been annoying you for too long.
> I decided to replace CentOS with Mint 19.2 and compiled GROMACS 2019.2.
> Setting -DGMX_SIMD=AUTO again resulted in the same compilation error of the
> detection program (Did not detect build CPU vendor - detection program did
> not compile - Detection for best SIMD instructions failed, using SIMD - None
> -- SIMD instructions disabled). So I set -DGMX_SIMD manually to avx2_128 or
> avx2_256. The compilation worked fine and I tried to run the two simulations
> in parallel on the two GPUs
> (gmx mdrun -deffnm run -nb gpu -pme gpu -ntomp 28 -ntmpi 1 -npme 0
> -gputasks 00 -pin on -pinoffset 0 -pinstride 1
> plus
> gmx mdrun -deffnm run2 -nb gpu -pme gpu -ntomp 28 -ntmpi 1 -npme 0
> -gputasks 11 -pin on -pinoffset 28 -pinstride 1)
> and in both cases the system proved stable, without any crash. Maybe the old
> CentOS kernel (3.10) was not that good at managing the CPU (I also found
> some posts on the web about issues with the Threadripper 2990WX and CentOS).
> Still, I cannot find an explanation for the compilation error of the
> detection program.
> Anyway, thanks to all of you for sharing suggestions and opinions,
> Stefano
>
>
> --
> Stefano GUGLIELMO PhD
> Assistant Professor of Medicinal Chemistry
> Department of Drug Science and Technology
> Via P. Giuria 9
> 10125 Turin, ITALY
> ph. +39 (0)11 6707178
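For reference, the manual SIMD choice described above corresponds to a
configure step along these lines (a sketch only; the GPU flag and install
prefix are assumptions about a typical CUDA-enabled 2019.x build):

cmake .. -DGMX_SIMD=AVX2_256 -DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=/opt/gromacs-2019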

[gmx-users] Free webinar on accelerating sampling in GROMACS with the AWH method

2019-09-20 Thread Paul bauer

Hi,

Since version 2018, GROMACS has supported the Accelerated Weight Histogram
(AWH) method to accelerate sampling along reaction coordinates. With AWH,
conformational transitions can be accelerated by orders of magnitude.


On Tuesday, September 24, starting at 15:00, we are organizing a free webinar
on the AWH method, presented by its main developer, Berk Hess.
At the end of the webinar we'll have a Q&A session during which you can ask
Berk questions directly.


More information about the webinar and how to join is here:
https://bioexcel.eu/webinar-accelerating-sampling-in-gromacs-with-the-awh-method-2019-09-24/

Cheers

Paul

--
Paul Bauer, PhD
GROMACS Release Manager
KTH Stockholm, SciLifeLab
0046737308594



Re: [gmx-users] question on system blow-up

2019-09-20 Thread John Whittaker
Hi,

> Dear gmx-users,
>
> Could I ask a question on system blow-up? Thanks!
> I finished em, nvt, and npt. But when I ran production(npt) 10ns, I found
> system blow up at around 1ns. ("atoms involved moved further apart than
> the
> multi-body cut-off distance")
>
> When I checked the trajectory files: nvt.trr, npt.trr, I found the
> distorted water molecules (very very long H-O bond), although protein
> looks
> OK in these traj files. I guess perhaps those distorted water molecules
> are
> blow-up and they can gradually affect protein?

It doesn't matter which molecule it is; if a molecule is blowing up, it
will cause the simulation to fail. After visualizing your system, are you
able to see what causes the elongated H-O bond? Are molecules overlapping?
Are there other warnings (like LINCS warnings) in your output? You need to
provide some more information about the system you are simulating or no
one will really know how to help.
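A quick way to check for those (a sketch; I am assuming the log file is
called md.log):

grep -i -B1 -A3 "lincs warning" md.log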

>
> I tried to change production npt setting to nvt (pcoupl = no), and change
> integrator md to sd, change tau_t: 2.0 2.0. However, the following error
> always show up:
> "1 of the 20067 bonded interactions could not be calculated because some
> atoms involved moved further apart than the multi-body cut-off distance
> (1.18213 nm) or the two-body cut-off distance (1.241 nm)"

These settings are likely not the cause of your problem, and changing them
probably won't do much about the simulation consistently blowing up. When you
make these changes, are the same molecules distorted?

Best,

John

>
> Thank you!


Re: [gmx-users] Tesla GPUs: P40 or P100?

2019-09-20 Thread Matteo Tiberti
Hi,

thanks for your suggestion - we hadn't considered those Quadros, but they
do sound like a great option.
I'll be asking for availability and pricing.

Cheers

Matteo

On Thu, 19 Sep 2019 at 20:39, Szilárd Páll <pall.szil...@gmail.com> wrote:

> Hi,
>
> I strongly recommend the Quadro RTX series,  6000 or 5000. These should not
> be a lot more expensive, but will be a lot faster than the Pascal
> generation cards. For comparisons see our recent paper:
> https://doi.org/10.1002/jcc.26011
>
> Cheers,
> --
> Szilárd
>
> > On Thu, Sep 19, 2019, 09:50 Matteo Tiberti wrote:
>
> > Hi all,
> >
> > we are considering getting a new server, mainly for GROMACS and for other
> > CPU-intensive tasks.
> >
> > Unfortunately we are unable to buy consumer GPUs and need to get Teslas
> > to accelerate GROMACS, and possibly other MD workloads in the future.
> > Both the P40 and the P100 fit our budget, and I'd be inclined towards the
> > P40 for its slightly better single-precision performance and larger
> > memory. The P40, however, has lower memory bandwidth and much lower
> > double-precision performance than the P100 (it is more similar to a
> > consumer GPU in this respect), which shouldn't matter as far as GROMACS is
> > concerned right now. I've seen some talk on the mailing list about
> > implementing mixed/fixed precision modes in GROMACS, and from what I
> > gathered it's unlikely to happen anytime soon, so I believe the P40 to be
> > a future-proof choice (at least in the short to medium term).
> >
> > That said, I feel the P40 isn't getting much recognition either on the
> > mailing list or in the "bang for your buck" papers, so my question boils
> > down to: is there any reason we should prefer the P100 over the P40?
> >
> > Thanks for your help!
> >
> > Matteo

[gmx-users] Position Restraints MD

2019-09-20 Thread ISHRAT JAHAN
Dear all,
I am running an MD simulation of a protein and a metal atom, and I applied
position restraints to the metal atom as follows (kindly correct me if I am
wrong):
I first made an index group for the metal atom and generated posre_ion.itp
using gmx genrestr, then added the force constants and atom index to the
protein's top file in the [ position_restraints ] section. In full.mdp I
added the line define = -DPOSRES.
Then I performed the full MD run. When I calculated the RMSD of the backbone,
its value was around 0.02 nm (very small). I do not know whether the
restraints were applied to the metal only or to the whole system. Am I doing
this correctly?
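For reference, my understanding of the standard pattern is roughly this (a
sketch; the atom number and force constants are placeholders, and I am
assuming the ion is its own moleculetype so that the restraint applies only
to it):

; posre_ion.itp, as written by gmx genrestr (atom numbering is local to the
; moleculetype that includes this file)
[ position_restraints ]
;  ai   funct   fcx     fcy     fcz
    1     1     1000    1000    1000

; in the .top file, at the end of the ion's moleculetype definition
; (before the next [ moleculetype ] or [ system ] directive)
#ifdef POSRES
#include "posre_ion.itp"
#endif

; in full.mdp
define = -DPOSRES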
Any help related to this will be appreciated.
Thanks in advance


-- 
Ishrat Jahan
Research Scholar
Department Of Chemistry
A.M.U Aligarh


[gmx-users] question on system blow-up

2019-09-20 Thread Lei Qian
Dear gmx-users,

Could I ask a question about a system blow-up? Thanks!
I finished EM, NVT and NPT equilibration, but when I ran a 10 ns production
run (NPT), the system blew up at around 1 ns ("atoms involved moved further
apart than the multi-body cut-off distance").

When I checked the trajectory files (nvt.trr, npt.trr), I found distorted
water molecules (very long H-O bonds), although the protein looks OK in these
trajectories. I guess those distorted water molecules are blowing up and may
gradually affect the protein?

I tried changing the production run from NPT to NVT (pcoupl = no), changing
the integrator from md to sd, and setting tau_t to 2.0 2.0. However, the
following error always shows up:
"1 of the 20067 bonded interactions could not be calculated because some
atoms involved moved further apart than the multi-body cut-off distance
(1.18213 nm) or the two-body cut-off distance (1.241 nm)"

Thank you!