[gmx-users] Z shell autocompletion

2020-02-27 Thread Floris van Eerden
Dear Gromacs community,

About a year ago I switched from Bash to Zsh as my default shell. Although I
have been content with the switch, for some reason GROMACS auto-completion
does not work properly anymore. The problem occurs on both macOS and CentOS 7,
and I have checked multiple versions (2018.1 and 2019.3). The issue has been
nicely described and filed as a bug in Redmine. It is of course not a major
issue, but I would be glad if anybody knows a solution or workaround for this
minor annoyance.
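
One workaround sketch, assuming the GROMACS installation placed its bash
completion files (gmx-completion*.bash) next to the gmx binary, as recent
versions do, is to load them through zsh's bash-compatibility layer in
~/.zshrc (the install prefix is illustrative):

    autoload -U +X compinit && compinit
    autoload -U +X bashcompinit && bashcompinit
    # point this at your GROMACS bin directory
    source /usr/local/gromacs/bin/gmx-completion.bash
    for f in /usr/local/gromacs/bin/gmx-completion-*.bash; do
        source "$f"
    done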

Thanks for your help and suggestions,

Best,

Floris van Eerden


Re: [gmx-users] Compiling with OpenCL for Macbook AMD Radeon Pro 560 GPU

2020-02-27 Thread Oliver Dutton
I am sorry. I thought failing 'make check' meant it would fail on install.

It works perfectly.

Thank you,
Oliver
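
For anyone who lands on this thread with the same symptom: the failure was
confined to 'make check' in the build tree. A sketch of the install-then-run
route (prefix and job count illustrative; the GMX_USE_OPENCL flag applies to
the 2019/2020 series):

    cd build
    cmake .. -DGMX_GPU=ON -DGMX_USE_OPENCL=ON \
             -DCMAKE_INSTALL_PREFIX=$HOME/gromacs
    make -j8 && make install
    source $HOME/gromacs/bin/GMXRC   # puts the installed gmx on PATH
    gmx --version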

> On 18 Feb 2020, at 12:10, Szilárd Páll  wrote:
> 
> Hi Oliver,
> 
> Does this affect an installation of GROMACS? In previous reports we have
> observed that the issue is only present when running "make check" in the
> build tree, but not in the case of an installed version.
> 
> Cheers,
> --
> Szilárd
> 
> 
> On Mon, Feb 17, 2020 at 7:58 PM Oliver Dutton  wrote:
> 
>> Hello,
>> 
>> I am trying to do the exact same as Michael in
>> https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2019-February/124394.html
>> but hit the exact same error of it not finding a simple header file. I've
>> tried building GROMACS 2019.5 and 2020 on a MacBook Pro with an AMD Radeon
>> Pro 560 GPU.
>> 
>> I'm using the built-in Apple compiler with the same flags and CMake options
>> as Michael. Did this ever get working?
>> 
>> Kind regards,
>> Oliver
>> 

Re: [gmx-users] Analyzing Hydrogen Bonding Interactions using Gromacs

2020-02-27 Thread Chenyu Liu
This is what I mean by detailed information:
vi hbonds-details.dat
Found 360 hbonds.
donor              acceptor           occupancy
LYS21-Side-NZ      GLU24-Side-OE2     17.31%
VAL15-Main-N       GLU17-Side-OE1     40.38%
LYS16-Main-N       GLU17-Side-OE1     11.54%
ARG190-Side-NH2    ASP161-Side-OD2    61.54%
LYS23-Main-N       LEU19-Main-O       19.23%
ASP25-Main-N       LYS21-Main-O       21.15%
SER159-Side-OG     TYR156-Main-O      30.77%

This .dat file is generated by VMD's hydrogen-bond analysis extension.

I hope this clarifies my question.



On Thu, Feb 27, 2020 at 1:42 PM Chenyu Liu  wrote:

> Hi gromacs users,
>
> I was trying to analyze the hydrogen-bonding interactions of a protein and
> wanted to obtain a detailed list of the interactions, e.g. that an amine
> hydrogen from a particular Lys residue forms a hydrogen bond with the
> backbone carbonyl oxygen of some other residue. I tried the gmx hbond
> command, but I cannot find an option for this purpose. I can get results
> such as the number of hbond donors and the distance distribution, but not
> detailed information about the individual interactions. I was wondering if
> there is any command to derive this information from the trajectory. Also,
> if I wanted to analyze hydrogen bonds formed by a specific type of
> donor/acceptor (such as a side-chain hydrogen as a donor), what should I
> do to specify that?
>
> I have tried using the VMD software (there is an hbond analysis tool in
> the extensions) to analyze the trajectory, and it worked pretty well for
> shorter trajectories. My trajectory consists of 5000 frames, and when I
> loaded it the program terminated at around 4000 frames. So I think I
> should divide the trajectories into smaller pieces and analyze each piece.
> My question is: is there a way I can write a bash script to automate the
> process, so that I can analyze these shorter trajectories without loading
> them by hand? (Otherwise there will be ~100 shorter trajectories, since I
> have run several parallel long trajectories.) I can write bash scripts for
> all operations involving GROMACS, but not VMD.
>
> I would really appreciate it if you could answer my question, or point out
> some resources which I could refer to. Thanks in advance!
>
> Best regards,
> Chenyu
>
>


[gmx-users] Analyzing Hydrogen Bonding Interactions using Gromacs

2020-02-27 Thread Chenyu Liu
Hi gromacs users,

I was trying to analyze the hydrogen-bonding interactions of a protein and
wanted to obtain a detailed list of the interactions, e.g. that an amine
hydrogen from a particular Lys residue forms a hydrogen bond with the
backbone carbonyl oxygen of some other residue. I tried the gmx hbond
command, but I cannot find an option for this purpose. I can get results
such as the number of hbond donors and the distance distribution, but not
detailed information about the individual interactions. I was wondering if
there is any command to derive this information from the trajectory. Also,
if I wanted to analyze hydrogen bonds formed by a specific type of
donor/acceptor (such as a side-chain hydrogen as a donor), what should I do
to specify that?
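
A partial answer, sketched for a 2018/2019-era gmx hbond (check gmx hbond -h
for your version): besides the counts, the tool can write the
donor-hydrogen-acceptor atom triplets it found to an index file and a
per-bond/per-frame existence matrix, which together identify the specific
interacting pairs. Restricting the analysis to, e.g., side-chain donors can
be done by passing a custom index group built with gmx make_ndx via -n:

    # count hbonds, dump the D-H-A atom triplets, write an existence matrix
    gmx hbond -f traj.xtc -s topol.tpr -num hbnum.xvg \
              -hbn hbidx.ndx -hbm hbmap.xpm

The atom numbers in hbidx.ndx can then be mapped back to residues via the
topology.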

I have tried using the VMD software (there is an hbond analysis tool in the
extensions) to analyze the trajectory, and it worked pretty well for shorter
trajectories. My trajectory consists of 5000 frames, and when I loaded it the
program terminated at around 4000 frames. So I think I should divide the
trajectories into smaller pieces and analyze each piece. My question is: is
there a way I can write a bash script to automate the process, so that I can
analyze these shorter trajectories without loading them by hand? (Otherwise
there will be ~100 shorter trajectories, since I have run several parallel
long trajectories.) I can write bash scripts for all operations involving
GROMACS, but not VMD.
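
On the automation point, VMD can be driven headlessly from the shell, so the
chunks need not be loaded by hand. A sketch — the chunk names, the protein
selection, and the 3.5 Å / 30° criteria passed to VMD's built-in measure
hbonds command are all illustrative assumptions:

    # hbonds.tcl -- per-frame hydrogen-bond lists via 'measure hbonds'
    set struct [lindex $argv 0]
    set traj   [lindex $argv 1]
    mol new $struct
    mol addfile $traj waitfor all
    set sel [atomselect top "protein"]
    set nf [molinfo top get numframes]
    for {set f 0} {$f < $nf} {incr f} {
        $sel frame $f
        # returns lists of donor, acceptor, and hydrogen atom indices
        puts "frame $f: [measure hbonds 3.5 30 $sel]"
    }
    quit

    # run it on every chunk without the GUI (bash)
    for traj in chunk_*.xtc; do
        vmd -dispdev text -e hbonds.tcl -args topol.gro "$traj" \
            > "${traj%.xtc}.hblog"
    done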

I would really appreciate it if you could answer my question, or point out
some resources which I could refer to. Thanks in advance!

Best regards,
Chenyu


Re: [gmx-users] Performance issues with Gromacs 2020 on GPUs - slower than 2019.5

2020-02-27 Thread Szilárd Páll
On Thu, Feb 27, 2020 at 1:08 PM Andreas Baer  wrote:

> Hi,
>
> On 27.02.20 12:34, Szilárd Páll wrote:
> > Hi
> >
> > On Thu, Feb 27, 2020 at 11:31 AM Andreas Baer 
> wrote:
> >
> >> Hi,
> >>
> >> with the link below, additional log files for runs with 1 GPU should be
> >> accessible now.
> >>
> > I meant to ask you to run single-rank GPU runs, i.e. gmx mdrun -ntmpi 1.
> >
> > It would also help if you could share some input files in case further
> > testing is needed.
> Ok, there is now also an additional benchmark with `-ntmpi 1 -ntomp 4
> -bonded gpu -update gpu` as parameters. However, it is run on the same
> machine with SMT disabled.
> With the following link, I provide all the tests I have done on this
> machine so far, along with a summary of the performance for the various
> input parameters (both in `logfiles`), as well as the input files
> (`C60xh.7z`) and the scripts to run them.
>

The link seems to be missing.
--
Szilárd


> I hope this helps. If there is anything else I can do to help, please
> let me know!
> >
> >
> >> Thank you for the comment about rlist; I did not know that this would
> >> affect the performance negatively.
> >
> > It does in multiple ways. First, you are using a rather long list buffer,
> > which makes the nonbonded pair-interaction calculation more
> > computationally expensive than it needs to be, compared with just using a
> > tolerance and letting the buffer be calculated. Secondly, as setting a
> > manual rlist disables the automated Verlet buffer calculation, it
> > prevents mdrun from using a dual pair-list setup (see
> > http://manual.gromacs.org/documentation/2018.1/release-notes/2018/major/features.html#dual-pair-list-buffer-with-dynamic-pruning)
> > which has additional performance benefits.
> Ok, thank you for the explanation!
> >
> > Cheers,
> > --
> > Szilárd
> Cheers,
> Andreas
> >
> >
> >
> >> I know about nstcalcenergy, but
> >> I need it for several of my simulations.
> > Cheers,
> >> Andreas
> >>
> >> On 26.02.20 16:50, Szilárd Páll wrote:
> >>> Hi,
> >>>
> >>> Can you please check the performance when running on a single GPU 2019
> vs
> >>> 2020 with your inputs?
> >>>
> >>> Also note that you are using some peculiar settings that will have an
> >>> adverse effect on performance (like manually set rlist disallowing the
> >> dual
> >>> pair-list setup, and nstcalcenergy=1).
> >>>
> >>> Cheers,
> >>>
> >>> --
> >>> Szilárd
> >>>
> >>>
> >>> On Wed, Feb 26, 2020 at 4:11 PM Andreas Baer 
> >> wrote:
>  Hello,
> 
>  here is a link to the logfiles.
> 
> 
> >>
> https://faubox.rrze.uni-erlangen.de/getlink/fiX8wP1LwSBkHRoykw6ksjqY/benchmarks_2019-2020
>  If necessary, I can also provide some more log or tpr/gro/... files.
> 
>  Cheers,
>  Andreas
> 
> 
>  On 26.02.20 16:09, Paul bauer wrote:
> > Hello,
> >
> > you can't add attachments to the list, please upload the files
> > somewhere to share them.
> > This might be quite important to us, because the performance
> > regression is not expected by us.
> >
> > Cheers
> >
> > Paul
> >
> > On 26/02/2020 15:54, Andreas Baer wrote:
> >> Hello,
> >>
> >> from a set of benchmark tests with large systems using Gromacs
> >> versions 2019.5 and 2020, I obtained some unexpected results:
> >> With the same set of parameters and the 2020 version, I obtain a
> >> performance that is about 2/3 of the 2019.5 version. Interestingly,
> >> according to nvidia-smi, the GPU usage is about 20% higher for the
> >> 2020 version.
> >> Also, from the log files it seems that the 2020 version does the
> >> computations more efficiently but spends so much more time waiting
> >> that the overall performance drops.
> >>
> >> Some background info on the benchmarks:
> >> - System contains about 2.1 million atoms.
> >> - Hardware: 2x Intel Xeon Gold 6134 („Skylake“) @3.2 GHz = 16 cores
> +
> >> SMT; 4x NVIDIA Tesla V100
> >> (similar results with less significant performance drop (~15%)
> on a
> >> different machine: 2 or 4 nodes with each [2x Intel Xeon 2660v2
> („Ivy
> >> Bridge“) @ 2.2GHz = 20 cores + SMT; 2x NVIDIA Kepler K20])
> >> - Several options for -ntmpi, -ntomp, -bonded, -pme were used to find
> >> the optimal set. However, the performance drop seems to persist
> >> for all such options.
> >>
> >> Two representative log files are attached.
> >> Does anyone have an idea, where this drop comes from, and how to
> >> choose the parameters for the 2020 version to circumvent this?
> >>
> >> Regards,
> >> Andreas
> >>
[gmx-users] regressiontests/complex (Failed)

2020-02-27 Thread Navneet Kumar
Hello Everyone!

- While installing GROMACS 2018.8 on Ubuntu with a Tesla C2050 GPU,
- during the step "sudo make check",
- one test (regressiontests/complex) out of 39 failed.
Details from the installation are attached below.


35/39 Test #35: regressiontests/complex ..***Failed   49.02 sec
  :-) GROMACS - gmx mdrun, 2018.8 (-:

GROMACS is written by:
 Emile Apol          Rossen Apostolov     Paul Bauer          Herman J.C. Berendsen
 Par Bjelkmar        Aldert van Buuren    Rudi van Drunen     Anton Feenstra
 Gerrit Groenhof     Aleksei Iupinov      Christoph Junghans  Anca Hamuraru
 Vincent Hindriksen  Dimitrios Karkoulis  Peter Kasson        Jiri Kraus
 Carsten Kutzner     Per Larsson          Justin A. Lemkul    Viveca Lindahl
 Magnus Lundborg     Pieter Meulenhoff    Erik Marklund       Teemu Murtola
 Szilard Pall        Sander Pronk         Roland Schulz       Alexey Shvetsov
 Michael Shirts      Alfons Sijbers       Peter Tieleman      Teemu Virolainen
 Christian Wennberg  Maarten Wolf
and the project leaders:
 Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2017, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:  gmx mdrun, version 2018.8
Executable:   /home/navneet/Downloads/gromacs-2018.8/build/bin/gmx
Data prefix:  /home/navneet/Downloads/gromacs-2018.8 (source tree)
Working dir:  /home/navneet/Downloads/regressiontests-2018.8
Command line:
  gmx mdrun -h


Thanx for Using GROMACS - Have a Nice Day


Abnormal return value for ' gmx mdrun -ntmpi 12  -notunepme >mdrun.out
2>&1' was 1
Retrying mdrun with better settings...
Re-running nbnxn-vdw-potential-switch using CPU-based PME
Re-running nbnxn_pme using CPU-based PME

Abnormal return value for ' gmx mdrun -ntmpi 6  -notunepme >mdrun.out
2>&1' was 1
Retrying mdrun with better settings...
Re-running octahedron using CPU-based PME
Re-running orientation-restraints using CPU-based PME
Re-running pull_geometry_angle using CPU-based PME
Re-running pull_geometry_angle-axis using CPU-based PME
Re-running pull_geometry_dihedral using CPU-based PME
Re-running swap_x using CPU-based PME
FAILED. Check checkforce.out (2 errors) file(s) in swap_y for swap_y
Re-running swap_y using CPU-based PME
FAILED. Check checkforce.out (1 errors) file(s) in swap_z for swap_z
Re-running swap_z using CPU-based PME
FAILED. Check checkforce.out (1 errors) file(s) in tip4p_continue for
tip4p_continue
3 out of 61 complex tests FAILED

  Start 36: regressiontests/kernel
36/39 Test #36: regressiontests/kernel ...   Passed   81.99 sec
  Start 37: regressiontests/freeenergy
37/39 Test #37: regressiontests/freeenergy ...   Passed   11.30 sec
  Start 38: regressiontests/pdb2gmx
38/39 Test #38: regressiontests/pdb2gmx ..   Passed   16.82 sec
  Start 39: regressiontests/rotation
39/39 Test #39: regressiontests/rotation .   Passed5.86 sec

97% tests passed, 1 tests failed out of 39

Label Time Summary:
GTest  =  15.68 sec (33 tests)
IntegrationTest=   8.81 sec (3 tests)
MpiTest=   0.18 sec (3 tests)
UnitTest   =   6.87 sec (30 tests)

Total Test time (real) = 187.17 sec

The following tests FAILED:
35 - regressiontests/complex (Failed)
Errors while running CTest
CMakeFiles/run-ctest-nophys.dir/build.make:57: recipe for target
'CMakeFiles/run-ctest-nophys' failed
make[3]: *** [CMakeFiles/run-ctest-nophys] Error 8
CMakeFiles/Makefile2:1160: recipe for target
'CMakeFiles/run-ctest-nophys.dir/all' failed
make[2]: *** [CMakeFiles/run-ctest-nophys.dir/all] Error 2
CMakeFiles/Makefile2:971: recipe for target 'CMakeFiles/check.dir/rule'
failed
make[1]: *** [CMakeFiles/check.dir/rule] Error 2
Makefile:546: recipe for target 'check' failed
make: *** [check] Error 2
_
 How to solve this error.
Earlier this error also arises while installing GROMACS-2018.4 on the same
system. How to solve this problem.
What artefacts/error one can expect even if we install the gromacs after
this particular test fails.
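
One way to dig further is to inspect the per-test artefacts named in the
output above; a sketch using the paths from this run:

    cd /home/navneet/Downloads/regressiontests-2018.8/complex
    # force-comparison details of the failing tests
    cat swap_y/checkforce.out swap_z/checkforce.out tip4p_continue/checkforce.out
    # the mdrun output of a failing test often shows the underlying problem
    less swap_y/mdrun.out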

Regards
Navneet

Re: [gmx-users] Restart energy minimization with step .pdb files

2020-02-27 Thread Michele Pellegrino
Ok, thank you for the clarification; I thought those files were generated no
matter what.
By the way, I am not trying to restart because the simulation crashed: it's
just that I am running on a cluster and the job exceeded the prescribed time
limit. The minimization itself seems to work fine (at least, that is what I
can see from ener.edr).

Cheers,
Michele


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Justin Lemkul 

Sent: 27 February 2020 13:11
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Restart energy minimization with step .pdb files

On 2/27/20 4:37 AM, Michele Pellegrino wrote:
> Hi,
>
>
> I am trying to restart steepest descent using the .pdb files generated by 
> mdrun. The name of these files has the following pattern:
>
> step#%_n*.pdb
>
> where # and * are integers and % is a character ('a', 'b' or 'c').
>
> I read the documentation, but I can't understand what those files represent.
>
> Could anyone give me a hint?

step*.pdb files are the coordinates in each spatial domain in an attempt
to help you debug what is going on. I have never been able to employ
them in any useful way. All they really indicate is that your mdrun
process is going to crash due to instability.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] Error in ions.tpr generation

2020-02-27 Thread Justin Lemkul




On 2/27/20 2:50 AM, Neha Tiwari wrote:

Dear gmx experts,
The ligand molecule was refined and optimized using a G03 (Gaussian 03)
calculation, and the .itp was generated using the ATB server. Still, I am
getting the following error, copied directly from the terminal, and I am
unable to generate any further output files in GROMACS using this .itp.



$ gmx grompp -f ions.mdp -c solv.gro -p topol.top -o ions.tpr

                  :-) GROMACS - gmx grompp, 2018.1 (-:

[GROMACS author and license banner, as in the previous message, omitted]

GROMACS:      gmx grompp, version 2018.1
Executable:   /usr/bin/gmx
Data prefix:  /usr
Working dir:  /home/ya/Desktop/Neha/fecA/gromos
Command line:
  gmx grompp -f ions.mdp -c solv.gro -p topol.top -o ions.tpr

Ignoring obsolete mdp entry 'title'

NOTE 1 [file ions.mdp]:
  With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
  that with the Verlet scheme, nstlist has no effect on the accuracy of
  your simulation.

Setting the LD random seed to 49113858
Generated 165 of the 1596 non-bonded parameter combinations
Excluding 3 bonded neighbours molecule type 'Protein'
Excluding 3 bonded neighbours molecule type '4JCP'
Excluding 2 bonded neighbours molecule type 'SOL'

NOTE 2 [file topol.top, line 45350]:
  System has non-zero total charge: -14.00
  Total charge should normally be an integer. See
  http://www.gromacs.org/Documentation/Floating_Point_Arithmetic
  for discussion on how close it should be to an integer.

Removing all charge groups because cutoff-scheme=Verlet

ERROR 1 [file topol.top, line 45350]:
  atom O10 (Res 4JCP-1) has mass 0 (state A) / 0 (state B)


Apparently your topology is incorrectly formatted and does not contain 
masses of the atoms. You will need to correct its contents.
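
For reference, a sketch of a well-formed GROMOS-style [ atoms ] entry — the
last column is the mass that grompp reports as zero here (the values shown
are illustrative, not taken from this topology):

    [ atoms ]
    ;  nr  type  resnr  resid  atom  cgnr   charge     mass
        1    OA      1   4JCP   O10     1   -0.548  15.9994

A zero or missing mass in this column, with none defined for the atom type
either, produces exactly this grompp error.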


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] Restart energy minimization with step .pdb files

2020-02-27 Thread Justin Lemkul




On 2/27/20 4:37 AM, Michele Pellegrino wrote:

Hi,


I am trying to restart steepest descent using the .pdb files generated by 
mdrun. The name of these files has the following pattern:

step#%_n*.pdb

where # and * are integers and % is a character ('a', 'b' or 'c').

I read the documentation, but I can't understand what those files represent.

Could anyone give me a hint?


step*.pdb files are the coordinates in each spatial domain in an attempt 
to help you debug what is going on. I have never been able to employ 
them in any useful way. All they really indicate is that your mdrun 
process is going to crash due to instability.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] Performance issues with Gromacs 2020 on GPUs - slower than 2019.5

2020-02-27 Thread Andreas Baer

Hi,

On 27.02.20 12:34, Szilárd Páll wrote:

Hi

On Thu, Feb 27, 2020 at 11:31 AM Andreas Baer  wrote:


Hi,

with the link below, additional log files for runs with 1 GPU should be
accessible now.


I meant to ask you to run single-rank GPU runs, i.e. gmx mdrun -ntmpi 1.

It would also help if you could share some input files in case further
testing is needed.
Ok, there is now also an additional benchmark with `-ntmpi 1 -ntomp 4
-bonded gpu -update gpu` as parameters. However, it is run on the same
machine with SMT disabled.
With the following link, I provide all the tests I have done on this machine
so far, along with a summary of the performance for the various input
parameters (both in `logfiles`), as well as the input files (`C60xh.7z`) and
the scripts to run them.
I hope this helps. If there is anything else I can do to help, please
let me know!




Thank you for the comment about rlist; I did not know that this would
affect the performance negatively.


It does in multiple ways. First, you are using a rather long list buffer,
which makes the nonbonded pair-interaction calculation more computationally
expensive than it needs to be, compared with just using a tolerance and
letting the buffer be calculated. Secondly, as setting a manual rlist disables
the automated Verlet buffer calculation, it prevents mdrun from using a
dual pair-list setup (see
http://manual.gromacs.org/documentation/2018.1/release-notes/2018/major/features.html#dual-pair-list-buffer-with-dynamic-pruning)
which has additional performance benefits.

Ok, thank you for the explanation!


Cheers,
--
Szilárd

Cheers,
Andreas





I know about nstcalcenergy, but
I need it for several of my simulations.

Cheers,

Andreas

On 26.02.20 16:50, Szilárd Páll wrote:

Hi,

Can you please check the performance when running on a single GPU 2019 vs
2020 with your inputs?

Also note that you are using some peculiar settings that will have an
adverse effect on performance (like manually set rlist disallowing the

dual

pair-list setup, and nstcalcenergy=1).

Cheers,

--
Szilárd


On Wed, Feb 26, 2020 at 4:11 PM Andreas Baer 

wrote:

Hello,

here is a link to the logfiles.



https://faubox.rrze.uni-erlangen.de/getlink/fiX8wP1LwSBkHRoykw6ksjqY/benchmarks_2019-2020

If necessary, I can also provide some more log or tpr/gro/... files.

Cheers,
Andreas


On 26.02.20 16:09, Paul bauer wrote:

Hello,

you can't add attachments to the list, please upload the files
somewhere to share them.
This might be quite important to us, because the performance
regression is not expected by us.

Cheers

Paul

On 26/02/2020 15:54, Andreas Baer wrote:

Hello,

from a set of benchmark tests with large systems using Gromacs
versions 2019.5 and 2020, I obtained some unexpected results:
With the same set of parameters and the 2020 version, I obtain a
performance that is about 2/3 of the 2019.5 version. Interestingly,
according to nvidia-smi, the GPU usage is about 20% higher for the
2020 version.
Also, from the log files it seems that the 2020 version does the
computations more efficiently but spends so much more time waiting
that the overall performance drops.

Some background info on the benchmarks:
- System contains about 2.1 million atoms.
- Hardware: 2x Intel Xeon Gold 6134 („Skylake“) @3.2 GHz = 16 cores +
SMT; 4x NVIDIA Tesla V100
(similar results with less significant performance drop (~15%) on a
different machine: 2 or 4 nodes with each [2x Intel Xeon 2660v2 („Ivy
Bridge“) @ 2.2GHz = 20 cores + SMT; 2x NVIDIA Kepler K20])
- Several options for -ntmpi, -ntomp, -bonded, -pme were used to find
the optimal set. However, the performance drop seems to persist
for all such options.

Two representative log files are attached.
Does anyone have an idea, where this drop comes from, and how to
choose the parameters for the 2020 version to circumvent this?

Regards,
Andreas



Re: [gmx-users] Performance issues with Gromacs 2020 on GPUs - slower than 2019.5

2020-02-27 Thread Szilárd Páll
Hi

On Thu, Feb 27, 2020 at 11:31 AM Andreas Baer  wrote:

> Hi,
>
> with the link below, additional log files for runs with 1 GPU should be
> accessible now.
>

I meant to ask you to run single-rank GPU runs, i.e. gmx mdrun -ntmpi 1.

It would also help if you could share some input files in case further
testing is needed.


> Thank you for the comment about rlist; I did not know that this would
> affect the performance negatively.


It does in multiple ways. First, you are using a rather long list buffer,
which makes the nonbonded pair-interaction calculation more computationally
expensive than it needs to be, compared with just using a tolerance and
letting the buffer be calculated. Secondly, as setting a manual rlist disables
the automated Verlet buffer calculation, it prevents mdrun from using a
dual pair-list setup (see
http://manual.gromacs.org/documentation/2018.1/release-notes/2018/major/features.html#dual-pair-list-buffer-with-dynamic-pruning)
which has additional performance benefits.
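
To illustrate the tolerance-based setup (a sketch: 0.005 is the default
verlet-buffer-tolerance, and the cut-off values are placeholders):

    ; let grompp/mdrun size the pair-list buffer automatically
    cutoff-scheme           = Verlet
    verlet-buffer-tolerance = 0.005   ; kJ/mol/ps, the default
    rcoulomb                = 1.2
    rvdw                    = 1.2
    ; no rlist line: setting rlist manually disables the dual pair list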

Cheers,
--
Szilárd



> I know about nstcalcenergy, but
> I need it for several of my simulations.

Cheers,
> Andreas
>
> On 26.02.20 16:50, Szilárd Páll wrote:
> > Hi,
> >
> > Can you please check the performance when running on a single GPU 2019 vs
> > 2020 with your inputs?
> >
> > Also note that you are using some peculiar settings that will have an
> > adverse effect on performance (like manually set rlist disallowing the
> dual
> > pair-list setup, and nstcalcenergy=1).
> >
> > Cheers,
> >
> > --
> > Szilárd
> >
> >
> > On Wed, Feb 26, 2020 at 4:11 PM Andreas Baer 
> wrote:
> >
> >> Hello,
> >>
> >> here is a link to the logfiles.
> >>
> >>
> https://faubox.rrze.uni-erlangen.de/getlink/fiX8wP1LwSBkHRoykw6ksjqY/benchmarks_2019-2020
> >>
> >> If necessary, I can also provide some more log or tpr/gro/... files.
> >>
> >> Cheers,
> >> Andreas
> >>
> >>
> >> On 26.02.20 16:09, Paul bauer wrote:
> >>> Hello,
> >>>
> >>> you can't add attachments to the list, please upload the files
> >>> somewhere to share them.
> >>> This might be quite important to us, because the performance
> >>> regression is not expected by us.
> >>>
> >>> Cheers
> >>>
> >>> Paul
> >>>
> >>> On 26/02/2020 15:54, Andreas Baer wrote:
>  Hello,
> 
>  from a set of benchmark tests with large systems using Gromacs
>  versions 2019.5 and 2020, I obtained some unexpected results:
>  With the same set of parameters and the 2020 version, I obtain a
>  performance that is about 2/3 of the 2019.5 version. Interestingly,
>  according to nvidia-smi, the GPU usage is about 20% higher for the
>  2020 version.
>  Also, from the log files it seems that the 2020 version does the
>  computations more efficiently but spends so much more time waiting
>  that the overall performance drops.
> 
>  Some background info on the benchmarks:
>  - System contains about 2.1 million atoms.
>  - Hardware: 2x Intel Xeon Gold 6134 („Skylake“) @3.2 GHz = 16 cores +
>  SMT; 4x NVIDIA Tesla V100
> (similar results with less significant performance drop (~15%) on a
>  different machine: 2 or 4 nodes with each [2x Intel Xeon 2660v2 („Ivy
>  Bridge“) @ 2.2GHz = 20 cores + SMT; 2x NVIDIA Kepler K20])
>  - Several options for -ntmpi, -ntomp, -bonded, -pme were used to find
>  the optimal set. However, the performance drop seems to persist
>  for all such options.
> 
>  Two representative log files are attached.
>  Does anyone have an idea, where this drop comes from, and how to
>  choose the parameters for the 2020 version to circumvent this?
> 
>  Regards,
>  Andreas
> 

Re: [gmx-users] Performance issues with Gromacs 2020 on GPUs - slower than 2019.5

2020-02-27 Thread Andreas Baer

Hi,

with the link below, additional log files for runs with 1 GPU should be 
accessible now.


Thank you for the comment about rlist; I did not know that this would
affect the performance negatively. I know about nstcalcenergy, but
I need it for several of my simulations.


Cheers,
Andreas

On 26.02.20 16:50, Szilárd Páll wrote:

Hi,

Can you please check the performance when running on a single GPU 2019 vs
2020 with your inputs?

Also note that you are using some peculiar settings that will have an
adverse effect on performance (like manually set rlist disallowing the dual
pair-list setup, and nstcalcenergy=1).

Cheers,

--
Szilárd


On Wed, Feb 26, 2020 at 4:11 PM Andreas Baer  wrote:


Hello,

here is a link to the logfiles.

https://faubox.rrze.uni-erlangen.de/getlink/fiX8wP1LwSBkHRoykw6ksjqY/benchmarks_2019-2020

If necessary, I can also provide some more log or tpr/gro/... files.

Cheers,
Andreas


On 26.02.20 16:09, Paul bauer wrote:

Hello,

you can't add attachments to the list, please upload the files
somewhere to share them.
This might be quite important to us, because the performance
regression is not expected by us.

Cheers

Paul

On 26/02/2020 15:54, Andreas Baer wrote:

Hello,

from a set of benchmark tests with large systems using Gromacs
versions 2019.5 and 2020, I obtained some unexpected results:
With the same set of parameters and the 2020 version, I obtain a
performance that is about 2/3 of the 2019.5 version. Interestingly,
according to nvidia-smi, the GPU usage is about 20% higher for the
2020 version.
Also, from the log files it seems that the 2020 version does the
computations more efficiently but spends so much more time waiting
that the overall performance drops.

Some background info on the benchmarks:
- System contains about 2.1 million atoms.
- Hardware: 2x Intel Xeon Gold 6134 („Skylake“) @3.2 GHz = 16 cores +
SMT; 4x NVIDIA Tesla V100
   (similar results with less significant performance drop (~15%) on a
different machine: 2 or 4 nodes with each [2x Intel Xeon 2660v2 („Ivy
Bridge“) @ 2.2GHz = 20 cores + SMT; 2x NVIDIA Kepler K20])
- Several options for -ntmpi, -ntomp, -bonded, -pme were used to find
the optimal set (a scan of the kind sketched below). However, the
performance drop seems to persist for all such options.
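
Such a scan can be written as a small shell loop; a sketch with illustrative
option values and file names:

    #!/bin/bash
    # try a few offload configurations, one log file per combination
    for ntmpi in 1 4; do
        for bonded in cpu gpu; do
            gmx mdrun -s bench.tpr -ntmpi $ntmpi -ntomp 4 \
                      -bonded $bonded -pme gpu \
                      -nsteps 10000 -resetstep 5000 \
                      -g bench_ntmpi${ntmpi}_bonded${bonded}.log
        done
    done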

Two representative log files are attached.
Does anyone have an idea, where this drop comes from, and how to
choose the parameters for the 2020 version to circumvent this?

Regards,
Andreas



Re: [gmx-users] Restart energy minimization with step .pdb files

2020-02-27 Thread Michele Pellegrino
I am sorry, but I don't fully understand your answer: do you mean the
step_XXX.pdb files are generated when atoms are overlapping?
In any case, I still do not get the meaning of the labels; what do 'a', 'b'
and 'c' stand for?

Thank you for your quick response.

Cheers,
Michele

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Quyen V. Vu 

Sent: 27 February 2020 11:22
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Restart energy minimization with step .pdb files

It usually happens when two or more atoms overlap.
step_XXX.pdb is the lowest-energy state that steepest descent could give you
with your input file.
In my opinion, you should look into the log file of the energy minimization
step, find out which atom experiences the maximum force, visualize your
initial input, and check whether any atom overlaps with that one.
Best,

On Thu, Feb 27, 2020 at 10:37 AM Michele Pellegrino  wrote:

> Hi,
>
>
> I am trying to restart steepest descent using the .pdb files generated by
> mdrun. The name of these files has the following pattern:
>
> step#%_n*.pdb
>
> being # and * integers and % a character ('a', 'b' or 'c').
>
> I read the documentation, but I can't understand what those files
> represent.
>
> Could anyone give me some hint?
>
>
> Cheers,
>
> Michele
>
>
> p.s. I am running GROMACS 2019.5


Re: [gmx-users] Restart energy minimization with step .pdb files

2020-02-27 Thread Quyen V. Vu
It usually happens when two or more atoms overlap.
step_XXX.pdb is the lowest-energy state that steepest descent could give you
with your input file.
In my opinion, you should look into the log file of the energy minimization
step, find out which atom experiences the maximum force, visualize your
initial input, and check whether any atom overlaps with that one.
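
A sketch of that log inspection (the log name is illustrative; mdrun's EM
summary names the atom carrying the maximum force):

    # prints e.g. 'Maximum force  =  2.5438e+04 on atom 678'
    grep -A1 "Maximum force" em.log

The reported atom index can then be inspected in the starting structure for
overlapping neighbours.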
Best,

On Thu, Feb 27, 2020 at 10:37 AM Michele Pellegrino  wrote:

> Hi,
>
>
> I am trying to restart steepest descent using the .pdb files generated by
> mdrun. The name of these files has the following pattern:
>
> step#%_n*.pdb
>
> where # and * are integers and % is a character ('a', 'b' or 'c').
>
> I read the documentation, but I can't understand what those files
> represent.
>
> Could anyone give me a hint?
>
>
> Cheers,
>
> Michele
>
>
> p.s. I am running GROMACS 2019.5


[gmx-users] Restart energy minimization with step .pdb files

2020-02-27 Thread Michele Pellegrino
Hi,


I am trying to restart steepest descent using the .pdb files generated by 
mdrun. The name of these files has the following pattern:

step#%_n*.pdb

where # and * are integers and % is a character ('a', 'b' or 'c').

I read the documentation, but I can't understand what those files represent.

Could anyone give me a hint?


Cheers,

Michele


p.s. I am running GROMACS 2019.5