Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-23 Thread Alex

Hi Kevin,

We've been having issues with Power9/V100 very similar to what Jon 
described and basically settled on what I believe is sub-par 
performance. We tested it on systems with ~30-50K particles and threads 
simply cannot be pinned. As far as Gromacs is concerned, our brand-new 
Power9 nodes operate as if they were based on Intel CPUs (two threads 
per core) and zero advantage of IBM parallelization is being taken. 
Other users of the same nodes reported similar issues with other 
software, which to me suggests that our sysadmins don't really know how 
to set these nodes up.


At this point, if someone could figure out a clear set of build 
instructions in combination with slurm/mdrun inputs, it would be very 
much appreciated.
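
Something along these lines is what I mean by slurm/mdrun inputs, only as a
sketch (the module name is taken from Jon's build procedure below; the job
name, .tpr file, thread count and pin stride are placeholders that would need
tuning per node):

#!/bin/bash
#SBATCH --job-name=gmx-test         # placeholder name
#SBATCH --nodes=1
#SBATCH --ntasks=1                  # one thread-MPI rank
#SBATCH --cpus-per-task=4           # OpenMP threads for that rank
#SBATCH --gres=gpu:1                # one V100
#SBATCH --time=01:00:00

module purge
module load cudatoolkit/10.2        # assumed module name, as in the build log below

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

gmx mdrun -s md.tpr -ntmpi 1 -ntomp $SLURM_CPUS_PER_TASK \
          -nb gpu -pme gpu -pin on -pinoffset 0 -pinstride 4
# -pinstride 4 is an assumption: one software thread per physical core,
# since these POWER9 nodes expose 4 hardware threads per core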


Alex

On 4/23/2020 9:37 PM, Kevin Boyd wrote:

I'm not entirely sure how thread-pinning plays with slurm allocations on
partial nodes. I always reserve the entire node when I use thread pinning,
and run a bunch of simulations by pinning to different cores manually,
rather than relying on slurm to divvy up resources for multiple jobs.

Looking at both logs now, a few more points

* Your benchmarks are short enough that little things like cores ramping up
their clock frequencies can matter. I suggest running longer (increase nsteps
in the mdp or at the command line), and throwing away your initial benchmark
data (see -resetstep and -resethway) to avoid artifacts
* Your benchmark system is quite small for such a powerful GPU. I might
expect better performance running multiple simulations per-GPU if the
workflows being run can rely on replicates, and a larger system would
probably scale better to the V100.
* The P100/intel system appears to have pinned cores properly, it's
unclear whether it had a real impact on these benchmarks
* It looks like the CPU-based computations were the primary contributors to
the observed difference in performance. That should decrease or go away
with increased core counts and shifting the update phase to the GPU. It may
be (I have no prior experience to indicate either way) that the Intel cores
are simply better on a 1-to-1 basis than the Power cores. If you have 4-8
cores per simulation (try -ntomp 4 and increasing the allocation of your
slurm job), the individual core performance shouldn't matter too much;
right now you are almost certainly bottlenecked on the single CPU core per
GPU, which can emphasize per-core performance differences.

Kevin

On Thu, Apr 23, 2020 at 6:43 PM Jonathan D. Halverson <
halver...@princeton.edu> wrote:




Hi Kevin,

md.log for the Intel run is here:

https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.intel-broadwell-P100

Thanks for the info on constraints with 2020. I'll try some runs with
different values of -pinoffset for 2019.6.

I know a group at NIST is having the same or similar problems with
POWER9/V100.

Jon

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Kevin
Boyd 
Sent: Thursday, April 23, 2020 9:08 PM
To: gmx-us...@gromacs.org 
Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

Hi,

Can you post the full log for the Intel system? I typically find the real
cycle and time accounting section a better place to start debugging
performance issues.

A couple quick notes, but need a side-by-side comparison for more useful
analysis, and these points may apply to both systems so may not be your
root cause:
* At first glance, your Power system spends 1/3 of its time in constraint
calculation, which is unusual. This can be reduced 2 ways - first, by
adding more CPU cores. It doesn't make a ton of sense to benchmark on one
core if your applications will use more. Second, if you upgrade to Gromacs
2020 you can probably put the constraint calculation on the GPU with
-update GPU.
* The Power system log has this line:

https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log#L304
indicating
that threads perhaps were not actually pinned. Try adding -pinoffset 0 (or
some other core) to specify where you want the process pinned.

Kevin

On Thu, Apr 23, 2020 at 9:40 AM Jonathan D. Halverson <
halver...@princeton.edu> wrote:




We are finding that GROMACS (2018.x, 2019.x, 2020.x) performs worse on an
IBM POWER9/V100 node versus an Intel Broadwell/P100. Both are running RHEL
7.7 and Slurm 19.05.5. We have no concerns about GROMACS on our Intel
nodes. Everything below is about the POWER9/V100 node.

We ran the RNASE benchmark with 2019.6 with PME and cubic box using 1
CPU-core and 1 GPU (
ftp://ftp.gromacs.org/pub/benchmarks/rnase_bench_systems.tar.gz) and
found that the Broadwell/P100 gives 144 ns/day while POWER9/V100 gives 102
ns/day. The difference in performance is roughly the same for the larger
ADH benchmark and when different numbers of CPU-cores are used. GROMACS is
always underperforming on our POWER9/V100 nodes.

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-23 Thread Kevin Boyd
I'm not entirely sure how thread-pinning plays with slurm allocations on
partial nodes. I always reserve the entire node when I use thread pinning,
and run a bunch of simulations by pinning to different cores manually,
rather than relying on slurm to divvy up resources for multiple jobs.

Looking at both logs now, a few more points

* Your benchmarks are short enough that little things like cores ramping up
their clock frequencies can matter. I suggest running longer (increase nsteps
in the mdp or at the command line), and throwing away your initial benchmark
data (see -resetstep and -resethway) to avoid artifacts
* Your benchmark system is quite small for such a powerful GPU. I might
expect better performance running multiple simulations per-GPU if the
workflows being run can rely on replicates, and a larger system would
probably scale better to the V100.
* The P100/intel system appears to have pinned cores properly, it's
unclear whether it had a real impact on these benchmarks
* It looks like the CPU-based computations were the primary contributors to
the observed difference in performance. That should decrease or go away
with increased core counts and shifting the update phase to the GPU. It may
be (I have no prior experience to indicate either way) that the Intel cores
are simply better on a 1-to-1 basis than the Power cores. If you have 4-8
cores per simulation (try -ntomp 4 and increasing the allocation of your
slurm job), the individual core performance shouldn't matter too much;
right now you are almost certainly bottlenecked on the single CPU core per
GPU, which can emphasize per-core performance differences (see the example
command below).
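
As a concrete illustration of the first and last points, only a sketch (the
.tpr name, step counts and core count are placeholders to adapt), a longer
benchmark that discards the first half of the run when timing could be
launched as:

gmx mdrun -s bench.tpr -nsteps 100000 -resethway \
          -ntmpi 1 -ntomp 4 -pin on -pinoffset 0 \
          -nb gpu -pme gpu

Here -nsteps overrides the value in the .tpr, and -resethway resets the
performance counters halfway through the run (-resetstep N resets them at a
specific step instead).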

Kevin

On Thu, Apr 23, 2020 at 6:43 PM Jonathan D. Halverson <
halver...@princeton.edu> wrote:

>
>
> Hi Kevin,
>
> md.log for the Intel run is here:
>
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.intel-broadwell-P100
>
> Thanks for the info on constraints with 2020. I'll try some runs with
> different values of -pinoffset for 2019.6.
>
> I know a group at NIST is having the same or similar problems with
> POWER9/V100.
>
> Jon
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Kevin
> Boyd 
> Sent: Thursday, April 23, 2020 9:08 PM
> To: gmx-us...@gromacs.org 
> Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node
>
> Hi,
>
> Can you post the full log for the Intel system? I typically find the real
> cycle and time accounting section a better place to start debugging
> performance issues.
>
> A couple quick notes, but need a side-by-side comparison for more useful
> analysis, and these points may apply to both systems so may not be your
> root cause:
> * At first glance, your Power system spends 1/3 of its time in constraint
> calculation, which is unusual. This can be reduced 2 ways - first, by
> adding more CPU cores. It doesn't make a ton of sense to benchmark on one
> core if your applications will use more. Second, if you upgrade to Gromacs
> 2020 you can probably put the constraint calculation on the GPU with
> -update GPU.
> * The Power system log has this line:
>
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log#L304
> indicating
> that threads perhaps were not actually pinned. Try adding -pinoffset 0 (or
> some other core) to specify where you want the process pinned.
>
> Kevin
>
> On Thu, Apr 23, 2020 at 9:40 AM Jonathan D. Halverson <
> halver...@princeton.edu> wrote:
>
> >
> >
> > We are finding that GROMACS (2018.x, 2019.x, 2020.x) performs worse on an
> > IBM POWER9/V100 node versus an Intel Broadwell/P100. Both are running RHEL
> > 7.7 and Slurm 19.05.5. We have no concerns about GROMACS on our Intel
> > nodes. Everything below is about the POWER9/V100 node.
> >
> > We ran the RNASE benchmark with 2019.6 with PME and cubic box using 1
> > CPU-core and 1 GPU (
> > ftp://ftp.gromacs.org/pub/benchmarks/rnase_bench_systems.tar.gz) and
> > found that the Broadwell/P100 gives 144 ns/day while POWER9/V100 gives 102
> > ns/day. The difference in performance is roughly the same for the larger
> > ADH benchmark and when different numbers of CPU-cores are used. GROMACS is
> > always underperforming on our POWER9/V100 nodes. We have pinning turned on
> > (see Slurm script at bottom).
> >
> > Below is our build procedure on the POWER9/V100 node:
> >
> > version_gmx=2019.6
> > wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-${version_gmx}.tar.gz
> > tar zxvf gromacs-${version_gmx}.tar.gz
> > cd gromacs-${version_gmx}
> > mkdir build && cd build
> >
> > module purge
> > module load rh/devtoolset/7
> > module load cudatoolkit/10.2
> >
> > OPTFLAGS="-Ofast -mcpu=power9 -mtune=power9 -mvsx -DNDEBUG"
> >
> > cmake3 .. -DCMAKE_BUILD_TYPE=Release \
> > -DCMAKE_C_COMPILER=gcc -DCMAKE_C_FLAGS_RELEASE="$OPTFLAGS" \
> > -DCMAKE_CXX_COMPILER=g++ 

Re: [gmx-users] Regarding use of harmonic wall model

2020-04-23 Thread Shashank Ranjan Srivastava
Thank you so much. I will work on it.

Thank you

On Tue, 21 Apr 2020, 18:55 Justin Lemkul,  wrote:

>
>
> On 4/21/20 8:07 AM, Shashank Ranjan Srivastava wrote:
> > Thank you so much Prof. Lemkul for your reply.
> > As I don't know what the pull code is or how to use it, any guidance on
> > what you have suggested would be of great help to me.
> > Meanwhile I will read more about it.
>
> General principles here:
> http://www.mdtutorials.com/gmx/umbrella/index.html
>
> Obviously that's a very different approach from what you're trying to do
> but you can learn the basic syntax there.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


[gmx-users] Periodic boundary conditions during the simulation

2020-04-23 Thread Mohamed Abdelaal
Hello everybody,

I know that due to periodic boundary conditions molecules can move from one
side of the box to the other and appear outside the box. I also know
how to use trjconv to fix this, and I usually do that step at the
end of the simulation. However, I have noticed that after the energy
minimization some molecules that were position restrained at the bottom
of the box have moved to the top of the box. I could tell that I have this
periodic boundary condition problem not just from visualizing the
structure after the energy minimization, but also from the atom
coordinates in the .gro file after the energy minimization: the z
coordinate of some atoms changed from bottom to top, as below:

before energy minimization: (the z coordinate was always zero)

    1GRM     C1    1   0.061   0.071   0.000
    1GRM     C2    2   0.184   0.142   0.000
    1GRM     C3    3   0.184   0.284   0.000
    1GRM     C4    4   0.061   0.355   0.000
    2GRM     C1    5   0.061   0.497   0.000


after energy minimization: (the z coordinate of some atoms has changed
to 14, which is at the top of the box)

    1GRM     C1    1   0.061   0.071  14.000
    1GRM     C2    2   0.184   0.142  14.000
    1GRM     C3    3   0.184   0.284   0.000
    1GRM     C4    4   0.061   0.355   0.000
    2GRM     C1    5   0.061   0.497  14.000



Do I need to solve this PBC problem between the different steps (energy
min, NVT, NPT, production run), or is it okay to continue my simulation
(even if the molecules have moved) and solve this problem at the end?
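
For reference, the end-of-run trjconv step I am referring to is along these
lines, just as a sketch (the file names are placeholders):

gmx trjconv -s md.tpr -f md.xtc -o md_noPBC.xtc -pbc mol -ur compact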

Many thanks,
Mohamed


Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-23 Thread Jonathan D. Halverson
Hi Kevin,

md.log for the Intel run is here:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log.intel-broadwell-P100

Thanks for the info on constraints with 2020. I'll try some runs with different 
values of -pinoffset for 2019.6.

I know a group at NIST is having the same or similar problems with POWER9/V100.

Jon

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Kevin Boyd 

Sent: Thursday, April 23, 2020 9:08 PM
To: gmx-us...@gromacs.org 
Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

Hi,

Can you post the full log for the Intel system? I typically find the real
cycle and time accounting section a better place to start debugging
performance issues.

A couple quick notes, but need a side-by-side comparison for more useful
analysis, and these points may apply to both systems so may not be your
root cause:
* At first glance, your Power system spends 1/3 of its time in constraint
calculation, which is unusual. This can be reduced 2 ways - first, by
adding more CPU cores. It doesn't make a ton of sense to benchmark on one
core if your applications will use more. Second, if you upgrade to Gromacs
2020 you can probably put the constraint calculation on the GPU with
-update GPU.
* The Power system log has this line:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log#L304
indicating
that threads perhaps were not actually pinned. Try adding -pinoffset 0 (or
some other core) to specify where you want the process pinned.

Kevin

On Thu, Apr 23, 2020 at 9:40 AM Jonathan D. Halverson <
halver...@princeton.edu> wrote:

>
>
> We are finding that GROMACS (2018.x, 2019.x, 2020.x) performs worse on an
> IBM POWER9/V100 node versus an Intel Broadwell/P100. Both are running RHEL
> 7.7 and Slurm 19.05.5. We have no concerns about GROMACS on our Intel
> nodes. Everything below is about the POWER9/V100 node.
>
> We ran the RNASE benchmark with 2019.6 with PME and cubic box using 1
> CPU-core and 1 GPU (
> ftp://ftp.gromacs.org/pub/benchmarks/rnase_bench_systems.tar.gz) and
> found that the Broadwell/P100 gives 144 ns/day while POWER9/V100 gives 102
> ns/day. The difference in performance is roughly the same for the larger
> ADH benchmark and when different numbers of CPU-cores are used. GROMACS is
> always underperforming on our POWER9/V100 nodes. We have pinning turned on
> (see Slurm script at bottom).
>
> Below is our build procedure on the POWER9/V100 node:
>
> version_gmx=2019.6
> wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-${version_gmx}.tar.gz
> tar zxvf gromacs-${version_gmx}.tar.gz
> cd gromacs-${version_gmx}
> mkdir build && cd build
>
> module purge
> module load rh/devtoolset/7
> module load cudatoolkit/10.2
>
> OPTFLAGS="-Ofast -mcpu=power9 -mtune=power9 -mvsx -DNDEBUG"
>
> cmake3 .. -DCMAKE_BUILD_TYPE=Release \
> -DCMAKE_C_COMPILER=gcc -DCMAKE_C_FLAGS_RELEASE="$OPTFLAGS" \
> -DCMAKE_CXX_COMPILER=g++ -DCMAKE_CXX_FLAGS_RELEASE="$OPTFLAGS" \
> -DGMX_BUILD_MDRUN_ONLY=OFF -DGMX_MPI=OFF -DGMX_OPENMP=ON \
> -DGMX_SIMD=IBM_VSX -DGMX_DOUBLE=OFF \
> -DGMX_BUILD_OWN_FFTW=ON \
> -DGMX_GPU=ON -DGMX_CUDA_TARGET_SM=70 \
> -DGMX_OPENMP_MAX_THREADS=128 \
> -DCMAKE_INSTALL_PREFIX=$HOME/.local \
> -DGMX_COOL_QUOTES=OFF -DREGRESSIONTEST_DOWNLOAD=ON
>
> make -j 10
> make check
> make install
>
> 45 of the 46 tests pass with the exception being HardwareUnitTests. There
> are several posts about this and apparently it is not a concern. The full
> build log is here:
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/build.log
>
>
>
> Here is more info about our POWER9/V100 node:
>
> $ lscpu
> Architecture:  ppc64le
> Byte Order:Little Endian
> CPU(s):128
> On-line CPU(s) list:   0-127
> Thread(s) per core:4
> Core(s) per socket:16
> Socket(s): 2
> NUMA node(s):  6
> Model: 2.3 (pvr 004e 1203)
> Model name:POWER9, altivec supported
> CPU max MHz:   3800.
> CPU min MHz:   2300.
>
> You see that we have 4 hardware threads per physical core. If we use 4
> hardware threads on the RNASE benchmark instead of 1 the performance goes
> to 119 ns/day which is still about 20% less than the Broadwell/P100 value.
> When using multiple CPU-cores on the POWER9/V100 there is significant
> variation in the execution time of the code.
>
> There are four GPUs per POWER9/V100 node:
>
> $ nvidia-smi -q
> Driver Version  : 440.33.01
> CUDA Version: 10.2
> GPU 0004:04:00.0
> Product Name: Tesla V100-SXM2-32GB
>
> The GPUs have been shown to perform as expected on other applications.
>
>
>
>
> The following lines are found in md.log for the POWER9/V100 run:
>
> Overriding thread affinity set outside gmx mdrun
> Pinning threads with an auto-selected logical core stride of 128
> NOTE: Thread affinity was not set.

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-23 Thread Kevin Boyd
Hi,

Can you post the full log for the Intel system? I typically find the real
cycle and time accounting section a better place to start debugging
performance issues.

A couple quick notes, but need a side-by-side comparison for more useful
analysis, and these points may apply to both systems so may not be your
root cause:
* At first glance, your Power system spends 1/3 of its time in constraint
calculation, which is unusual. This can be reduced 2 ways - first, by
adding more CPU cores. It doesn't make a ton of sense to benchmark on one
core if your applications will use more. Second, if you upgrade to Gromacs
2020 you can probably put the constraint calculation on the GPU with
-update GPU.
* The Power system log has this line:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log#L304
indicating
that threads perhaps were not actually pinned. Try adding -pinoffset 0 (or
some other core) to specify where you want the process pinned.
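
As an example of both suggestions, just a sketch (the .tpr name and core
count are placeholders), a GROMACS 2020 run with the update/constraints
offloaded and an explicit pin offset might look like:

gmx mdrun -s md.tpr -ntmpi 1 -ntomp 4 \
          -nb gpu -pme gpu -bonded gpu -update gpu \
          -pin on -pinoffset 0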

Kevin

On Thu, Apr 23, 2020 at 9:40 AM Jonathan D. Halverson <
halver...@princeton.edu> wrote:

>
>
> We are finding that GROMACS (2018.x, 2019.x, 2020.x) performs worse on an
> IBM POWER9/V100 node versus an Intel Broadwell/P100. Both are running RHEL
> 7.7 and Slurm 19.05.5. We have no concerns about GROMACS on our Intel
> nodes. Everything below is about the POWER9/V100 node.
>
> We ran the RNASE benchmark with 2019.6 with PME and cubic box using 1
> CPU-core and 1 GPU (
> ftp://ftp.gromacs.org/pub/benchmarks/rnase_bench_systems.tar.gz) and
> found that the Broadwell/P100 gives 144 ns/day while POWER9/V100 gives 102
> ns/day. The difference in performance is roughly the same for the larger
> ADH benchmark and when different numbers of CPU-cores are used. GROMACS is
> always underperforming on our POWER9/V100 nodes. We have pinning turned on
> (see Slurm script at bottom).
>
> Below is our build procedure on the POWER9/V100 node:
>
> version_gmx=2019.6
> wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-${version_gmx}.tar.gz
> tar zxvf gromacs-${version_gmx}.tar.gz
> cd gromacs-${version_gmx}
> mkdir build && cd build
>
> module purge
> module load rh/devtoolset/7
> module load cudatoolkit/10.2
>
> OPTFLAGS="-Ofast -mcpu=power9 -mtune=power9 -mvsx -DNDEBUG"
>
> cmake3 .. -DCMAKE_BUILD_TYPE=Release \
> -DCMAKE_C_COMPILER=gcc -DCMAKE_C_FLAGS_RELEASE="$OPTFLAGS" \
> -DCMAKE_CXX_COMPILER=g++ -DCMAKE_CXX_FLAGS_RELEASE="$OPTFLAGS" \
> -DGMX_BUILD_MDRUN_ONLY=OFF -DGMX_MPI=OFF -DGMX_OPENMP=ON \
> -DGMX_SIMD=IBM_VSX -DGMX_DOUBLE=OFF \
> -DGMX_BUILD_OWN_FFTW=ON \
> -DGMX_GPU=ON -DGMX_CUDA_TARGET_SM=70 \
> -DGMX_OPENMP_MAX_THREADS=128 \
> -DCMAKE_INSTALL_PREFIX=$HOME/.local \
> -DGMX_COOL_QUOTES=OFF -DREGRESSIONTEST_DOWNLOAD=ON
>
> make -j 10
> make check
> make install
>
> 45 of the 46 tests pass with the exception being HardwareUnitTests. There
> are several posts about this and apparently it is not a concern. The full
> build log is here:
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/build.log
>
>
>
> Here is more info about our POWER9/V100 node:
>
> $ lscpu
> Architecture:  ppc64le
> Byte Order:Little Endian
> CPU(s):128
> On-line CPU(s) list:   0-127
> Thread(s) per core:4
> Core(s) per socket:16
> Socket(s): 2
> NUMA node(s):  6
> Model: 2.3 (pvr 004e 1203)
> Model name:POWER9, altivec supported
> CPU max MHz:   3800.
> CPU min MHz:   2300.
>
> You see that we have 4 hardware threads per physical core. If we use 4
> hardware threads on the RNASE benchmark instead of 1 the performance goes
> to 119 ns/day which is still about 20% less than the Broadwell/P100 value.
> When using multiple CPU-cores on the POWER9/V100 there is significant
> variation in the execution time of the code.
>
> There are four GPUs per POWER9/V100 node:
>
> $ nvidia-smi -q
> Driver Version  : 440.33.01
> CUDA Version: 10.2
> GPU 0004:04:00.0
> Product Name: Tesla V100-SXM2-32GB
>
> The GPUs have been shown to perform as expected on other applications.
>
>
>
>
> The following lines are found in md.log for the POWER9/V100 run:
>
> Overriding thread affinity set outside gmx mdrun
> Pinning threads with an auto-selected logical core stride of 128
> NOTE: Thread affinity was not set.
>
> The full md.log is available here:
> https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log
>
>
>
>
> Below are the MegaFlops Accounting for the POWER9/V100 versus
> Broadwell/P100:
>
>  IBM POWER9 WITH NVIDIA V100 
> Computing:   M-Number M-Flops  % Flops
>
> -
 Pair Search distance check          297.763872        2679.875       0.0
 NxN Ewald Elec. + LJ [F]         244214.215808    16118138.243      98.0

[gmx-users] COMPEL question: Channel filter outside membrane, how to orient compartment boundaries

2020-04-23 Thread Erik Henze
Hi,
I am attempting to study permeation events in an ion channel where the
filter is located in the extracellular region, far away from the middle of
the transmembrane regions. My question is two-fold:

1) Where is the optimal place to set up the compartment boundaries?

If I place the compartment boundaries in the middle of the membrane (as
where most channel filters would approximately be) then, given that the
pore of the channel is very large in this region, I will be swapping lots
of ions which aren't actually permeating the channel. This doesn't
necessarily seem like a problem, given that COMPEL can record the
permeation events separately with the cylinder you define. I was concerned,
however, that this constant swapping that COMPEL has to do will make the
simulation very computationally costly/inefficient in some way. This brings
me to my next question:

2) Can you define the cylinder in a way that is independent of the center
of the channel, so that I can place the cylinder in a region centered near
the extracellular region?
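
For reference, the swap/cylinder options I am asking about are the ones in
the computational electrophysiology section of the .mdp; this is only a
sketch with made-up group names and values, just to show which knobs I mean
(the cylinders are tied to the split groups):

swapcoords      = Z          ; swap along the membrane normal
swap-frequency  = 100
split-group0    = channel0   ; compartment boundaries follow these groups' centers
split-group1    = channel1
solvent-group   = SOL
bulk-offsetA    = 0.0        ; offset of the swap layer from the compartment midplane
bulk-offsetB    = 0.0
cyl0-r          = 0.8        ; detection cylinder radius (nm)
cyl0-up         = 1.5        ; cylinder extent above the split group 0 center (nm)
cyl0-down       = 0.5        ; and below (nm)
cyl1-r          = 0.8
cyl1-up         = 1.5
cyl1-down       = 0.5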

Any thoughts/advice on this would be greatly appreciated, thanks!
-Erik H


[gmx-users] Umbrella Sampling with 2 reaction coordinates

2020-04-23 Thread Ernesto Camparolla
Dear gromacs users, I've performed umbrella sampling using 2 reaction
coordinates (of the same geometry and dimensions and which I previously
used for pulling in two phase spaces). Now I am trying to recover the PMF.
Does g_wham support this scheme? Is it correct to obtain a 1D PMF by
supplying g_wham with pullf and pullx files obtained from a 2 reaction
coordinate pulling setup? Thank you.
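
For reference, the kind of invocation I mean is the standard one along these
lines, only a sketch (the two .dat files are placeholders, each listing one
.tpr and one pullf .xvg per umbrella window):

gmx wham -it tpr_files.dat -if pullf_files.dat -o profile.xvg -hist histo.xvg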


Re: [gmx-users] about error running temperature equilibration

2020-04-23 Thread Justin Lemkul




On 4/23/20 3:01 PM, lazaro monteserin wrote:

I am trying to run a temperature equilibration of my system and I get the
following error:

===
NOTE 1 [file nvt.mdp]:
   nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, setting
   nstcomm to nstcalcenergy

NOTE 2 [file nvt.mdp]:
   Center of mass removal not necessary for Andersen.  All velocities of
   coupled groups are rerandomized periodically, so flying ice cube errors
   will not occur.

ERROR 1 [file nvt.mdp]:
   nstcomm must be 1, not 100 for Andersen, as velocities of atoms in
   coupled groups are randomized every time step
===
but I don't even have any values in my nvt.mdp for nstcomm or for the
removal of center-of-mass translation or rotation, so I honestly do
not know what is happening here.

Does anybody have an idea of what is going on?


For any settings not explicitly listed, the defaults are used. See the
manual:


http://manual.gromacs.org/current/user-guide/mdp-options.html
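
In this case the relevant default is nstcomm, which is 100 when not set;
one way to satisfy the error (a minimal sketch) is to request COM removal
every step explicitly:

nstcomm = 1    ; Andersen thermostat requires COM removal every step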

-Justin


These are the parameters for my nvt.mdp:

==
define  = -DPOSRES  ; position restrain for waters
; Run parameters
integrator  = md-vv ; velocity Verlet algorithm
nsteps  = 50000 ; 2 * 50000 = 100 ps
dt  = 0.002 ; 2 fs
; Output control
nstxout = 500   ; save coordinates every 1.0 ps
nstvout = 500   ; save velocities every 1.0 ps
nstenergy   = 500   ; save energies every 1.0 ps
nstlog  = 500   ; update log file every 1.0 ps
; Bond parameters
continuation= no; first dynamics run
constraint_algorithm= lincs ; holonomic constraints
constraints = h-bonds   ; bonds involving H are constrained
lincs_iter  = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Nonbonded settings
cutoff-scheme   = Verlet; Buffered neighbor searching
ns_type = grid  ; search neighboring grid cells
nstlist = 10; 1 fs, largely irrelevant with Verlet
rcoulomb= 333.3 ; short-range electrostatic cutoff (in
nm)
rvdw= 333.3 ; short-range van der Waals cutoff (in
nm)
DispCorr= EnerPres  ; account for cut-off vdW scheme
; Electrostatics
coulombtype = cutoff; cutoff treatment
; Temperature coupling is on
tcoupl  = andersen  ; Andersen thermostat
tc-grps = System; couple system to bath
tau_t   = 0.1   ; time constant, in ps
ref_t   = 300   ; reference temperature, one for each
group, in K
nstcomm = 1
; Pressure coupling is off
pcoupl  = no; no pressure coupling in NVT
; Periodic boundary conditions
pbc = xyz   ; 3-D PBC
; Velocity generation
gen_vel = yes   ; assign velocities from Maxwell
distribution
gen_temp= 300   ; temperature for Maxwell distribution
gen_seed= -1; generate a random seed

=

Kindly, Lazaro


--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] about error running temperature equilibration

2020-04-23 Thread lazaro monteserin
I am trying to run a temperature equilibration of my system and I get the
following error:

===
NOTE 1 [file nvt.mdp]:
  nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, setting
  nstcomm to nstcalcenergy

NOTE 2 [file nvt.mdp]:
  Center of mass removal not necessary for Andersen.  All velocities of
  coupled groups are rerandomized periodically, so flying ice cube errors
  will not occur.

ERROR 1 [file nvt.mdp]:
  nstcomm must be 1, not 100 for Andersen, as velocities of atoms in
  coupled groups are randomized every time step
===
but I don't even have any values in my nvt.mdp for nstcomm or for the
removal of center-of-mass translation or rotation, so I honestly do
not know what is happening here.

Does anybody have an idea of what is going on?

These are the parameters for my nvt.mdp:

==
define  = -DPOSRES  ; position restrain for waters
; Run parameters
integrator  = md-vv ; velocity Verlet algorithm
nsteps  = 50000 ; 2 * 50000 = 100 ps
dt  = 0.002 ; 2 fs
; Output control
nstxout = 500   ; save coordinates every 1.0 ps
nstvout = 500   ; save velocities every 1.0 ps
nstenergy   = 500   ; save energies every 1.0 ps
nstlog  = 500   ; update log file every 1.0 ps
; Bond parameters
continuation= no; first dynamics run
constraint_algorithm= lincs ; holonomic constraints
constraints = h-bonds   ; bonds involving H are constrained
lincs_iter  = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
; Nonbonded settings
cutoff-scheme   = Verlet; Buffered neighbor searching
ns_type = grid  ; search neighboring grid cells
nstlist = 10; 1 fs, largely irrelevant with Verlet
rcoulomb= 333.3 ; short-range electrostatic cutoff (in
nm)
rvdw= 333.3 ; short-range van der Waals cutoff (in
nm)
DispCorr= EnerPres  ; account for cut-off vdW scheme
; Electrostatics
coulombtype = cutoff; cutoff treatment
; Temperature coupling is on
tcoupl  = andersen  ; Andersen thermostat
tc-grps = System; couple system to bath
tau_t   = 0.1   ; time constant, in ps
ref_t   = 300   ; reference temperature, one for each
group, in K
nstcomm = 1
; Pressure coupling is off
pcoupl  = no; no pressure coupling in NVT
; Periodic boundary conditions
pbc = xyz   ; 3-D PBC
; Velocity generation
gen_vel = yes   ; assign velocities from Maxwell
distribution
gen_temp= 300   ; temperature for Maxwell distribution
gen_seed= -1; generate a random seed

=

Kindly, Lazaro


[gmx-users] GROMACS mdp file for doing a single point energy after acpype conversion

2020-04-23 Thread ABEL Stephane
Hi Justin

I obtained the following error with the following command and the mdp mentioned 
below 

gmx mdrun -s 1_OGNG_GLYCAM_SPE_GMX.tpr -rerun 1_OGNG_Amber.pdb

Thank you 

Stéphane

--
Program: gmx mdrun, version 2018.1
Source file: src/programs/mdrun/runner.cpp (line 736)

Fatal error:
The .mdp file specified an energy mininization or normal mode algorithm, and
these are not compatible with mdrun -rerun

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

>> You also shouldn't use a minimizer when doing a zero-point energy. Use the 
>> md integrator.
OK I try your suggestion
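
Something like this minimal sketch is what I will try, keeping the cut-off
settings of my original mdp (pbc = no and zero cut-offs, which as far as I
understand the group scheme treats as no cut-off, as in the Amber run) and
only swapping the integrator; spe.mdp and topol.top are placeholders for my
files, the other names are the ones above:

integrator      = md        ; as suggested, instead of steep
nsteps          = 0
nstlist         = 1
ns_type         = grid
cutoff-scheme   = Group
coulombtype     = Cut-off
rcoulomb        = 0
rvdw            = 0
rlist           = 0
pbc             = no

gmx grompp -f spe.mdp -c 1_OGNG_Amber.pdb -p topol.top -o 1_OGNG_GLYCAM_SPE_GMX.tpr
gmx mdrun -s 1_OGNG_GLYCAM_SPE_GMX.tpr -rerun 1_OGNG_Amber.pdb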

--

And the mdp 

Message: 1
Date: Thu, 23 Apr 2020 06:18:33 -0400
From: Justin Lemkul 
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] GROMACS mdp file for doing a single point
energy after acpype conversion
Message-ID: <28a2dc8e-20e9-abee-4b10-0f3cceb4f...@vt.edu>
Content-Type: text/plain; charset=utf-8; format=flowed



On 4/23/20 5:42 AM, ABEL Stephane wrote:
> Dear all,
>
> I am using acpype to convert a set of glycolipids modeled with the GLYCAM06
> force field into the GROMACS format. acpype works well for this task. But I
> would like to check that the conversion is done correctly by performing single
> point energy (SPE) calculations with the Amber and GROMACS codes and thus
> compute the energy differences for the bonded and non-bonded terms.
>
> For the former test I am using the prmtop and inpcrd files generated with tleap
> and sander with the minimal commands below
>
> | mdin Single point
> 
> imin=0,
> maxcyc=0,
> ntmin=2,
> ntb=0,
> igb=0,
> cut=999
> /
>
> But for GROMACS versions > 5.0 and 2018.x, I did not find the equivalent mdp
> parameters that can be used for doing the same task. I used the minimal
> file below. The bonded energy terms are very similar between the two codes but
> not the non-bonded terms.
>
> integrator  = steep ; Algorithm (steep = steepest descent 
> minimization)
> emtol   = 1000.0; Stop minimization when the maximum force < 
> 1000.0 kJ/mol/nm
> emstep  = 0.01  ; Minimization step size
> nsteps  = 0   ; Maximum number of (minimization) steps to perform 
> (should be 5)
>
> ; Parameters describing how to find the neighbors of each atom and how to 
> calculate the interactions
> nstlist = 1 ; Frequency to update the neighbor list and long 
> range forces
> cutoff-scheme   = Group   ; Buffered neighbor searching
> ns_type = grid  ; Method to determine neighbor list (simple, grid)
> coulombtype = Cut-off   ; Treatment of long range electrostatic 
> interactions
> rcoulomb= 0   ; Short-range electrostatic cut-off
> rvdw= 0   ; Short-range Van der Waals cut-off
> rlist   = 0
> pbc = no   ; P
> continuation = yes
>
> I also notice that a tpr generated with this mdp cannot be used with the
> -rerun argument, so how can I compute an SPE equivalent to Sander?

Why is it incompatible with mdrun -rerun? Do you get an error?

You also shouldn't use a minimizer when doing a zero-point energy. Use
the md integrator.

-Justin


[gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-23 Thread Jonathan D. Halverson
We are finding that GROMACS (2018.x, 2019.x, 2020.x) performs worse on an IBM 
POWER9/V100 node versus an Intel Broadwell/P100. Both are running RHEL 7.7 and 
Slurm 19.05.5. We have no concerns about GROMACS on our Intel nodes. Everything 
below is about the POWER9/V100 node.

We ran the RNASE benchmark with 2019.6 with PME and cubic box using 1 CPU-core 
and 1 GPU (ftp://ftp.gromacs.org/pub/benchmarks/rnase_bench_systems.tar.gz) and 
found that the Broadwell/P100 gives 144 ns/day while POWER9/V100 gives 102 
ns/day. The difference in performance is roughly the same for the larger ADH 
benchmark and when different numbers of CPU-cores are used. GROMACS is always 
underperforming on our POWER9/V100 nodes. We have pinning turned on (see Slurm 
script at bottom).

Below is our build procedure on the POWER9/V100 node:

version_gmx=2019.6
wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-${version_gmx}.tar.gz
tar zxvf gromacs-${version_gmx}.tar.gz
cd gromacs-${version_gmx}
mkdir build && cd build

module purge
module load rh/devtoolset/7
module load cudatoolkit/10.2

OPTFLAGS="-Ofast -mcpu=power9 -mtune=power9 -mvsx -DNDEBUG"

cmake3 .. -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER=gcc -DCMAKE_C_FLAGS_RELEASE="$OPTFLAGS" \
-DCMAKE_CXX_COMPILER=g++ -DCMAKE_CXX_FLAGS_RELEASE="$OPTFLAGS" \
-DGMX_BUILD_MDRUN_ONLY=OFF -DGMX_MPI=OFF -DGMX_OPENMP=ON \
-DGMX_SIMD=IBM_VSX -DGMX_DOUBLE=OFF \
-DGMX_BUILD_OWN_FFTW=ON \
-DGMX_GPU=ON -DGMX_CUDA_TARGET_SM=70 \
-DGMX_OPENMP_MAX_THREADS=128 \
-DCMAKE_INSTALL_PREFIX=$HOME/.local \
-DGMX_COOL_QUOTES=OFF -DREGRESSIONTEST_DOWNLOAD=ON

make -j 10
make check
make install

45 of the 46 tests pass with the exception being HardwareUnitTests. There are 
several posts about this and apparently it is not a concern. The full build log 
is here:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/build.log



Here is more info about our POWER9/V100 node:

$ lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                128
On-line CPU(s) list:   0-127
Thread(s) per core:    4
Core(s) per socket:    16
Socket(s):             2
NUMA node(s):          6
Model:                 2.3 (pvr 004e 1203)
Model name:            POWER9, altivec supported
CPU max MHz:           3800.0000
CPU min MHz:           2300.0000

You see that we have 4 hardware threads per physical core. If we use 4 hardware 
threads on the RNASE benchmark instead of 1 the performance goes to 119 ns/day 
which is still about 20% less than the Broadwell/P100 value. When using 
multiple CPU-cores on the POWER9/V100 there is significant variation in the 
execution time of the code.

There are four GPUs per POWER9/V100 node:

$ nvidia-smi -q
Driver Version  : 440.33.01
CUDA Version: 10.2
GPU 0004:04:00.0
Product Name: Tesla V100-SXM2-32GB

The GPUs have been shown to perform as expected on other applications.




The following lines are found in md.log for the POWER9/V100 run:

Overriding thread affinity set outside gmx mdrun
Pinning threads with an auto-selected logical core stride of 128
NOTE: Thread affinity was not set.

The full md.log is available here:
https://github.com/jdh4/running_gromacs/blob/master/03_benchmarks/md.log




Below are the MegaFlops Accounting for the POWER9/V100 versus Broadwell/P100:

 IBM POWER9 WITH NVIDIA V100 
 Computing:                         M-Number         M-Flops   % Flops
-----------------------------------------------------------------------
 Pair Search distance check       297.763872        2679.875       0.0
 NxN Ewald Elec. + LJ [F]      244214.215808    16118138.243      98.0
 NxN Ewald Elec. + LJ [V]        2483.565760      265741.536       1.6
 1,4 nonbonded interactions        53.415341        4807.381       0.0
 Shift-X                            3.029040          18.174       0.0
 Angles                            37.043704        6223.342       0.0
 Propers                           55.825582       12784.058       0.1
 Impropers                          4.220422         877.848       0.0
 Virial                             2.432585          43.787       0.0
 Stop-CM                            2.452080          24.521       0.0
 Calc-Ekin                         48.128080        1299.458       0.0
 Lincs                             20.536159        1232.170       0.0
 Lincs-Mat                        444.613344        1778.453       0.0
 Constraint-V                     261.192228        2089.538       0.0
 Constraint-Vir                     2.430161          58.324       0.0
 Settle                            73.382008       23702.389       0.1
-----------------------------------------------------------------------
 Total                                          16441499.096     100.0

Re: [gmx-users] Problem C-H bonds of Benzene after minimization

2020-04-23 Thread Paolo Costa
Hi Justin,

indeed there are only six bonds in the topology files! As you said, the
hydrogen nomenclature in the .rtp file is wrong.

Thanks a lot!

Paolo



Il giorno gio 23 apr 2020 alle ore 12:55 Justin Lemkul  ha
scritto:

>
>
> On 4/23/20 6:52 AM, Paolo Costa wrote:
> > Dear Gromacs Users,
> >
> > I am trying to perform MD simulations of benzene molecule in a cube of
> > water just for practicing.
> > By following the tutorial
> > https://www.svedruziclab.com/tutorials/gromacs/2-methane-in-water/, I
> setup
> > the residue Benzene within Amber99 force field. After the minimization
> > however I got the C-H bonds of benzene distorted and unusually stretched.
> > During the grompp procedure I got the following note:
> >
> >   " In moleculetype 'Other' 6 atoms are not bound by a potential or
> >constraint to any other atom in the same moleculetype. Although
> >technically this might not cause issues in a simulation, this often
> means
> >that the user forgot to add a bond/potential/constraint or put
> multiple
> >molecules in the same moleculetype definition by mistake. Run with -v
> to
> >get information for each atom."
> >
> > Could it be related to the problem I am facing?
> >
> > Here the starting .pdb file of benzene:
> > COMPNDBENZENE
> > REMARK1 File created by GaussView 6.0.16
> > HETATM1  C1  C6H6  0   0.968   1.903   0.000
>C
> > HETATM2  C2  C6H6  0   2.363   1.903   0.000
>C
> > HETATM3  C3  C6H6  0   3.060   3.111   0.000
>C
> > HETATM4  C4  C6H6  0   2.363   4.319  -0.001
>C
> > HETATM5  C5  C6H6  0   0.968   4.319  -0.002
>C
> > HETATM6  C6  C6H6  0   0.270   3.111  -0.001
>C
> > HETATM7  H1  C6H6  0   0.418   0.951   0.000
>H
> > HETATM8  H2  C6H6  0   2.912   0.951   0.001
>H
> > HETATM9  H3  C6H6  0   4.160   3.111   0.001
>H
> > HETATM   10  H4  C6H6  0   2.913   5.272  -0.001
>H
> > HETATM   11  H5  C6H6  0   0.418   5.272  -0.003
>H
> > HETATM   12  H6  C6H6  0  -0.829   3.111  -0.001
>H
> > END
> > CONECT1267
> > CONECT2138
> > CONECT3249
> > CONECT435   10
> > CONECT546   11
> > CONECT615   12
> > CONECT71
> > CONECT82
> > CONECT93
> > CONECT   104
> > CONECT   115
> > CONECT   126
> >
> > Here the .rtp file included in Amber99.ff:
> > [ C6H6 ]
> > [ atoms ]
> > C1   CA   -0.1285  1
> > C2   CA   -0.1285  2
> > C3   CA   -0.1285  3
> > C4   CA   -0.1285  4
> > C5   CA   -0.1285  5
> > C6   CA   -0.1285  6
> > H1   HA0.1285  7
> > H2   HA0.1285  8
> > H3   HA0.1285  9
> > H4   HA0.1285  10
> > H5   HA0.1285  11
> > H6   HA0.1285  12
> > [ bonds ]
> > C1 H7
> > C1 C2
> > C1 C6
> > C2 C8
> > C2 C1
> > C2 C3
> > C3 H9
> > C3 C2
> > C3 C4
> > C4 H10
> > C4 C3
> > C4 C5
> > C5 H11
> > C5 C4
> > C5 C6
> > C6 H12
> > C6 C5
> > C6 C1
> >
> > Can somebody help me to figure out such issue?
>
> The bonds in the .rtp file are wrong. The hydrogen nomenclature is
> incorrect so you do not have any C-H bonds in the topology. You can
> verify this for yourself. You probably have 6 bonds instead of 12.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


-- 
Paolo Costa, Ph.D.
Postdoctoral Researcher
Department of Chemistry and Biomolecular Sciences
University of Ottawa
10 Marie Curie, Ottawa, ON K1N 6N5, Canada
Room number: DRO 326 (D'Iorio Hall)


Re: [gmx-users] Problem C-H bonds of Benzene after minimization

2020-04-23 Thread Justin Lemkul




On 4/23/20 6:52 AM, Paolo Costa wrote:

Dear Gromacs Users,

I am trying to perform MD simulations of benzene molecule in a cube of
water just for practicing.
By following the tutorial
https://www.svedruziclab.com/tutorials/gromacs/2-methane-in-water/, I setup
the residue Benzene within Amber99 force field. After the minimization
however I got the C-H bonds of benzene distorted and unusually stretched.
During the grompp procedure I got the following note:

  " In moleculetype 'Other' 6 atoms are not bound by a potential or
   constraint to any other atom in the same moleculetype. Although
   technically this might not cause issues in a simulation, this often means
   that the user forgot to add a bond/potential/constraint or put multiple
   molecules in the same moleculetype definition by mistake. Run with -v to
   get information for each atom."

Could it be related to the problem I am facing?

Here the starting .pdb file of benzene:
COMPND    BENZENE
REMARK  1 File created by GaussView 6.0.16
HETATM    1  C1  C6H6    0       0.968   1.903   0.000                       C
HETATM    2  C2  C6H6    0       2.363   1.903   0.000                       C
HETATM    3  C3  C6H6    0       3.060   3.111   0.000                       C
HETATM    4  C4  C6H6    0       2.363   4.319  -0.001                       C
HETATM    5  C5  C6H6    0       0.968   4.319  -0.002                       C
HETATM    6  C6  C6H6    0       0.270   3.111  -0.001                       C
HETATM    7  H1  C6H6    0       0.418   0.951   0.000                       H
HETATM    8  H2  C6H6    0       2.912   0.951   0.001                       H
HETATM    9  H3  C6H6    0       4.160   3.111   0.001                       H
HETATM   10  H4  C6H6    0       2.913   5.272  -0.001                       H
HETATM   11  H5  C6H6    0       0.418   5.272  -0.003                       H
HETATM   12  H6  C6H6    0      -0.829   3.111  -0.001                       H
END
CONECT    1    2    6    7
CONECT    2    1    3    8
CONECT    3    2    4    9
CONECT    4    3    5   10
CONECT    5    4    6   11
CONECT    6    1    5   12
CONECT    7    1
CONECT    8    2
CONECT    9    3
CONECT   10    4
CONECT   11    5
CONECT   12    6

Here the .rtp file included in Amber99.ff:
[ C6H6 ]
[ atoms ]
C1   CA   -0.1285  1
C2   CA   -0.1285  2
C3   CA   -0.1285  3
C4   CA   -0.1285  4
C5   CA   -0.1285  5
C6   CA   -0.1285  6
H1   HA0.1285  7
H2   HA0.1285  8
H3   HA0.1285  9
H4   HA0.1285  10
H5   HA0.1285  11
H6   HA0.1285  12
[ bonds ]
C1 H7
C1 C2
C1 C6
C2 C8
C2 C1
C2 C3
C3 H9
C3 C2
C3 C4
C4 H10
C4 C3
C4 C5
C5 H11
C5 C4
C5 C6
C6 H12
C6 C5
C6 C1

Can somebody help me to figure out such issue?


The bonds in the .rtp file are wrong. The hydrogen nomenclature is 
incorrect so you do not have any C-H bonds in the topology. You can 
verify this for yourself. You probably have 6 bonds instead of 12.
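
A sketch of a corrected [ bonds ] section, using the hydrogen names actually
defined under [ atoms ] (each bond listed once is enough):

[ bonds ]
  C1 C2
  C2 C3
  C3 C4
  C4 C5
  C5 C6
  C6 C1
  C1 H1
  C2 H2
  C3 H3
  C4 H4
  C5 H5
  C6 H6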


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] Problem C-H bonds of Benzene after minimization

2020-04-23 Thread Paolo Costa
Dear Gromacs Users,

I am trying to perform MD simulations of benzene molecule in a cube of
water just for practicing.
By following the tutorial
https://www.svedruziclab.com/tutorials/gromacs/2-methane-in-water/, I setup
the residue Benzene within Amber99 force field. After the minimization
however I got the C-H bonds of benzene distorted and unusually stretched.
During the grompp procedure I got the following note:

 " In moleculetype 'Other' 6 atoms are not bound by a potential or
  constraint to any other atom in the same moleculetype. Although
  technically this might not cause issues in a simulation, this often means
  that the user forgot to add a bond/potential/constraint or put multiple
  molecules in the same moleculetype definition by mistake. Run with -v to
  get information for each atom."

Could it be related to the problem I am facing?

Here the starting .pdb file of benzene:
COMPND    BENZENE
REMARK  1 File created by GaussView 6.0.16
HETATM    1  C1  C6H6    0       0.968   1.903   0.000                       C
HETATM    2  C2  C6H6    0       2.363   1.903   0.000                       C
HETATM    3  C3  C6H6    0       3.060   3.111   0.000                       C
HETATM    4  C4  C6H6    0       2.363   4.319  -0.001                       C
HETATM    5  C5  C6H6    0       0.968   4.319  -0.002                       C
HETATM    6  C6  C6H6    0       0.270   3.111  -0.001                       C
HETATM    7  H1  C6H6    0       0.418   0.951   0.000                       H
HETATM    8  H2  C6H6    0       2.912   0.951   0.001                       H
HETATM    9  H3  C6H6    0       4.160   3.111   0.001                       H
HETATM   10  H4  C6H6    0       2.913   5.272  -0.001                       H
HETATM   11  H5  C6H6    0       0.418   5.272  -0.003                       H
HETATM   12  H6  C6H6    0      -0.829   3.111  -0.001                       H
END
CONECT    1    2    6    7
CONECT    2    1    3    8
CONECT    3    2    4    9
CONECT    4    3    5   10
CONECT    5    4    6   11
CONECT    6    1    5   12
CONECT    7    1
CONECT    8    2
CONECT    9    3
CONECT   10    4
CONECT   11    5
CONECT   12    6

Here the .rtp file included in Amber99.ff:
[ C6H6 ]
[ atoms ]
C1   CA   -0.1285  1
C2   CA   -0.1285  2
C3   CA   -0.1285  3
C4   CA   -0.1285  4
C5   CA   -0.1285  5
C6   CA   -0.1285  6
H1   HA0.1285  7
H2   HA0.1285  8
H3   HA0.1285  9
H4   HA0.1285  10
H5   HA0.1285  11
H6   HA0.1285  12
[ bonds ]
C1 H7
C1 C2
C1 C6
C2 C8
C2 C1
C2 C3
C3 H9
C3 C2
C3 C4
C4 H10
C4 C3
C4 C5
C5 H11
C5 C4
C5 C6
C6 H12
C6 C5
C6 C1

Can somebody help me to figure out such issue?

Thanks.

Paolo

-- 
Paolo Costa, Ph.D.
Postdoctoral Researcher
Department of Chemistry and Biomolecular Sciences
University of Ottawa
10 Marie Curie, Ottawa, ON K1N 6N5, Canada
Room number: DRO 326 (D'Iorio Hall)


Re: [gmx-users] GROMACS mdp file for doing a single point energy after acpype conversion

2020-04-23 Thread Justin Lemkul




On 4/23/20 5:42 AM, ABEL Stephane wrote:

Dear all,

I am using acpype to convert a set of glycolipids modeled with the GLYCAM06
force field into the GROMACS format. acpype works well for this task. But I
would like to check that the conversion is done correctly by performing single
point energy (SPE) calculations with the Amber and GROMACS codes and thus compute
the energy differences for the bonded and non-bonded terms.

For the former test I am using the prmtop and inpcrd files generated with tleap
and sander with the minimal commands below

| mdin Single point

imin=0,
maxcyc=0,
ntmin=2,
ntb=0,
igb=0,
cut=999
/

But for GROMACS versions > 5.0 and 2018.x, I did not find the equivalent mdp
parameters that can be used for doing the same task. I used the minimal file
below. The bonded energy terms are very similar between the two codes but not the
non-bonded terms.

integrator  = steep ; Algorithm (steep = steepest descent minimization)
emtol   = 1000.0; Stop minimization when the maximum force < 1000.0 
kJ/mol/nm
emstep  = 0.01  ; Minimization step size
nsteps  = 0   ; Maximum number of (minimization) steps to perform 
(should be 5)

; Parameters describing how to find the neighbors of each atom and how to 
calculate the interactions
nstlist = 1 ; Frequency to update the neighbor list and long 
range forces
cutoff-scheme   = Group   ; Buffered neighbor searching
ns_type = grid  ; Method to determine neighbor list (simple, grid)
coulombtype = Cut-off   ; Treatment of long range electrostatic interactions
rcoulomb= 0   ; Short-range electrostatic cut-off
rvdw= 0   ; Short-range Van der Waals cut-off
rlist   = 0
pbc = no   ; P
continuation = yes

I also notice that a tpr generated with this mdp cannot be used with the
-rerun argument, so how can I compute an SPE equivalent to Sander?


Why is it incompatible with mdrun -rerun? Do you get an error?

You also shouldn't use a minimizer when doing a zero-point energy. Use 
the md integrator.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] 回复: 回复: Problem with Potential Mean Force calculation

2020-04-23 Thread Justin Lemkul




On 4/22/20 10:40 AM, Rolly Ng wrote:

Dear Justin and Vu,

I think I have solved part of my problem. The number of tpr/xvg pairs were too 
much in my case. Although I used the script to generate 50 pairs with 0.1nm 
setting, it turns out that only the first 27 pairs works.


What does "only the first 27 pairs work" mean?


./setupUmbrella.py summary_distances.dat 0.1 run-umbrella.sh &> 
caught-output.txt

Please find my summary_distances.dat and caught-output.txt attached.

I also found that wham loops for a very long time if there is a problem with
the tpr/xvg pairs. A normal run lasts only tens of iterations. I have to
check the pairs one by one in order to get a reasonable PMF. I have uploaded
them to RG.

What could be the problem with the tpr/xvg pairs? How can I avoid it the next 
time?


The reaction coordinate you established with these 51 .tpr files is 
consistent with the histograms you showed - sampling roughly between 2 - 
8 nm. The PMF is still basically an impossibility based on these data.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] GROMACS mdp file for doing a single point energy after acpype conversion

2020-04-23 Thread ABEL Stephane
Dear all,

I am using acpype to convert a set of glycolipids modeled with the GLYCAM06
force field into the GROMACS format. acpype works well for this task. But I
would like to check that the conversion is done correctly by performing single
point energy (SPE) calculations with the Amber and GROMACS codes and thus compute
the energy differences for the bonded and non-bonded terms.

For the former test I am using the prmtop and inpcrd files generated with tleap
and sander with the minimal commands below

| mdin Single point

imin=0,
maxcyc=0,
ntmin=2,
ntb=0,
igb=0,
cut=999
/

But for GROMACS versions > 5.0 and 2018.x, I did not find the equivalent mdp
parameters that can be used for doing the same task. I used the minimal file
below. The bonded energy terms are very similar between the two codes but not
the non-bonded terms.

integrator  = steep ; Algorithm (steep = steepest descent minimization)
emtol   = 1000.0; Stop minimization when the maximum force < 1000.0 
kJ/mol/nm
emstep  = 0.01  ; Minimization step size
nsteps  = 0   ; Maximum number of (minimization) steps to perform 
(should be 5)

; Parameters describing how to find the neighbors of each atom and how to 
calculate the interactions
nstlist = 1 ; Frequency to update the neighbor list and long 
range forces
cutoff-scheme   = Group   ; Buffered neighbor searching
ns_type = grid  ; Method to determine neighbor list (simple, grid)
coulombtype = Cut-off   ; Treatment of long range electrostatic interactions
rcoulomb= 0   ; Short-range electrostatic cut-off
rvdw= 0   ; Short-range Van der Waals cut-off
rlist   = 0
pbc = no   ; P
continuation = yes

I also notice that a tpr generated with this mdp cannot be used with the
-rerun argument, so how can I compute an SPE equivalent to Sander?

Thanks in advance for your help and suggestions


Stéphane



[gmx-users] How to report bugs or issues?

2020-04-23 Thread Yu Du


-Original Messages-
From: "Paul bauer" 
Sent Time: 2020-03-25 00:40:13 (Wednesday)
To: gromacs.org_gmx-users@maillist.sys.kth.se, "gmx-annou...@gromacs.org" 

Cc:
Subject: [gmx-users] GROMACS has switched to use Gitlab

Hello gmx users!

I just finished the transition of our project database to use the Gitlab
servers.

This means that from now on issues should be reported using the issue
tracker at https://gitlab.com/gromacs/gromacs/-/issues instead of
redmine.gromacs.org.
The redmine and gerrit servers have been set to be read-only from now on,
so you can still access information there, but you won't be able to
report new things there any more.

Cheers

Paul

--
Paul Bauer, PhD
GROMACS Development Manager
KTH Stockholm, SciLifeLab
0046737308594


Re: [gmx-users] Artifact in pull-pbc-ref-prev-step-com

2020-04-23 Thread Magnus Lundborg

Hi Alex,

pull-group1-pbcatom lets you specify the exact atom used as the PBC 
reference. Both 0 and -1 are special cases. For small molecules 0 is 
(almost?) always OK. Find one in the center of your membrane (in the pull 
direction). I'll actually have to check if -1 is even compatible with 
pull-pbc-ref-prev-step-com. It's possible that that combination should 
not even be allowed.


As you say, you can define a subgroup within the larger membrane group, 
but that is mainly of use if you know that some atoms are consistently in 
the center of the whole group and others are more flexible.
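
For example, something like this sketch (1234 is just a placeholder for the
index of an atom near the centre of the film along the pull direction):

pull-pbc-ref-prev-step-com = yes
pull-group1-pbcatom        = 1234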


Regards,

Magnus

On 2020-04-21 16:24, Alex wrote:

Hi Magnus,
Actually I am confused by the available options for
"pull-pbc-ref-prev-step-com" and "pull-group1-pbcatom".
For pull-pbc-ref-prev-step-com: YES or NO, where YES should be used when
one of the groups is large; even the 2020.1 version of gromacs gives a
warning if one uses No when a large group is present.
Also, for pull-group1-pbcatom there are two options, 0 or -1. With 0
the middle atom (number-wise) of the large group is used automatically,
which is safe only for small groups, as the manual states. So the only
remaining option is -1.
So, as I understand it, for a large layered group similar to what I have one
should use pull-pbc-ref-prev-step-com = YES and pull-group1-pbcatom = -1,
which causes the system to shift along -Z during the pulling.

Using gmx select, can I manually define a sub-group around the COM of the
large group, and consider it as one of the pulling groups instead of the
large group?

Thank you
Alex

On Mon, Apr 20, 2020 at 8:46 AM Magnus Lundborg <
magnus.lundb...@scilifelab.se> wrote:


Hi Alex,

I don't see why it would need pull-group1-pbcatom = -1. Why not pick a
central atom?

Regards,

Magnus

On 2020-04-20 13:40, Alex wrote:

Hi Magnus,
Thanks.
The problem arises only because of using pull-pbc-ref-prev-step-com,
which needs pull-group1-pbcatom to be -1 to be meaningful.
For an identical system and mdp parameters using the 2018 version of gromacs,
which is independent of pull-pbc-ref-prev-step-com, I see no issue.

Regards,
Alex

On Mon, Apr 20, 2020 at 3:27 AM Magnus Lundborg <
magnus.lundb...@scilifelab.se> wrote:


Sorry, about the statement about pbcatom -1. I was thinking about 0. I
don't know if pbcatom -1 is good or not in this case.

Regards,

Magnus

On 2020-04-20 09:24, Magnus Lundborg wrote:

Hi Alex,

I don't think this is related to using pull-pbc-ref-prev-step-com.
Have you tried without it? However, it is risky using pbcatom -1,
since you don't know what atom you are using as the initial reference.
I would suggest picking an atom you know is located at the centre of
the structure.

I would think that the problem has to do with the comm removal. What
are your parameters for comm-mode, nstcomm and comm-grps? It is
possible that you need to lower your nstcomm. It is also possible, but
not certain, that comm-mode Linear-acceleration-correction might help
you. For some reason, it seems like I have sometimes avoided similar
problems by using the sd integrator instead, but I haven't evaluated
that properly - it might just have been coincidences. If you see a
clear difference using the sd integrator it might be good if you'd
file an issue about it on gitlab so that someone can look into if
there is something wrong.

Regards,
Magnus

On 2020-04-18 20:12, Alex wrote:

Dear all,
To generate the initial configurations for umbrella sampling, I conducted a
simple pulling simulation by which a single small molecule (mol_A) is being
dragged along -Z from water into the body of a thin film.
Since the thin film is large I used *"pull-pbc-ref-prev-step-com = yes" and
"pull-group1-pbcatom  = -1"*, which cause a net shift of the system
along the pulling direction as soon as mol_A reaches the thin film;
please find the pulling flags, movie and plot in the links below.

Centering the thin film and mol_A (echo 1 0 | trjconv -center yes) could
solve the issue to some extent, but the COM still changes in the early
stage, below 2 ns.
COM:
https://drive.google.com/open?id=1-EcnV1uSu0I3eqdvjuUf2OxTZEXFD5m0

Movie in which the water molecules are hidden:
https://drive.google.com/open?id=1gP5GBgfGYMithrA1o1T_RzlSdCS91gkv

-
gmx version 2020.1
-
pull = yes
pull-print-com   = no
pull-print-ref-value = yes
pull-print-components= Yes
pull-nstxout = 1000
pull-nstfout = 1000
pull-pbc-ref-prev-step-com = yes
pull-ngroups = 2
pull-ncoords = 1
pull-group1-name = Thin-film
pull-group1-pbcatom  = -1
pull-group2-name = mol_A
pull-group2-pbcatom  = 0
pull-coord1-type = umbrella
pull-coord1-geometry = direction
pull-coord1-groups   = 1 2
pull-coord1-dim  = N N Y
pull-coord1-origin   = 0.0 0.0 0.0
pull-coord1-vec  = 0.0 0.0 -1.0
pull-coord1-start= yes
pull-coord1-init =