Re: [gmx-users] Gromacs 2019.2 on Power9 + Volta GPUs (building and running)

2019-05-01 Thread Alex
Well, unless something important has changed within a year, I distinctly
remember being advised here not to offload anything to the GPU for EM. Not
that we ever needed to, to be honest...


In any case, we appear to be dealing with build issues here.

Alex

On 5/1/2019 5:09 PM, Kevin Boyd wrote:

> Hi,
>
> > Of course, I am not. This is the EM. ;)
>
> I haven't looked back at the code, but IIRC EM can use GPUs for the
> nonbondeds, just not the PME. I just double-checked on one of my systems
> with 10 cores and a GTX 1080 Ti; offloading to the GPU more than doubled
> the minimization speed.
>
> Kevin

Re: [gmx-users] Gromacs 2019.2 on Power9 + Volta GPUs (building and running)

2019-05-01 Thread Kevin Boyd
Hi,

> Of course, I am not. This is the EM. ;)

I haven't looked back at the code, but IIRC EM can use GPUs for the
nonbondeds, just not the PME. I just double-checked on one of my systems
with 10 cores and a GTX 1080 Ti; offloading to the GPU more than doubled
the minimization speed.
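
For example (a sketch only - the -deffnm name and thread counts here are
placeholders, not taken from the run above), an EM that offloads just the
nonbondeds would look something like:

gmx mdrun -deffnm em -ntmpi 1 -ntomp 10 -nb gpu -pme cpu -pin on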

Kevin

On Wed, May 1, 2019 at 6:33 PM Alex  wrote:

> Of course, I am not. This is the EM. ;)
>
> On Wed, May 1, 2019, 4:30 PM Kevin Boyd  wrote:
>
> > Hi,
> >
> > In addition to what Mark said (and I've also found pinning to be critical
> > for performance), you're also not using the GPUs with "-pme cpu -nb cpu".
> >
> > Kevin
> >
> > On Wed, May 1, 2019 at 5:56 PM Alex  wrote:
> >
> > > Well, my experience so far has been with the EM, because the rest of
> > > the script (with all the dynamic things) needed that to finish. And it
> > > "finished" by hitting the wall. However, your comment does touch upon
> > > what to do with thread pinning and I will try to set '-pin on'
> > > throughout to see if things make a difference for the better. I am
> > > less confident about setting strides because it is unclear what the
> > > job manager provides in terms of the available core numbers. I will
> > > play around some more and report here.
> > >
> > > Thanks!
> > >
> > > Alex
> > >
> > > On Wed, May 1, 2019 at 3:49 PM Mark Abraham 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > As with x86, GROMACS uses SIMD intrinsics on POWER9 and is thus
> > > > fairly insensitive to the compiler's vectorisation abilities. GCC is
> > > > the only compiler we've tested, as xlc can't compile simple C++11.
> > > > As everywhere, you should use the latest version of gcc, as IBM
> > > > spent quite some years landing improvements for POWER9.
> > > >
> > > > EM is useless as a performance indicator of a dynamical simulation,
> > > > avoid that - it runs serial code much much more often.
> > > >
> > > > Your run deliberately didn't fill the available cores, so just like
> > > > on x86, mdrun will leave the thread affinity handling to the
> > > > environment, which is often a path to bad performance. So, if you
> > > > plan on doing that often, you'll want to check out the mdrun
> > > > performance guide docs about the mdrun -pin and related options.
> > > >
> > > > Mark
> > > >
> > > >
> > > > On Wed., 1 May 2019, 23:21 Alex,  wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > Our institution decided to be all fancy, so now we have a bunch of
> > > > > Power9 nodes, each with 80 cores + 4 Volta GPUs. Stuff is managed
> > > > > by slurm. Today I did a simple EM ('gmx mdrun -ntomp 4 -ntmpi 4
> > > > > -pme cpu -nb cpu') and the performance is abysmal, I would guess
> > > > > 100 times slower than on anything I've ever seen before.
> > > > >
> > > > > Our admin person emailed me the following:
> > > > > "-- it would not surprise me if the GCC compilers were relatively
> > > > > bad at taking advantage of POWER9 vectorization, they're likely
> > > > > optimized for x86_64 vector stuff like SSE and AVX operations.
> > > > > This was an issue in the build, I selected "-DGMX_SIMD=IBM_VSX"
> > > > > for the config, but according to my notes, that was part of an
> > > > > attempt to fix the "unimplemented SIMD" error that was dogging me
> > > > > at first, and/but which was eventually cleared by switching to
> > > > > gcc-6."
> > > > >
> > > > > Does anyone have any comments/suggestions on building and running
> > > > > GMX on Power9?
> > > > >
> > > > > Thank you,
> > > > >
> > > > > Alex

Re: [gmx-users] Gromacs 2019.2 on Power9 + Volta GPUs (building and running)

2019-05-01 Thread Alex
Of course, I am not. This is the EM. ;)

On Wed, May 1, 2019, 4:30 PM Kevin Boyd  wrote:

> Hi,
>
> In addition to what Mark said (and I've also found pinning to be critical
> for performance), you're also not using the GPUs with "-pme cpu -nb cpu".
>
> Kevin
>
> On Wed, May 1, 2019 at 5:56 PM Alex  wrote:
>
> > Well, my experience so far has been with the EM, because the rest of the
> > script (with all the dynamic things) needed that to finish. And it
> > "finished" by hitting the wall. However, your comment does touch upon
> > what to do with thread pinning and I will try to set '-pin on'
> > throughout to see if things make a difference for the better. I am less
> > confident about setting strides because it is unclear what the job
> > manager provides in terms of the available core numbers. I will play
> > around some more and report here.
> >
> > Thanks!
> >
> > Alex
> >
> > On Wed, May 1, 2019 at 3:49 PM Mark Abraham 
> > wrote:
> >
> > > Hi,
> > >
> > > As with x86, GROMACS uses SIMD intrinsics on POWER9 and is thus fairly
> > > insensitive to the compiler's vectorisation abilities. GCC is the only
> > > compiler we've tested, as xlc can't compile simple C++11. As
> > > everywhere, you should use the latest version of gcc, as IBM spent
> > > quite some years landing improvements for POWER9.
> > >
> > > EM is useless as a performance indicator of a dynamical simulation,
> > > avoid that - it runs serial code much much more often.
> > >
> > > Your run deliberately didn't fill the available cores, so just like on
> > > x86, mdrun will leave the thread affinity handling to the environment,
> > > which is often a path to bad performance. So, if you plan on doing
> > > that often, you'll want to check out the mdrun performance guide docs
> > > about the mdrun -pin and related options.
> > >
> > > Mark
> > >
> > >
> > > On Wed., 1 May 2019, 23:21 Alex,  wrote:
> > >
> > > > Hi all,
> > > >
> > > > Our institution decided to be all fancy, so now we have a bunch of
> > > > Power9 nodes, each with 80 cores + 4 Volta GPUs. Stuff is managed by
> > > > slurm. Today I did a simple EM ('gmx mdrun -ntomp 4 -ntmpi 4 -pme
> > > > cpu -nb cpu') and the performance is abysmal, I would guess 100
> > > > times slower than on anything I've ever seen before.
> > > >
> > > > Our admin person emailed me the following:
> > > > "-- it would not surprise me if the GCC compilers were relatively
> > > > bad at taking advantage of POWER9 vectorization, they're likely
> > > > optimized for x86_64 vector stuff like SSE and AVX operations. This
> > > > was an issue in the build, I selected "-DGMX_SIMD=IBM_VSX" for the
> > > > config, but according to my notes, that was part of an attempt to
> > > > fix the "unimplemented SIMD" error that was dogging me at first,
> > > > and/but which was eventually cleared by switching to gcc-6."
> > > >
> > > > Does anyone have any comments/suggestions on building and running
> > > > GMX on Power9?
> > > >
> > > > Thank you,
> > > >
> > > > Alex

Re: [gmx-users] Gromacs 2019.2 on Power9 + Volta GPUs (building and running)

2019-05-01 Thread Kevin Boyd
Hi,

In addition to what Mark said (and I've also found pinning to be critical
for performance), you're also not using the GPUs with "-pme cpu -nb cpu".
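
For a production run, something along these lines (a sketch - the -deffnm
name and the rank/thread counts are illustrative, sized for an 80-core,
4-GPU node like yours, not tested there) would actually engage the GPUs:

gmx mdrun -deffnm md -ntmpi 4 -ntomp 20 -nb gpu -pme gpu -npme 1 -pin on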

Kevin

On Wed, May 1, 2019 at 5:56 PM Alex  wrote:

> Well, my experience so far has been with the EM, because the rest of the
> script (with all the dynamic things) needed that to finish. And it
> "finished" by hitting the wall. However, your comment does touch upon what
> to do with thread pinning and I will try to set '-pin on' throughout to see
> if things make a difference for the better. I am less confident about
> setting strides because it is unclear what the job manager provides in
> terms of the available core numbers. I will play around some more and
> report here.
>
> Thanks!
>
> Alex
>
> On Wed, May 1, 2019 at 3:49 PM Mark Abraham 
> wrote:
>
> > Hi,
> >
> > As with x86, GROMACS uses SIMD intrinsics on POWER9 and is thus fairly
> > insensitive to the compiler's vectorisation abilities. GCC is the only
> > compiler we've tested, as xlc can't compile simple C++11. As everywhere,
> > you should use the latest version of gcc, as IBM spent quite some years
> > landing improvements for POWER9.
> >
> > EM is useless as a performance indicator of a dynamical simulation, avoid
> > that - it runs serial code much much more often.
> >
> > Your run deliberately didn't fill the available cores, so just like on
> > x86, mdrun will leave the thread affinity handling to the environment,
> > which is often a path to bad performance. So, if you plan on doing that
> > often, you'll want to check out the mdrun performance guide docs about
> > the mdrun -pin and related options.
> >
> > Mark
> >
> >
> > On Wed., 1 May 2019, 23:21 Alex,  wrote:
> >
> > > Hi all,
> > >
> > > Our institution decided to be all fancy, so now we have a bunch of
> > > Power9 nodes, each with 80 cores + 4 Volta GPUs. Stuff is managed by
> > > slurm. Today I did a simple EM ('gmx mdrun -ntomp 4 -ntmpi 4 -pme cpu
> > > -nb cpu') and the performance is abysmal, I would guess 100 times
> > > slower than on anything I've ever seen before.
> > >
> > > Our admin person emailed me the following:
> > > "-- it would not surprise me if the GCC compilers were relatively bad
> > > at taking advantage of POWER9 vectorization, they're likely optimized
> > > for x86_64 vector stuff like SSE and AVX operations. This was an issue
> > > in the build, I selected "-DGMX_SIMD=IBM_VSX" for the config, but
> > > according to my notes, that was part of an attempt to fix the
> > > "unimplemented SIMD" error that was dogging me at first, and/but which
> > > was eventually cleared by switching to gcc-6."
> > >
> > > Does anyone have any comments/suggestions on building and running GMX
> > > on Power9?
> > >
> > > Thank you,
> > >
> > > Alex

[gmx-users] Flory-Huggins parameter

2019-05-01 Thread Alex
Dear all,
Does anybody know how to calculate the Flory-Huggins parameter? The engine
for the all-atom MD simulations is GROMACS, and the system of interest is
an emulsion of epoxy resin and surfactant in water.

Thank you.
Best regards,
Alexander


Re: [gmx-users] Gromacs 2019.2 on Power9 + Volta GPUs (building and running)

2019-05-01 Thread Alex
Well, my experience so far has been with the EM, because the rest of the
script (with all the dynamic things) needed that to finish. And it
"finished" by hitting the walltime limit. However, your comment does touch
upon what to do with thread pinning, and I will try setting '-pin on'
throughout to see if things change for the better. I am less confident
about setting strides, because it is unclear what the job manager provides
in terms of the available core numbers. I will play around some more and
report here.
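
(For reference, these generic commands - nothing cluster-specific assumed -
show which cores a Slurm job actually received, which should help with any
-pinoffset/-pinstride choices:

nproc
taskset -cp $$
)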

Thanks!

Alex

On Wed, May 1, 2019 at 3:49 PM Mark Abraham 
wrote:

> Hi,
>
> As with x86, GROMACS uses SIMD intrinsics on POWER9 and is thus fairly
> insensitive to the compiler's vectorisation abilities. GCC is the only
> compiler we've tested, as xlc can't compile simple C++11. As everywhere,
> you should use the latest version of gcc, as IBM spent quite some years
> landing improvements for POWER9.
>
> EM is useless as a performance indicator of a dynamical simulation, avoid
> that - it runs serial code much much more often.
>
> Your run deliberately didn't fill the available cores, so just like on x86,
> mdrun will leave the thread affinity handling to the environment, which is
> often a path to bad performance. So, if you plan on doing that often,
> you'll want to check out the mdrun performance guide docs about the mdrun
> -pin and related options.
>
> Mark
>
>
> On Wed., 1 May 2019, 23:21 Alex,  wrote:
>
> > Hi all,
> >
> > Our institution decided to be all fancy, so now we have a bunch of Power9
> > nodes, each with 80 cores + 4 Volta GPUs. Stuff is managed by slurm.
> > Today I did a simple EM ('gmx mdrun -ntomp 4 -ntmpi 4 -pme cpu -nb cpu')
> > and the performance is abysmal, I would guess 100 times slower than on
> > anything I've ever seen before.
> >
> > Our admin person emailed me the following:
> > "-- it would not surprise me if the GCC compilers were relatively bad at
> > taking advantage of POWER9 vectorization, they're likely optimized for
> > x86_64 vector stuff like SSE and AVX operations. This was an issue in
> > the build, I selected "-DGMX_SIMD=IBM_VSX" for the config, but according
> > to my notes, that was part of an attempt to fix the "unimplemented SIMD"
> > error that was dogging me at first, and/but which was eventually cleared
> > by switching to gcc-6."
> >
> > Does anyone have any comments/suggestions on building and running GMX on
> > Power9?
> >
> > Thank you,
> >
> > Alex


Re: [gmx-users] Gromacs 2019.2 on Power9 + Volta GPUs (building and running)

2019-05-01 Thread Mark Abraham
Hi,

As with x86, GROMACS uses SIMD intrinsics on POWER9 and is thus fairly
insensitive to the compiler's vectorisation abilities. GCC is the only
compiler we've tested, as xlc can't compile simple C++11. As everywhere,
you should use the latest version of gcc, as IBM spent quite some years
landing improvements for POWER9.
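
(For illustration - the compiler names are placeholders for whatever recent
gcc is available, and the GPU/FFTW flags are just the common ones from this
thread - a POWER9 configuration could look like:

cmake .. -DCMAKE_C_COMPILER=gcc-8 -DCMAKE_CXX_COMPILER=g++-8 \
  -DGMX_SIMD=IBM_VSX -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=on

where GMX_SIMD=IBM_VSX should also be what auto-detection picks on POWER9.)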

EM is useless as a performance indicator for a dynamical simulation - avoid
it; EM runs serial code much, much more often.

Your run deliberately didn't fill the available cores, so just like on x86,
mdrun will leave the thread affinity handling to the environment, which is
often a path to bad performance. So, if you plan on doing that often,
you'll want to check out the mdrun performance guide docs about the mdrun
-pin and related options.

Mark


On Wed., 1 May 2019, 23:21 Alex,  wrote:

> Hi all,
>
> Our institution decided to be all fancy, so now we have a bunch of Power9
> nodes, each with 80 cores + 4 Volta GPUs. Stuff is managed by slurm. Today
> I did a simple EM ('gmx mdrun -ntomp 4 -ntmpi 4 -pme cpu -nb cpu') and the
> performance is abysmal, I would guess 100 times slower than on anything
> I've ever seen before.
>
> Our admin person emailed me the following:
> "-- it would not surprise me if the GCC compilers were relatively bad at
> taking advantage of POWER9 vectorization, they're likely optimized for
> x86_64 vector stuff like SSE and AVX operations.  This was an issue in the
> build, I selected "-DGMX_SIMD=IBM_VSX" for the config, but according to my
> notes, that was part of an attempt to fix the "unimplemented SIMD" error
> that was dogging me at first, and/but which was eventually cleared by
> switching to gcc-6."
>
> Does anyone have any comments/suggestions on building and running GMX on
> Power9?
>
> Thank you,
>
> Alex


[gmx-users] Gromacs 2019.2 on Power9 + Volta GPUs (building and running)

2019-05-01 Thread Alex
Hi all,

Our institution decided to be all fancy, so now we have a bunch of Power9
nodes, each with 80 cores + 4 Volta GPUs. Everything is managed by Slurm.
Today I did a simple EM ('gmx mdrun -ntomp 4 -ntmpi 4 -pme cpu -nb cpu'),
and the performance is abysmal - I would guess 100 times slower than on
anything I've ever seen before.

Our admin person emailed me the following:
"-- it would not surprise me if the GCC compilers were relatively bad at
taking advantage of POWER9 vectorization, they're likely optimized for
x86_64 vector stuff like SSE and AVX operations.  This was an issue in the
build, I selected "-DGMX_SIMD=IBM_VSX" for the config, but according to my
notes, that was part of an attempt to fix the "unimplemented SIMD" error
that was dogging me at first, and/but which was eventually cleared by
switching to gcc-6."

Does anyone have any comments/suggestions on building and running GMX on
Power9?

Thank you,

Alex


Re: [gmx-users] 2019.2 build warnings

2019-05-01 Thread Alex

Oh, cool -- thanks! I guess we will be replacing the older builds, then.

Funnily enough, you may not recall it, but the hardware for this particular
box was purchased on your own advice, and it served us very well up until
now. :)


Again, thank you.

Alex

On 5/1/2019 5:07 AM, Szilárd Páll wrote:

> Hi,
>
> You can safely ignore the errors, as they are caused by properties of your
> hardware that the test scripts do not deal with well enough -- though
> admittedly, two of the three errors should have been avoided, with a
> message similar to this:
> "Mdrun cannot use the requested (or automatic) number of OpenMP threads,
> retrying with 8."
> (which is what I get when I run the tests on a similar machine).
>
> If you need the tests to pass on the node in question, let me know; I can
> suggest workarounds.
>
> Cheers,
> --
> Szilárd



Re: [gmx-users] Average rmsd

2019-05-01 Thread neelam wafa
OK, thanks, I'll try this.

Regards

On Mon, Apr 29, 2019 at 5:23 AM Justin Lemkul  wrote:

>
>
> On 4/25/19 8:49 AM, neelam wafa wrote:
> > Hi!
> > I have run 5ns simulation of protein ligand complex and got its rmsd plot
> > using gmx_rms. How can i get average rmsd value for this simulation. Is
> it
> > to be taken from xmgrace graph or there is any way to calculate it? U
> cant
> > find it in log file.
>
> You can get the average of any data series (as well as other statistical
> analysis) with gmx analyze.
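>
> For example, assuming the RMSD output file is named rmsd.xvg, running
>
> gmx analyze -f rmsd.xvg
>
> prints the average, standard deviation, and an error estimate for each
> data set in the file.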
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


Re: [gmx-users] Failed tests, need help in troubleshooting

2019-05-01 Thread Szilárd Páll
Hi Cameron,

My strong suspicion is that the NVIDIA OpenCL driver/compiler simply does
not support Turing, or is buggy on it. I have just checked an OpenCL build
with the latest 418 drivers, and it also fails tests on Volta (which is
similar to the Turing architecture), but passes on Pascal.

You could verify this by running make check such that only the second
Quadro GPU is utilized, e.g.
$ CUDA_VISIBLE_DEVICES=1 make check

Additionally, note that performance with OpenCL on NVIDIA is in general
significantly lower than with CUDA, both because NVIDIA's OpenCL support is
rather poor and because some features that work on AMD are not yet fully
functional on NVIDIA OpenCL. Hence, if performance matters, e.g. for
GPU-accelerated production runs, CUDA is admittedly the better option.
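
(A CUDA build simply drops the OpenCL flag from the configuration used
below - assuming the CUDA toolkit is installed:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=on -DGMX_GPU=on
)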

--
Szilárd


On Fri, Apr 26, 2019 at 9:47 AM Cameron Fletcher (CF) <
circumf...@disroot.org> wrote:

> Hi Szilárd,
>
> I am using an Intel Xeon W-2145,
> GPU: Nvidia RTX 2080TI and Nvidia P400
>
>
> cmake log:
>
> https://raw.githubusercontent.com/circumflex-cf/logs/master/cmake_2019-04-23.log
>
> make log:
>
> https://raw.githubusercontent.com/circumflex-cf/logs/master/make_2019-04-23.log
>
> make check logs:
>
> https://raw.githubusercontent.com/circumflex-cf/logs/master/makecheck_2019-04-23.log
>
> Also here is one of the regression tests that failed.
> https://github.com/circumflex-cf/logs/tree/master/orientation-restraints
>
>
> --
> CF
>
> On 23/04/19 5:39 PM, Szilárd Páll wrote:
> > Hi Cameron,
> >
> > I meant any log file from a run with the hardware + software combination.
> > The log file contains hardware and software detection output that is
> > useful in identifying issues.
> >
> > Do the unit tests pass?
> >
> > --
> > Szilárd
> >
> >
> > On Tue, Apr 23, 2019 at 12:38 PM Cameron Fletcher (CF) <
> > circumf...@disroot.org> wrote:
> >
> >> Hello Szilárd,
> >>
> >> Do you mean log files created in each regression test?
> >>
> >> On 23/04/19 3:43 PM, Szilárd Páll wrote:
> >>> What is the hardware you are running this on? Can you share a log file,
> >>> please?
> >>> --
> >>> Szilárd
> >>>
> >>>
> >>> On Mon, Apr 22, 2019 at 9:24 AM Cameron Fletcher (CF) <
> >>> circumf...@disroot.org> wrote:
> >>>
>  Hello,
> 
>  I have installed GROMACS 2019.1 on CentOS 7.6.
>  While running the 2019.1 regression tests, certain tests are failing
>  with errors.
> 
>  I have attached a list of some failed tests.
> 
> 
>  Since I am using the gcc compilers and openmpi from the OpenHPC
>  repositories, the command below was used for cmake.
> 
>  cmake .. \
>  -DCMAKE_C_COMPILER=/opt/ohpc/pub/compiler/gcc/7.3.0/bin/gcc \
>  -DCMAKE_CXX_COMPILER=/opt/ohpc/pub/compiler/gcc/7.3.0/bin/c++ \
>  -DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=on -DGMX_GPU=on -DGMX_USE_OPENCL=on
> 
>  cmake version: 3.13.4
>  gcc version: 7.3.0
>  openmpi3 version: 3.1.0
> 
> 
>  What should I be doing further for more troubleshooting?
> 
> 
>  --
>  CF
> 
> >>
> >>
> >> --
> >> CF

Re: [gmx-users] 2019.2 build warnings

2019-05-01 Thread Szilárd Páll
Hi,

You can safely ignore the errors, as they are caused by properties of your
hardware that the test scripts do not deal with well enough -- though
admittedly, two of the three errors should have been avoided, with a
message similar to this:
"Mdrun cannot use the requested (or automatic) number of OpenMP threads,
retrying with 8."
(which is what I get when I run the tests on a similar machine).

If you need the tests to pass on the node in question, let me know; I can
suggest workarounds.
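
(For instance, capping the thread count with something like

OMP_NUM_THREADS=8 make check

may sidestep the thread-count errors, though the right workaround depends
on the machine.)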

Cheers,
--
Szilárd


On Mon, Apr 29, 2019 at 6:47 PM Alex  wrote:

> Hi Szilárd,
>
> Since I don't know which directory inside /complex corresponds to which
> tests (at least one of the tests that failed was #42), here's a tarball
> of the entire /complex directory per location you specified below:
>
>
> https://www.dropbox.com/s/44uluopkdan2417/regression_complex_2019.2.tar.gz?dl=0
>
> If you can help us figure this out, it will be great!
>
> Thanks,
>
> Alex
>
> On 4/29/2019 4:25 AM, Szilárd Páll wrote:
> > Hi,
> >
> > I assume you used -DREGRESSIONTEST_DOWNLOAD=ON, in which case the tests
> > are downloaded and unpacked under
> > BUILD_TREE/tests/regressiontests-release-2019-[SUFFIX]/
> >
> > In that directory you will find the usual regressiontests tree; from
> > there, under complex/, you'll find the tests in question.
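> >
> > (For example - the [SUFFIX] part is machine-specific - the failing tests
> > can be packaged from the build tree with:
> >
> > cd tests/regressiontests-release-2019-[SUFFIX]
> > tar czf complex.tar.gz complex/
> > )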
> >
> > Cheers,
> > --
> > Szilárd
> >
> >
> > On Fri, Apr 26, 2019 at 7:00 PM Alex  wrote:
> >
> >> Hi Szilárd,
> >>
> >> I am at a conference right now, but will do my best to upload the
> >> requested data first thing on Monday. In the meantime, could you please
> >> tell me where the stuff of interest would be located within the local
> >> gromacs build directory? I mean, I could make the entire directory a
> >> tarball, but not sure it's all that necessary. I don't remember which
> >> tests failed, unfortunately...
> >>
> >> Thank you!
> >>
> >> Alex
> >>
> >> On 4/25/2019 2:54 AM, Szilárd Páll wrote:
> >>> Hi Alex,
> >>>
> >>> On Wed, Apr 24, 2019 at 9:59 PM Alex  wrote:
> >>>
>  Hi Szilárd,
> 
>  We are using neither Ubuntu 18.04, nor glibc 2.27, but the problem is
>  most certainly there.
> >>> OK.
> >>>
> >>> Can you please post the content of the directories of the tests that
> >>> failed? It would be useful to know the exact software configuration
> >>> (reported in the log) and the details of the errors (reported in the
> >>> mdrun.out).
> >>>
> >>> Thanks,
> >>> --
> >>> Szilárd
> >>>
> >>>
> >>>
>  Until the issue is solved one way or another, we
>  will be staying with 2018.1, I guess.
> 
>  $ lsb_release -a
> 
>  No LSB modules are available.
> 
>  Distributor ID: Ubuntu
> 
>  Description:Ubuntu 16.04.6 LTS
> 
>  Release:16.04
> 
>  Codename:   xenial
> 
>  Ubuntu GLIBC 2.23-0ubuntu11
> 
> 
>  On 4/24/2019 4:57 AM, Szilárd Páll wrote:
> > What OS are you using? There are some known issues with the Ubuntu
> > 18.04 + glibc 2.27 which could explain the errors.
> > --
> > Szilárd
> >
> >
> >
> >
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.