Re: [gmx-users] Using GROMACS with (AMD) OpenCL

2016-12-03 Thread Gregory Man Kai Poon
Hi Szilárd,


Thanks for your tips.  It turned out that between installing AMDGPU-PRO and then
the AMD APP SDK, the link to libOpenCL.so became broken.  Following your
suggestions (if I interpreted them correctly), I linked against the copy
installed by AMDGPU-PRO.  That resolved the error and I was able to complete
the installation.  I also took the opportunity (as per your suggestion) to
migrate to the 2016 release.  I had wanted to stay with 5.1.x because we have
many simulations generated with 5.1.2 and 5.1.4 that we want to extend, and I
was wary of potential forward-compatibility issues.  I am testing one of these
simulations now.  So far the program has not complained (as expected, I think)
and is using the GPU.  If you know of any potential issues with extending
simulations created by previous versions of GROMACS, I would be most
interested in hearing about them.
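For anyone who hits the same broken link, here is a minimal sketch of diagnosing a dangling libOpenCL.so symlink and repointing it at an existing copy.  All paths and file names below are purely illustrative stand-ins for the real SDK/driver layout:

```shell
# Work in a scratch dir; the file names only mimic the real layout.
demo=$(mktemp -d)
cd "$demo"
ln -sf /nonexistent/libOpenCL.so.1 libOpenCL.so     # a dangling link, as after the SDK install
readlink -e libOpenCL.so || echo "libOpenCL.so is dangling"
touch driver-libOpenCL.so.1                         # stand-in for the AMDGPU-PRO copy
ln -sf "$demo/driver-libOpenCL.so.1" libOpenCL.so   # repoint the link at the driver's copy
readlink -e libOpenCL.so                            # now resolves
```

On a real system, `readlink -e` against the library path CMake reported is a quick way to confirm whether the link is the culprit.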


Thanks again,

Gregory


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
<gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Szilárd Páll 
<pall.szil...@gmail.com>
Sent: Saturday, December 3, 2016 10:34:46 AM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] Using GROMACS with (AMD) OpenCL

Hi Gregory,

First of all, I'd strongly recommend starting with the 2016
release; 2016.1 has been out for some time. It is more robust
and brings many performance improvements, especially on AMD (see
http://manual.gromacs.org/documentation/2016/ReleaseNotes/).

Regarding the link error, to be honest I can't tell what's wrong.
However, note that you do not necessarily need to link against the
SDK; I have clarified this in the latest version of the docs:
http://manual.gromacs.org/documentation/2016/install-guide/index.html#opencl-gpu-acceleration

While linking against the APP SDK libOpenCL _should_ work, I'd
recommend trying to link against the stock libOpenCL.so. If that
does not work, feel free to share the CMake invocation and cache file
and we may be able to figure something out.
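If it helps, a sketch of such an invocation; the `OpenCL_LIBRARY` cache-variable name and the library path are assumptions here, so check the output of CMake's OpenCL detection on your machine and adjust:

```shell
# Point CMake explicitly at the system's OpenCL library instead of the SDK copy.
# Path and cache-variable name are illustrative, not verified against this GROMACS version.
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON \
         -DGMX_GPU=ON -DGMX_USE_OPENCL=ON \
         -DOpenCL_LIBRARY=/usr/lib/x86_64-linux-gnu/libOpenCL.so
```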

Cheers,
--
Szilárd


On Sat, Dec 3, 2016 at 4:02 PM, Gregory Man Kai Poon <gp...@gsu.edu> wrote:
> Hi Milan and Szilárd,
>
>
> Many thanks for your comments.  So I installed Ubuntu 16.04, installed the 
> AMDGPU-PRO driver, installed the AMD APP SDK.  Then I attempted to install 
> GROMACS 5.1.4 as follows:
>
>
> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON 
> -DGMX_USE_OPENCL=ON
>
>
> Then when I invoked make, things went fine for a bit, then halted with:
>
>
> [  1%] Built target fftwBuild
> make[2]: *** No rule to make target 
> '/home/popeye/AMDAPPSDK-3.0/lib/x86_64/libOpenCL.so', needed by 
> 'lib/libgromacs.so.1.4.0'.  Stop.
> CMakeFiles/Makefile2:2026: recipe for target 
> 'src/gromacs/CMakeFiles/libgromacs.dir/all' failed
> make[1]: *** [src/gromacs/CMakeFiles/libgromacs.dir/all] Error 2
> Makefile:160: recipe for target 'all' failed
>
> I am not nearly good enough to make out what is going on, except to guess 
> that something may not be correct with the AMDAPPSDK-3.0 installation.  The 
> folder is there, and I checked the install log -- it didn't report any error 
> there.
>
>
> Thanks again for your continued help.
>
>
> Gregory
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
> <gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Szilárd Páll 
> <pall.szil...@gmail.com>
> Sent: Friday, December 2, 2016 11:22:29 AM
> To: Discussion list for GROMACS users
> Subject: Re: [gmx-users] Using GROMACS with (AMD) OpenCL
>
> Hi,
>
> While Milan is right for pre-Polaris hardware, for Polaris you
> *need* AMDGPU-PRO (which only runs on newer distros/kernels).
>
> From the OpenCL point of view, the current AMDGPU-PRO is, AFAIK, more
> or less the same compiler/runtime as the last Catalyst release (15.12).
>
> One notable caveat is that with AMDGPU-PRO, power management seems to
> be overly aggressive (messed up?) -- at least for the load GROMACS
> generates. You can lose quite some performance unless you force the
> performance level to "high" (hint: use radcard to set
> power_dpm_force_performance_level=high); this has been observed on both
> Polaris and Fiji.
>
> See here:
> https://wiki.archlinux.org/index.php/ATI#Dynamic_power_management

[gmx-users] Using GROMACS with (AMD) OpenCL

2016-12-02 Thread Gregory Man Kai Poon
Hello all,


I would like to get some practical advice on setting up GROMACS with OpenCL
using an AMD GPU (an RX 480 is what I happen to have).  The AMD APP SDK
indicates that it supports the Catalyst Omega 15.7 driver -- is that compatible
with the proprietary Catalyst driver for Linux?  Since this proprietary driver
is only compatible with Ubuntu up to version 15.10, does that mean that the
best software configuration (say, for GROMACS 5.1.4) is:


Ubuntu 15.10 + proprietary Catalyst driver + AMD APP SDK


There is also a newer AMDGPU driver that works with Ubuntu 16.04 -- is that
compatible with the AMD APP SDK, and would it allow GROMACS to be set up on a
newer OS?  If I really should go about this some other way (given the GPU),
please suggest that too.


I have googled the subject a bit but was unable to find enough info to proceed.
Thanks in advance for your insight.


Best regards,

Gregory
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Using GROMACS with (AMD) OpenCL

2016-12-03 Thread Gregory Man Kai Poon
Hi Milan and Szilárd,


Many thanks for your comments.  So I installed Ubuntu 16.04, installed the 
AMDGPU-PRO driver, installed the AMD APP SDK.  Then I attempted to install 
GROMACS 5.1.4 as follows:


cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON 
-DGMX_USE_OPENCL=ON


Then when I invoked make, things went fine for a bit, then halted with:


[  1%] Built target fftwBuild
make[2]: *** No rule to make target 
'/home/popeye/AMDAPPSDK-3.0/lib/x86_64/libOpenCL.so', needed by 
'lib/libgromacs.so.1.4.0'.  Stop.
CMakeFiles/Makefile2:2026: recipe for target 
'src/gromacs/CMakeFiles/libgromacs.dir/all' failed
make[1]: *** [src/gromacs/CMakeFiles/libgromacs.dir/all] Error 2
Makefile:160: recipe for target 'all' failed

I am not nearly good enough to make out what is going on, except to guess that 
something may not be correct with the AMDAPPSDK-3.0 installation.  The folder 
is there, and I checked the install log -- it didn't report any error there.


Thanks again for your continued help.


Gregory


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
<gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Szilárd Páll 
<pall.szil...@gmail.com>
Sent: Friday, December 2, 2016 11:22:29 AM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] Using GROMACS with (AMD) OpenCL

Hi,

While Milan is right for pre-Polaris hardware, for Polaris you
*need* AMDGPU-PRO (which only runs on newer distros/kernels).

From the OpenCL point of view, the current AMDGPU-PRO is, AFAIK, more
or less the same compiler/runtime as the last Catalyst release (15.12).

One notable caveat is that with AMDGPU-PRO, power management seems to
be overly aggressive (messed up?) -- at least for the load GROMACS
generates. You can lose quite some performance unless you force the
performance level to "high" (hint: use radcard to set
power_dpm_force_performance_level=high); this has been observed on both
Polaris and Fiji.

See here:
https://wiki.archlinux.org/index.php/ATI#Dynamic_power_management
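The sysfs knob referred to above can be set directly; the card index (`card0`) is an assumption and root privileges are required:

```shell
# Force the GPU to its high performance level; card0 is illustrative --
# check which /sys/class/drm/card* entry is your AMD GPU.
echo high | sudo tee /sys/class/drm/card0/device/power_dpm_force_performance_level
# Verify the setting took effect:
cat /sys/class/drm/card0/device/power_dpm_force_performance_level
```

Note the setting does not persist across reboots unless reapplied (e.g., via a udev rule or boot script).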

Cheers,
--
Szilárd


On Fri, Dec 2, 2016 at 4:13 PM,  <melicher...@leaf.nh.cas.cz> wrote:
> Hi Gregory,
> it is possible to use both -- Catalyst or AMDGPU -- so you can choose. As
> there won't be (I think) any more updates for Catalyst, I'd prefer the
> newer AMDGPU(-PRO) option. I don't have a Polaris-based card, only some
> 79xx/R9 280(X) (using Catalyst) and one Fury Nano (previously Catalyst,
> now AMDGPU-PRO). I have only done the GROMACS compilation with the
> Catalyst drivers, but with AMDGPU it shouldn't be any more complicated.
> Just install the system with AMDGPU (I don't know if the "PRO part" is
> needed, probably yes) and try to compile (don't forget the OpenCL
> switch). If some libraries are missing, you will have to install the -dev
> versions of the packages. Ubuntu probably has its own way to find the
> right package, but you can also use the site at packages.debian.org (as
> Ubuntu is based on Debian).
> Anyway, I try to take most things directly from the distribution, since
> that automatically brings upgrades and bug fixes. Outside the distro I
> have only GROMACS (performance issues -- compilation for the exact
> machine -- and several versions installed) and AMDGPU-PRO (on the node
> with the Fury Nano, because Debian doesn't have it in its repositories).
> If there is any problem, please ask again with some details.
>
> Best,
>
> Milan
>
> On Fri, Dec 02, 2016 at 01:56:03PM +, Gregory Man Kai Poon wrote:
>> Hello all,
>>
>>
>> I would like to get some practical advice on setting up GROMACS with OpenCL
>> using an AMD GPU (an RX 480 is what I happen to have).  The AMD APP SDK
>> indicates that it supports the Catalyst Omega 15.7 driver -- is that
>> compatible with the proprietary Catalyst driver for Linux?  Since this
>> proprietary driver is only compatible with Ubuntu up to version 15.10, does
>> that mean that the best software configuration (say, for GROMACS 5.1.4) is:
>>
>>
>> Ubuntu 15.10 + proprietary Catalyst driver + AMD APP SDK
>>
>>
>> There is also a newer AMDGPU driver that works with Ubuntu 16.04 -- is that
>> compatible with the AMD APP SDK, and would it allow GROMACS to be set up on
>> a newer OS?  If I really should go about this some other way (given the
>> GPU), please suggest that too.
>>
>>
>> I have googled the subject a bit but was unable to find enough info to
>> proceed.  Thanks in advance for your insight.
>>
>>
>> Best regards,
>>
>&

[gmx-users] (no subject)

2016-12-01 Thread Gregory Man Kai Poon
Hi all:


I am trying to use hbond in GROMACS 5.1.4 to analyze water-mediated hydrogen
bonding between two objects simulated in water.  The GROMACS manual discusses
this in a figure (9.8), "water insertion".  There is nothing in the online
documentation on how this should be done except a single mention of the -hbm
option, which I tried.  It generated .xpm files such as the one attached.
They open, as far as I can tell, as a very vertically compressed plot that I
can make nothing out of.  Attempts to convert them to EPS using xpm2ps produce
similar results.


So my questions are two-fold: 1) What is happening with the .xpm files?  2) Am 
I using the correct hbond option to enumerate water-mediated hydrogen bonds?


Many thanks in advance,

Gregory


https://www.dropbox.com/s/2vj2mxmb0f0jnyq/hbmap.xpm?dl=0


https://www.dropbox.com/s/w0gb4x0frwwm668/plot.eps?dl=0




Re: [gmx-users] gmx convert-tpr problem

2017-04-05 Thread Gregory Man Kai Poon
Hi Mark,

To clarify, I should be calling mdrun like:

gmx mdrun -s new.tpr -cpi old.cpt -deffnm old

?

I apologize ahead if this sounds obvious.

Kind regards,
Gregory

Get Outlook for Android<https://aka.ms/ghei36>


From: Mark Abraham
Sent: Wednesday, April 5, 19:25
Subject: Re: [gmx-users] gmx convert-tpr problem
To: Discussion list for GROMACS users

Hi,

As the messages suggest, if you would like to change the output names, you
can't append. If you would like to append, don't change the output names. The
old output files are there, but you've told mdrun to expect them to have
different names and to append to them, but used a checkpoint file that has
different names. It doesn't know whether the error was in using the wrong
checkpoint file, or a mismatching output name, or that there are missing
files, or that you don't want appending.

Mark
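For future readers, a sketch of the two ways through mdrun's appending check, using the file names from this thread (flags as described in the error text elsewhere in this thread):

```shell
# Extend the run to 200 ns; a new tpr name is fine.
gmx convert-tpr -s md_0_100.tpr -until 200 -o md_100_200.tpr

# Option 1: append. Pass the NEW tpr, but keep -deffnm pointing at the
# ORIGINAL output names so they match the checkpoint md_0_100.cpt:
gmx mdrun -s md_100_200.tpr -cpi md_0_100.cpt -v -deffnm md_0_100

# Option 2: don't append; write separate new parts instead, then
# concatenate later with gmx trjcat:
gmx mdrun -s md_100_200.tpr -cpi md_0_100.cpt -v -noappend
```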

[gmx-users] gmx convert-tpr problem

2017-04-05 Thread Gregory Man Kai Poon
Hello all:


I am having problems extending my runs using convert-tpr in GROMACS 2016.1.  In 
the latest instance, for example, I have a 100-ns run that I want to extend to 
200 ns.  I used the convert-tpr command to get my new .tpr file:


gmx convert-tpr -s md_0_100.tpr -until 200 -o md_100_200.tpr


With the new .tpr file in hand, I start mdrun:


gmx mdrun -s md_100_200.tpr -cpi md_0_100.cpt -v -deffnm md_0_200


At this point I get this error:


GROMACS:  gmx mdrun, version 2016.1
Executable:   /usr/local/gromacs/bin/gmx
Data prefix:  /usr/local/gromacs
Working dir:  /media/i5/EXTREME 64/AGC_GTG2
Command line:
  gmx mdrun -s md_100_200.tpr -cpi md_0_100.cpt -v -deffnm md_100_200

Output file appending has been requested,
but some output files listed in the checkpoint file md_0_100.cpt
are not present or not named as the output files by the current program:
Expect output files present:

Expected output files not present or named differently:
  md_0_100.log
  md_0_100.xtc
  md_0_100.trr
  md_0_100.edr

---
Program: gmx mdrun, version 2016.1
Source file: src/gromacs/mdrunutility/handlerestart.cpp (line 177)

Fatal error:
File appending requested, but 4 of the 4 output files are not present or are
named differently. For safety reasons, GROMACS-2016 and later only allows file
appending to be used when all files have the same names as they had in the
original run. Checkpointing is merely intended for plain continuation of runs.
For safety reasons you must specify all file names (e.g. with -deffnm), and
all these files must match the names used in the run prior to checkpointing
since we will append to them by default. If the files are not available, you
can add the -noappend flag to mdrun and write separate new parts. For mere
concatenation of files, you should use the gmx trjcat tool instead.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---


I checked, double-checked, triple-checked that all the files the error refers
to are in the working directory.  In fact, all the files (the md_0_100.*
files from the previous run) as well as md_100_200.tpr are in the same folder
(the working directory) in which the mdrun command is invoked.  Any
suggestions would be appreciated.


Kind regards,

Gregory



Re: [gmx-users] Possible to extract gro file from a tpr?

2018-03-30 Thread Gregory Man Kai Poon
Hi Viveca,

You could tell trjconv to output a .gro file:

gmx trjconv -s md.tpr -f md.xtc -o md.gro ...

Would that work?
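A couple of other options that may help; the editconf route assumes that tool accepts a .tpr as its structure input, so treat it as a suggestion to try rather than verified behavior:

```shell
# Write the frame nearest t = 0 ps as a .gro (group chosen at the prompt):
gmx trjconv -s md.tpr -f md.xtc -dump 0 -o frame0.gro

# Or, possibly, convert the tpr directly without any trajectory:
gmx editconf -f md.tpr -o md.gro
```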

Gregory

Get Outlook for Android



From: Viveca Lindahl
Sent: Friday, March 30, 5:38 AM
Subject: [gmx-users] Possible to extract gro file from a tpr?
To: gmx-us...@gromacs.org


Hi users,

Is it possible to extract a gro file from a tpr? Unfortunately 'gmx mdrun
-nsteps 0' does not give a confout.gro :)

--
Viveca



[gmx-users] Simulated tempering

2018-10-24 Thread Gregory Man Kai Poon
Hi all,

I am attempting to do simulated tempering in GROMACS 2016.3.  In my
preliminary googling, I have found several questions about the procedure
from different people over the past few years, but unfortunately without
responses.  I have read the literature on the few papers that describe
ST with GROMACS, but have not had much success reaching the authors.
From studying the mdp options, it appears that ST is handled under the
general umbrella of expanded ensemble simulations.  I am therefore
hopeful that I could learn from those of you who may not be doing ST,
but are using related mdp options and are willing to help me out.

Specifically, I am looking for how to extract the updated weights used
to determine the Metropolis transitions in the MC moves.  I can get
information such as temperature and lambda from gmx energy, and
kinetic/potential energy and dH from the output md.xvg.  I've looked in
md.log.  However, I can't find where the weights are stored.
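For what it's worth, my best guess so far (unverified, hence this post) is that the periodic weight tables land in md.log under a header along the lines of "MC-lambda information", in which case something like this would pull them out:

```shell
# Dump the periodic expanded-ensemble weight tables from the run log;
# the header text and table length are assumptions, adjust after inspecting the log.
grep -A 15 "MC-lambda information" md.log | less
```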

I would also be thankful for any links on ST with GROMACS that I have 
missed.

Best wishes,

Gregory



Re: [gmx-users] Pressure coupling in expanded ensemble simulations

2019-05-17 Thread Gregory Man Kai Poon
Hi Michael,

I am just following up on your thoughts, shared on the mailing list, about
carrying out expanded ensemble at NVT and converting back to NPT.  Again, I
appreciate your advice in this area.

Best wishes,

Gregory


On 5/8/2019 12:01 PM, Michael Shirts wrote:

Yeah, this is unfortunately a place in the code where not all combinations
work - very long story.  Hopefully this will be working better in 2020.

What I would recommend is, if possible, performing the expanded ensemble
simulation at NVT.  Everything should work fine there (a paper is hopefully
coming out soon comparing a bunch of free energy methods).  One can always
correct the free energy at lambda=1 from NVT to NPT.  I can fill in the
details.

You do NOT want to use Berendsen for NPT when running expanded ensemble.
The results will be incorrect (as I have learned by sad experience).

On Wed, May 8, 2019 at 8:14 AM Gregory Man Kai Poon 
<mailto:gp...@gsu.edu> wrote:



Hi all:

We are interested in doing expanded ensemble simulations (such as simulated
tempering) in GROMACS.  Extensive fiddling with the settings and googling
of other people's experience suggests that these simulations must use the
md-vv integrator, which in turn is compatible with Berendsen or MTTK
pressure coupling.  However, MTTK does not work with constraints, which
the force fields need.  Berendsen can handle constraints but is not
recommended for preserving thermodynamic ensembles.  Any ideas on how one
should proceed?
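For concreteness, an incomplete mdp fragment for the NVT route (the option names are from the GROMACS 2016 mdp reference as I read it; the values are illustrative, and the usual tc-grps, lambda-vector, and run-control settings are omitted):

```
; NVT simulated tempering via the expanded ensemble (illustrative fragment)
integrator                  = md-vv
tcoupl                      = v-rescale
ref-t                       = 300
pcoupl                      = no           ; NVT sidesteps the md-vv barostat limitations
free-energy                 = expanded
simulated-tempering         = yes
sim-temp-low                = 280
sim-temp-high               = 330
simulated-tempering-scaling = linear
nstexpanded                 = 100          ; attempt a temperature move every 100 steps
lmc-stats                   = wang-landau  ; update weights with the WL algorithm
lmc-move                    = metropolis
```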

Many thanks for your thoughts.

Gregory




--

Gregory M. K. Poon, PhD, RPh
Associate Professor
Departments of Chemistry and Nutrition | Georgia State University
NSC 414/415/416 | 50 Decatur St. SE, Atlanta, GA 30302
P.O. Box 3965 | Atlanta, GA 30303
Ph (404) 413-5491 | gp...@gsu.edu<mailto:gp...@gsu.edu>


Re: [gmx-users] Pressure coupling problem

2019-05-25 Thread Gregory Man Kai Poon
Hi Ben,

Could you try your simulation on another GROMACS build (on a different machine 
perhaps) to isolate the functional cause of your symptoms?  Or perhaps use 
another FF on the same structure?  I think this would be of interest to many 
users.

Best wishes,

Gregory


On 5/25/2019 5:33 AM, Tam, Benjamin wrote:

Hi Jacob,

Thank you for your responses. My system is set up to generate a polymer with
over 40800 atoms, so initially I need to simulate the system at different
pressures in order to set up a "correct" simulation system. I have simulated
with the Berendsen barostat for 10 ns, and from the density and energy plots I
can confirm that the system box size remained stable. Yet with the P-R
barostat the explosion happened within 100 ps, and somehow the system can
continue to run. There are no restraints or constraints on the atoms or the
box. I even checked a box filled with TIP4P water molecules alone; the box
still exploded with P-R.

However, when I changed the version, the problem seems to be solved, which is
rather strange and hard to believe.

Best,

Ben

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of Jacob Monroe
Sent: 24 May 2019 18:56
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Pressure coupling problem

Hi Ben,

I haven’t seen a response yet, so since I’ve encountered this type of thing
before, I’ll chime in with my two cents.

I’ve run into similar issues before, and it has always turned out to be
something with the way I set up the simulation.  What I’ve found is that if
things tend to explode with the Parrinello-Rahman barostat, they are also
exploding with the Berendsen barostat, just at a slower rate due to the
tendency of the latter algorithm to incorrectly overdamp fluctuations.  I’ve
also found that some simulations stay stable longer than others simply by
chance, but if you run long enough they will eventually explode as well.  My
recommendation is to watch the simulation box carefully or plot its
dimensions over time - if it just keeps growing, no matter how slowly, you
have a problem.

Do you have any atoms in the box frozen?  Or position restraints with large
spring constants?  Sometimes restraints holding two molecules close to each
other can, depending on the system, also cause odd behavior.  Without knowing
more about your system, I’ll also ask: do you actually need a barostat, or
will NVT be fine?  Or are there ways to relax your system to an appropriate
density without seeing the problem behavior?
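One way to do the box-dimension check suggested above is with gmx energy; the term names ("Volume", "Box-X") follow the tool's interactive prompt, so confirm them against your own .edr file:

```shell
# Extract box volume and the X box vector over time into xvg files for plotting:
echo "Volume" | gmx energy -f md.edr -o volume.xvg
echo "Box-X"  | gmx energy -f md.edr -o box_x.xvg
```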

Best,
Jacob

On May 23, 2019, at 10:23 AM, Tam, Benjamin <benjamin.tam...@ucl.ac.uk> wrote:

Dear all,

To follow up on this email: it seems the barostat problem comes down to the
version. When I tested with gromacs/2016.3/intel-2017-update1, the system
remained stable for 1 ns, while the same system exploded with
gromacs/2018.3/intel-2018.

Was there some kind of bug in the newer version or am I missing something?

Best,

Ben
-----Original Message-----
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se On Behalf Of Tam, Benjamin
Sent: 23 May 2019 11:40
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] Pressure coupling problem

Dear Gromacs user,

Currently I am baffled by a simulation I am running, and I hope to find some
answers here.

Initially, I ran my system with the Berendsen barostat to equilibrate for 1
ns. The system looks fine: there is no box explosion.

However, after the equilibration I changed the barostat to Parrinello-Rahman,
and the box exploded. I tried to debug my system by turning off Lennard-Jones
and charges (separately and together), yet the box still explodes for no
apparent reason. The system is set at 300 K and 1 bar.

Here I should mention that the intramolecular bonds are correct, as the
molecules themselves did not explode. The only thing I varied is the
barostat. Can anyone give me some clue as to what is going on and why
changing the barostat causes this effect?

Thank you very much.

Best regards,

Ben

[gmx-users] trjconv -drop options

2019-06-11 Thread Gregory Man Kai Poon
Hi all:

I am trying to use the gmx trjconv -drop options to pull frames
according to criteria on the Y column of an .xvg output (e.g.,
rmsd.xvg), and I am unable to find documentation beyond the command
reference.  It sounds like one specifies the file with -drop and then
uses -dropover and -dropunder to specify a range, presumably on the Y
value, but the documentation does not actually say.  I am unable to
figure out the mechanics of these parameters, as trial values based on
the sample Y columns produce unexpected selections of frames.  If anyone
can share their experience, that would be great.
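My current reading of the command reference (unverified, hence this post) is that frames whose Y value falls below -dropunder or above -dropover are omitted, so a filter would look something like:

```shell
# Keep only frames whose rmsd.xvg Y value lies between 0.2 and 0.4 nm
# (my interpretation of the flags; cutoffs are illustrative):
gmx trjconv -s md.tpr -f md.xtc -drop rmsd.xvg -dropunder 0.2 -dropover 0.4 -o subset.xtc
```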

Many thanks,

Gregory


Re: [gmx-users] Pressure coupling in expanded ensemble simulations

2019-05-20 Thread Gregory Man Kai Poon
Hi Michael,

Our system of interest here is a set of small proteins with disordered regions.
The literature suggests that, on the one hand, RE methods are not well suited to
sampling systems with dynamically "hot/cold" regions (which instead require more
specialized techniques, such as the one linked below), while, on the other hand,
approaches such as simulated tempering may be more efficient.  We also wanted
lower demands on CPU cores, given the level of hardware we have access to.

https://www.ncbi.nlm.nih.gov/pubmed/25136274

We have been working with toy systems such as the Ala10 peptide in TIP3P to 
define the workflow with Metropolis MC moves and the WL histogram.  As you 
suggested previously, we ran an NVT ensemble from 280 to 330 K, with 11 
linearly spaced intervals (with a v-rescale tcoupl), starting with a 
NPT-equilibrated system.  The usual diagnostics look okay (e.g., acceptance 
ratio).  The pressure of the system after several hundred ns runs did not seem 
very different, given the typical RMSD on the order of 100's of bar even when 
pressure coupling is on.  Obviously, being able to move the sampling back to NPT
would be very helpful.  Kindly let me know what specific info you would like to
see.

Thanks again for your help,

Gregory


On 5/19/2019 11:05 PM, Michael Shirts wrote:

Ah, sorry, I thought there was more information coming.

Have you considered just using temperature replica exchange?  It's not that
much less efficient, and is easier to deal with.  Replica exchange should
be working with NPT (as long as you use Parrinello-Rahman and a reasonable
temperature control algorithm).  The size scaling is about the same; i.e.
if you need a lot of replicas, you will also need a lot of expanded
ensemble intermediates.




On Fri, May 17, 2019 at 8:06 AM Gregory Man Kai Poon <gp...@gsu.edu> wrote:



Hi Michael,

I am just following up on your thoughts on carrying out expanded
ensemble simulations at NVT and converting back to NPT, as discussed on the
mailing list.  Again I appreciate your advice in this area.

Best wishes,

Gregory


On 5/8/2019 12:01 PM, Michael Shirts wrote:

Yeah, this is an unfortunate place in the code where not all combinations
work - very long story.  Hopefully this will be working better in 2020.

What I would recommend is, if possible, performing the expanded ensemble
simulation at NVT.  Everything should work fine there (a paper comparing a
bunch of free energy methods is hopefully coming out soon).  One can always
correct the free energy at lambda=1 from NVT to NPT.  I can fill in the
details.
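In rough terms, that correction comes from G = A + pV; to first order
(assuming small volume fluctuations and a fixed target pressure p), one common
sketch of it is:

```latex
% NVT -> NPT end-state correction (a first-order sketch, not exact):
% \Delta A_{NVT} is the free energy difference computed in the NVT ensemble;
% \langle V \rangle_0, \langle V \rangle_1 are the average volumes of the
% two end states at pressure p.
\Delta G_{NPT} \approx \Delta A_{NVT}
  + p \left( \langle V \rangle_1 - \langle V \rangle_0 \right)
```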

You do NOT want to use Berendsen for NPT when running expanded ensemble.
The results will be incorrect (as I have learned by sad experience).

On Wed, May 8, 2019 at 8:14 AM Gregory Man Kai Poon <gp...@gsu.edu> wrote:





Hi all:

We are interested in doing expanded ensemble simulations (such as simulated
tempering) in GROMACS.  Extensive fiddling with the settings and
googling on other people's experience suggests that these simulations
must use the md-vv integrator, which in turn is compatible with
Berendsen or MTTK coupling for pressure.  However, MTTK does not work
with constraints, which are needed for the forcefields.  Berendsen can
handle constraints but is not recommended for preserving thermodynamic
ensembles.  Any ideas on how one should proceed?

Many thanks for your thoughts.

Gregory




--

Gregory M. K. Poon, PhD, RPh
Associate Professor
Departments of Chemistry and Nutrition | Georgia State University
NSC 414/415/416 | 50 Decatur St. SE, Atlanta, GA 30302
P.O. Box 3965 | Atlanta, GA 30303
Ph (404) 413-5491 | gp...@gsu.edu

[gmx-users] Pressure coupling in expanded ensemble simulations

2019-05-08 Thread Gregory Man Kai Poon
Hi all:

We are interested in doing expanded ensemble simulations (such as simulated
tempering) in GROMACS.  Extensive fiddling with the settings and
googling on other people's experience suggests that these simulations 
must use the md-vv integrator, which in turn is compatible with 
Berendsen or MTTK coupling for pressure.  However, MTTK does not work 
with constraints, which are needed for the forcefields.  Berendsen can 
handle constraints but is not recommended for preserving thermodynamic 
ensembles.  Any ideas on how one should proceed?

Many thanks for your thoughts.

Gregory



Re: [gmx-users] Simulated tempering using GROMACS software package (protein+membrane)

2019-07-28 Thread Gregory Man Kai Poon
Hi Pratiti,

There could be several reasons for your temperatures not moving.  Assuming
that your mdp options are set up correctly, your weights may be the problem.
If they are very far off from optimal, the Monte Carlo step may never accept a
proposed move.  The md.log file should provide some insight into that.
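As a sanity check on the setup itself: with simulated-tempering-scaling = linear,
each temperature_lambdas entry maps to T = sim-temp-low + lambda * (sim-temp-high
- sim-temp-low).  A throwaway script like the sketch below prints the ladder your
mdp implies (the 300-355 K range and lambda values are taken from your settings):

```shell
# Print the temperature ladder implied by linear simulated-tempering scaling.
temp_for_lambda() {
  # args: T_low T_high lambda
  awk -v tl="$1" -v th="$2" -v l="$3" 'BEGIN { printf "%.2f", tl + l * (th - tl) }'
}

for lam in 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 \
           0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00; do
  echo "lambda=$lam  T=$(temp_for_lambda 300 355 "$lam") K"
done
```

If md.log shows the walker stuck in state 0 with near-zero acceptance between
the low states of this ladder, the initial weights (init-lambda-weights) are the
usual suspect.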

Hope this helps,

Gregory







From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of Pratiti Bhadra
Sent: Friday, July 26, 2019 7:36:42 AM
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] Simulated tempering using GROMACS software package (protein+membrane)

Dear User,

I am trying simulated tempering with Gromacs 2018.1.

My mdp settings are:


nstexpanded = 100
simulated-tempering = yes
sim-temp-low = 300
sim-temp-high = 355
simulated-tempering-scaling = linear
init_lambda_state = 0
temperature_lambdas = 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45
0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00

But the temperature of the simulation is not shifting; it is always 300 K.
I am puzzled about what I am doing wrong and what parameters I have to set.

Regards,
Pratiti

--
Pratiti Bhadra
Post Doctoral Research Fellow
University of Macau, Macau, China


Re: [gmx-users] simulation on 2 gpus

2019-07-26 Thread Gregory Man Kai Poon
Hi Kevin,
Thanks for your very useful post.  Could you give a few command-line examples
of how to start multiple runs at different times (e.g., allocate a subset of
the CPU/GPU resources to one run, and start another run later using another
subset of the yet-unallocated CPU/GPU resources)?  Also, could you elaborate on
the drawbacks of the MPI compilation that you hinted at?
Gregory

From: Kevin Boyd
Sent: Thursday, July 25, 2019 10:31 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] simulation on 2 gpus

Hi,

I've done a lot of research/experimentation on this, so I can maybe get you
started - if anyone has any questions about the essay to follow, feel free
to email me personally, and I'll link it to the email thread if it ends up
being pertinent.

First, there are some more internet resources to check out. See Mark's talk at
https://bioexcel.eu/webinar-performance-tuning-and-optimization-of-gromacs/
Gromacs development moves fast, but a lot of it is still relevant.

I'll expand a bit here, with the caveat that Gromacs GPU development is
moving very fast and so the correct commands for optimal performance are
both system-dependent and a moving target between versions. This is a good
thing - GPUs have revolutionized the field, and with each iteration we make
better use of them. The downside is that it's unclear exactly what sort of
CPU-GPU balance you should look to purchase to take advantage of future
developments, though the trend is certainly that more and more computation
is being offloaded to the GPUs.

The most important consideration is that to get maximum total throughput
performance, you should be running not one but multiple simulations
simultaneously. You can do this through the -multidir option, but I don't
recommend that in this case, as it requires compiling with MPI and limits
some of your options. My run scripts usually use "gmx mdrun ... &" to
initiate subprocesses, with combinations of -ntomp, -ntmpi, -pin
-pinoffset, and -gputasks. I can give specific examples if you're
interested.
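As a quick illustration of that pattern, here is a sketch that just prints the
two launch lines rather than running them (every number is an assumption for a
hypothetical 32-core, 2-GPU box with inputs in sim1/ and sim2/; drop the echo to
launch for real, and verify the pinning with htop or similar):

```shell
# Print (not run) mdrun launch lines for two independent runs started at
# different times, pinned to disjoint cores and separate GPUs.
launch_cmd() {
  # args: run_dir first_core gpu_id  (16 OpenMP threads per run assumed)
  echo "cd $1 && gmx mdrun -deffnm md -ntmpi 1 -ntomp 16 -pin on -pinoffset $2 -pinstride 1 -gputasks $3 &"
}

launch_cmd sim1 0 0    # run 1: cores 0-15, GPU 0, start now
launch_cmd sim2 16 1   # run 2: cores 16-31, GPU 1, start later
```

To oversubscribe a GPU instead, give both runs the same -gputasks digit (e.g. 0
for both); with -ntmpi 1 and only the nonbonded task offloaded, the -gputasks
string is a single digit per run.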

Another important point is that you can run more simulations than the
number of GPUs you have. Depending on CPU-GPU balance and quality, you
won't double your throughput by e.g. putting 4 simulations on 2 GPUs, but
you might increase it up to 1.5x. This would involve targeting the same GPU
with -gputasks.

Within a simulation, you should set up a benchmarking script to figure out
the best combination of thread-mpi ranks and open-mp threads - this can
have pretty drastic effects on performance. For example, if you want to use
your entire machine for one simulation (not recommended for maximal
efficiency), you have a lot of decomposition options (ignoring PME - which
is important, see below):

-ntmpi 2 -ntomp 32 -gputasks 01
-ntmpi 4 -ntomp 16 -gputasks 0011
-ntmpi 8 -ntomp 8  -gputasks 00001111
-ntmpi 16 -ntomp 4 -gputasks 0000000011111111
(and a few others - note that ntmpi * ntomp = total threads available)
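A scan over those decompositions is easy to script. Here is a sketch that just
prints the candidate commands for a hypothetical 64-thread, 2-GPU machine
(bench.tpr, the use of -resethway for short benchmark runs, and the even GPU
split are my assumptions - adapt to your own box):

```shell
# Enumerate thread-MPI / OpenMP decompositions for a 64-thread, 2-GPU machine
# and print the corresponding mdrun benchmark invocations.
total=64
for ntmpi in 2 4 8 16; do
  ntomp=$((total / ntmpi))
  # Build a -gputasks string: first half of the ranks -> GPU 0, rest -> GPU 1.
  tasks=""
  i=0
  while [ "$i" -lt "$ntmpi" ]; do
    if [ "$i" -lt $((ntmpi / 2)) ]; then tasks="${tasks}0"; else tasks="${tasks}1"; fi
    i=$((i + 1))
  done
  echo "gmx mdrun -s bench.tpr -deffnm bench_${ntmpi}x${ntomp} -resethway -ntmpi $ntmpi -ntomp $ntomp -gputasks $tasks"
done
```

Pipe the output into a job script, then compare the ns/day figures at the end
of each log.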

In my experience, you need to scan the options in a benchmarking script for
each simulation size/content you want to simulate, and the difference
between the best and the worst can be up to a factor of 2-4 in terms of
performance. If you're splitting your machine among multiple simulations, I
suggest running 1 mpi thread (-ntmpi 1) per simulation, unless your
benchmarking suggests that the optimal performance lies elsewhere.

Things get more complicated when you start putting PME on the GPUs. For the
machines I work on, putting PME on GPUs absolutely improves performance,
but I'm not fully confident in that assessment without testing your
specific machine - you have a lot of cores with that threadripper, and this
is another area where I expect Gromacs 2020 might shift the GPU-CPU optimal
balance.

The issue with PME on GPUs is that we can (currently) only have one rank
doing GPU PME work. So, if we have a machine with say 20 cores and 2 gpus,
if I run the following

gmx mdrun  -ntomp 10 -ntmpi 2 -pme gpu -npme 1 -gputasks 01

, two ranks will be started - one with cores 0-9, will work on the
short-range interactions, offloading where it can to GPU 0, and the PME
rank (cores 10-19)  will offload to GPU 1. There is one significant problem
(and one minor problem) with this setup. First, it is massively inefficient
in terms of load balance. In a typical system (there are exceptions), PME
takes up ~1/3 of the computation that short-range interactions take. So, we
are offloading 1/4 of our interactions to one GPU and 3/4 to the other,
which leads to imbalance. In this specific case (2 GPUs and sufficient
cores), the most optimal solution is often (but not always) to run with
-ntmpi 4 (in this example, then -ntomp 

[gmx-users] Failed make check

2020-05-02 Thread Gregory Man Kai Poon
Hello all,
I was trying to install GROMACS 2020.2 and encountered a failed test at make 
check (#43, mdrun).  I pasted what I think is the pertinent snippet below and 
the full output is in the link:
https://www.dropbox.com/s/nfq6yojjdslhxur/make_check.log?dl=0
The cmake invocation was: cmake .. -DGMX_BUILD_OWN_FFTW=ON
-DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=on
I have been using GROMACS 2020 on the machine with various GTX GPUs without 
issues.  For what it's worth, when I re-ran make check for that version, it 
passed.
Many thanks for your help in advance,
Gregory


[--] 1 test from OriresTest
[ RUN  ] OriresTest.OriresCanRun

NOTE 1 [file 
/home/e5-1650/Downloads/gromacs-2020.2/build/src/programs/mdrun/tests/Testing/Temporary/OriresTest_OriresCanRun_input.mdp]:
  The Berendsen thermostat does not generate the correct kinetic energy
  distribution. You might want to consider using the V-rescale thermostat.

Setting the LD random seed to -949742347
Generated 2145 of the 2145 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 2145 of the 2145 1-4 parameter combinations
Excluding 3 bonded neighbours molecule type 'Protein_chain_A'
Excluding 2 bonded neighbours molecule type 'SOL'
Number of degrees of freedom in T-Coupling group System is 518.00

NOTE 2 [file 
/home/e5-1650/Downloads/gromacs-2020.2/build/src/programs/mdrun/tests/Testing/Temporary/OriresTest_OriresCanRun_input.mdp]:
  You are using a plain Coulomb cut-off, which might produce artifacts.
  You might want to consider using PME electrostatics.



There were 2 notes
Reading file 
/home/e5-1650/Downloads/gromacs-2020.2/build/src/programs/mdrun/tests/Testing/Temporary/OriresTest_OriresCanRun.tpr,
 VERSION 2020 (single precision)
Orientation restraints only supports a single rank. Choosing to use only a 
single thread-MPI rank.

---
Program: mdrun-test, version 2020
Source file: src/gromacs/listed_forces/orires.cpp (line 127)

Fatal error:
Found 10 copies of a molecule with orientation restrains while the current
code only supports a single copy. If you want to ensemble average, run
multiple copies of the system using the multi-sim feature of mdrun.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
