Re: [gmx-users] Trajectory

2019-07-18 Thread Dallas Warren
I'd split the .xtc into two separate trajectories, 0-99 and 100-200 using:

gmx trjconv -f 0-200.xtc -o 0-99.xtc -e 99
gmx trjconv -f 0-200.xtc -o 100-200.xtc -b 100

Note that the actual end time of the 0-99 piece will be 100 ns minus the time
interval at which you save frames to the .xtc
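
If you then want a single water-free trajectory covering the whole 200 ns, a possible follow-up is to strip the water from the second piece and concatenate the two (a sketch only; it assumes an index.ndx with a non-Water group and a .tpr that matches the full, water-containing system):

gmx trjconv -f 100-200.xtc -s full.tpr -n index.ndx -o 100-200_nowater.xtc   (choose the non-Water group when prompted)
gmx trjcat -f 0-99.xtc 100-200_nowater.xtc -o 0-200_nowater.xtc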

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
-
When the only tool you own is a hammer, every problem begins to resemble a
nail.


On Fri, 19 Jul 2019 at 13:18, Alex  wrote:

> Dear Gromacs user,
>
> I have a single 200 ns simulation's xtc-trajectory for which the
> "compressed-x-grps = non-Water" option was used for the first
> 100 ns, so that part contains no water molecules, while the second 100
> ns part contains water. The mdp file was changed at 100 ns by removing
> the "compressed-x-grps = non-Water" line.
>
> Now when I apply gmx trjconv to this .xtc file, the output only covers
> 0 to 100 ns irrespective of the .tpr file I use; that is, only frames up
> to 100 ns are read, but I need to extract data from 100 to 200 ns too.
>
> So, I wonder if there is any way to save me from re-simulating?
>
> I already have the .trr file, which contains all the atoms for the whole
> simulation, but it was written at a much lower frequency than the .xtc.
>
> Regards,
> Alex
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


[gmx-users] Trajectory

2019-07-18 Thread Alex
Dear Gromacs user,

I have a single 200 ns simulation's xtc-trajectory for which the
"compressed-x-grps = non-Water" option was used for the first
100 ns, so that part contains no water molecules, while the second 100
ns part contains water. The mdp file was changed at 100 ns by removing
the "compressed-x-grps = non-Water" line.

Now when I apply gmx trjconv to this .xtc file, the output only covers
0 to 100 ns irrespective of the .tpr file I use; that is, only frames up
to 100 ns are read, but I need to extract data from 100 to 200 ns too.

So, I wonder if there is any way to save me from re-simulating?

I already have the .trr file, which contains all the atoms for the whole
simulation, but it was written at a much lower frequency than the .xtc.

Regards,
Alex


Re: [gmx-users] Install on Windows 10 with AMD GPU

2019-07-18 Thread James Burchfield
Thanks Szilárd,

I discovered this after modifying the environmental variables.
Decided to throw in the towel.

If time ever permits I may try the linux approach.

Many thanks 
James

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Szilárd Páll
Sent: Thursday, 18 July 2019 11:22 PM
To: Discussion list for GROMACS users 
Cc: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Install on Windows 10 with AMD GPU

On Thu, Jul 11, 2019 at 6:33 AM James Burchfield < 
james.burchfi...@sydney.edu.au> wrote:

> I suspect the issue is that
> 64bit OpenCL is required and 32bit is enabled by default on this card.
> Apparently I can somewhere set  GPU_FORCE_64BIT_PTR=1 But no idea how 
> to do this yet...
>

GPU_FORCE_64BIT_PTR seems to be an environment variable which will only affect 
the runtime behavior. However, for that you first need to configure a GROMACS 
build and compile successfully. As far as I can tell, you still get stuck in 
the first stage as cmake can not detect the required dependencies.

--
Szilárd


>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se < 
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of 
> Szilárd Páll
> Sent: Tuesday, 9 July 2019 10:46 PM
> To: Discussion list for GROMACS users 
> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] Install on Windows 10 with AMD GPU
>
> Hi James,
>
> On Mon, Jul 8, 2019 at 10:57 AM James Burchfield < 
> james.burchfi...@sydney.edu.au> wrote:
>
> > Thankyou Szilárd,
> > Headers are available here
> > https://protect-au.mimecast.com/s/V9RBCVAGXPtKGBl5sGJu7r?domain=github.com
> > But I get
> > CMake Error at cmake/gmxManageOpenCL.cmake:45 (message):
> >   OpenCL is not supported.  OpenCL version 1.2 or newer is required.
> > Call Stack (most recent call first):
> >   CMakeLists.txt:236 (include)
> >
> > I am setting
> > OpenCL_include_DIR to C:/Users/Admin/ OpenCL-Headers-master/CL
> >
>
> That path should not include "CL" (the header is expected to be 
> included as CL/cl.h).
>
> Let me know if that helps.
>
> --
> Szilárd
>
>
> > OpenCL_INCLUDE_DIR OpenCL_Library to C:/Windows/System32/OpenCL.dll
> >
> >
> > The error file includes
> >   Microsoft (R) C/C++ Optimizing Compiler Version 19.21.27702.2 for
> > x64
> >
> >   CheckSymbolExists.c
> >
> >   Copyright (C) Microsoft Corporation.  All rights reserved.
> >
> >   cl /c /Zi /W3 /WX- /diagnostics:column /Od /Ob0 /D WIN32 /D 
> > _WINDOWS /D "CMAKE_INTDIR=\"Debug\"" /D _MBCS /Gm- /RTC1 /MDd /GS 
> > /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /Fo"cmTC_2c430.dir\Debug\\"
> > /Fd"cmTC_2c430.dir\Debug\vc142.pdb" /Gd /TC /errorReport:queue 
> > "C:\Program Files\gromacs\CMakeFiles\CMakeTmp\CheckSymbolExists.c"
> >
> > C:\Program Files\gromacs\CMakeFiles\CMakeTmp\CheckSymbolExists.c(2,10):
> > error C1083:  Cannot open include file:
> > 'OpenCL_INCLUDE_DIR-NOTFOUND/CL/cl.h': No such file or directory 
> > [C:\Program Files\gromacs\CMakeFiles\CMakeTmp\cmTC_2c430.vcxproj]
> >
> >
> > File C:/Program Files/gromacs/CMakeFiles/CMakeTmp/CheckSymbolExists.c:
> > /* */
> > #include <CL/cl.h>
> >
> > int main(int argc, char** argv)
> > {
> >   (void)argv;
> > #ifndef CL_VERSION_1_0
> >   return ((int*)(&CL_VERSION_1_0))[argc];
> > #else
> >   (void)argc;
> >   return 0;
> > #endif
> > }
> >
> >
> > Guessing it is time to give up
> >
> > Cheers
> > James
> >
> >
> >
> >
> > -Original Message-
> > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se < 
> > gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of 
> > Szilárd Páll
> > Sent: Friday, 5 July 2019 10:20 PM
> > To: Discussion list for GROMACS users 
> > Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> > Subject: Re: [gmx-users] Install on Windows 10 with AMD GPU
> >
> > Dear James,
> >
> > Unfortunately, we have very little experience with OpenCL on 
> > Windows, so
> I
> > am afraid I can not advise you on specifics. However, note that the 
> > only part of the former SDK that is needed is the OpenCL headers and 
> > loader libraries (libOpenCL) which is open source software that can 
> > be obtained from the standards body, Khronos. Not sure what the 
> > mechanism is for Windows, but for Linux these components are 
> > packaged in the standard repositories of most distributions.
> >
> > However, before going through a large effort of trying to get 
> > GROMACS running on Windows + AMD + OpenCL, you might want to 
> > consider evaluating the potential benefits of the hardware. As these 
> > cards are quite dated
> you
> > might find that they do not provide enough performance benefit to 
> > warrant the effort required -- especially as, if you have a 
> > workstation with significant CPU resources, you might find that 
> > GROMACS runs nearly as
> fast
> > or faster on the CPU only (that's because we have very efficient CPU 
> > SIMD code for all compute-intensive work).
> >
> > To do a 

Re: [gmx-users] Need to install latest Gromacs on ios

2019-07-18 Thread Szilárd Páll
Hi,

Are you sure you mean iOS not OS X?

What exactly does not work? An error message / cmake output would be more useful.
cmake generally does detect your system C++ compiler if there is one.
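
If it helps, a quick way to check for the compilers from a terminal (a sketch, assuming the Xcode command line tools are what you want):

xcode-select --install    # installs the command line tools if they are missing
clang --version           # C compiler
clang++ --version         # C++ compiler
cmake --version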

Cheers
--
Szilárd


On Thu, Jul 18, 2019 at 4:55 PM andrew goring 
wrote:

> Hi,
>
> I need to install the latest version of gromacs on the latest Apple software.
>
> I follow the "quick and dirty" instructions, but it does not work.
>
> I believe I have all of the proper software and compilers installed, as I
> have Xcode up to date (although I can't figure out how to check whether the C
> and C++ compilers are present).
>
> Would anyone be able to walk me through this? I think there is something
> simple I am not doing, as I do not have experience installing source code.
>
> Thanks,
>
> Andrew K. Goring
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>

[gmx-users] Need to install latest Gromacs on ios

2019-07-18 Thread andrew goring
Hi,

I need to install the latest version of gromacs on the latest Apple software.

I follow the "quick and dirty" instructions, but it does not work.

I believe I have all of the proper software and compilers installed, as I
have Xcode up to date (although I can't figure out how to check whether the C
and C++ compilers are present).

Would anyone be able to walk me through this? I think there is something
simple I am not doing, as I do not have experience installing source code.

Thanks,

Andrew K. Goring


Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-18 Thread Michael Williams
Hi Szilárd,

Thanks for the interesting observations on recent hardware. I was wondering if 
you could comment on the use of somewhat older server cpus and motherboards 
(versus more cutting edge consumer parts). I recently noticed that Haswell era 
Xeon cpus (E5 v3) are quite affordable now (~$400 for 12 core models with 40 
pcie lanes) and so are the corresponding 2 cpu socket server motherboards. Of 
course the RAM is slower than what can be used with the latest Ryzen or i7/i9 
cpus. Are there any other bottlenecks with this somewhat older server hardware 
that I might not be aware of? Thanks again for the interesting information and 
practical advice on this topic. 

Mike 


> On Jul 18, 2019, at 2:21 AM, Szilárd Páll  wrote:
> 
> PS: You will get more PCIe lanes without motherboard trickery -- and note
> that consumer motherboards with PCIe switches can sometimes cause
> instabilities when under heavy compute load -- if you buy the aging and
> quite overpriced i9 X-series like the i9-7920 with 12 cores or the
> Threadripper 2950x 16 cores and 60 PCIe lanes.
> 
> Also note that more cores always win when the CPU performance matters,
> and while 8 cores are generally sufficient, in some use-cases they may not
> be (like runs with free energy).
> 
> --
> Szilárd
> 
> 
> On Thu, Jul 18, 2019 at 10:08 AM Szilárd Páll 
> wrote:
> 
>> On Wed, Jul 17, 2019 at 7:00 PM Moir, Michael (MMoir) 
>> wrote:
>> 
>>> This is not quite true.  I certainly observed this degradation in
>>> performance using the 9900K with two GPUs as Szilárd states using a
>>> motherboard with one PCIe controller, but the limitation is from the
>>> motherboard not from the CPU.
>> 
>> 
>> Sorry, but that's not the case. PCIe controllers have been integrated into
>> CPUs for many years; see
>> 
>> https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-introduction-basics-paper.pdf
>> 
>> https://www.microway.com/hpc-tech-tips/common-pci-express-myths-gpu-computing/
>> 
>> So no, the limitation is the CPU itself. Consumer CPUs these days have 24
>> lanes total, some of which are used to connect the CPU to the chipset, and
>> effectively you get 16-20 lanes (BTW here too the new AMD CPUs win as they
>> provide 16 lanes for GPUs and similar devices and 4 lanes for NVMe, all on
>> PCIe 4.0).
>> 
>> 
>>>  It is possible to obtain a motherboard that contains two PCIe
>>> controllers which overcomes this obstacle for not a whole lot more money.
>>> 
>> 
>> It is possible to buy motherboards with PCIe switches. These don't
>> increase the number of lanes; they just do what a switch does: as long as not all
>> connected devices try to use the full capacity of the CPU (!) at the same
>> time, you can get full speed on all connected devices.
>> e.g.:
>> https://techreport.com/r.x/2015_11_19_Gigabytes_Z170XGaming_G1_motherboard_reviewed/05-diagram_pcie_routing.gif
>> 
>> Cheers,
>> --
>> Szilárd
>> 
>> Mike
>>> 
>>> -Original Message-
>>> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
>>> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of Szilárd
>>> Páll
>>> Sent: Wednesday, July 17, 2019 8:14 AM
>>> To: Discussion list for GROMACS users 
>>> Subject: [**EXTERNAL**] Re: [gmx-users] Xeon Gold + RTX 5000
>>> 
>>> Hi Alex,
>>> 
>>> I've not had a chance to test the new 3rd gen Ryzen CPUs, but all
>>> public benchmarks out there point to the fact that they are a major
>>> improvement over the previous generation Ryzen -- which were already
>>> quite competitive for GPU-accelerated GROMACS runs compared to Intel,
>>> especially in perf/price.
>>> 
>>> One caveat for dual-GPU setups on the i9 9900 or the Ryzen 3900X is
>>> that they don't have enough PCI lanes for peak CPU-GPU transfer (x8
>>> for both of the GPUs) which will lead to a slightly less performance
>>> (I'd estimate <5-10%) in particular compared to i) having a single GPU
>>> plugged in into the machine ii) compare to CPUs like Threadripper or
>>> the i9 79xx series processors which have more PCIe lanes.
>>> 
>>> However, if throughput is the goal, the ideal use-case especially for
>>> small simulation systems like <=50k atoms is to run e.g. 2 runs / GPU,
>>> hence 4 runs on a 2-GPU system case in which the impact of the
>>> aforementioned limitation will be further decreased.
>>> 
>>> Cheers,
>>> --
>>> Szilárd
>>> 
>>> 
 On Tue, Jul 16, 2019 at 7:18 PM Alex  wrote:
 
 That is excellent information, thank you. None of us have dealt with AMD
 CPUs in a while, so would the combination of a Ryzen 3900X and two
 Quadro 2080 Ti be a good choice?
 
 Again, thanks!
 
 Alex
 
 
> On 7/16/2019 8:41 AM, Szilárd Páll wrote:
> Hi Alex,
> 
>> On Mon, Jul 15, 2019 at 8:53 PM Alex  wrote:
>> Hi all and especially Szilard!
>> 
>> My glorious management asked me to post this here. One of our group
>> members, an ex-NAMD guy, wants to use Gromacs for biophysics and the
>> following 

Re: [gmx-users] Install on Windows 10 with AMD GPU

2019-07-18 Thread Szilárd Páll
On Thu, Jul 11, 2019 at 6:33 AM James Burchfield <
james.burchfi...@sydney.edu.au> wrote:

> I suspect the issue is that
> 64bit OpenCL is required and 32bit is enabled by default on this card.
> Apparently I can somewhere set  GPU_FORCE_64BIT_PTR=1
> But no idea how to do this yet...
>

GPU_FORCE_64BIT_PTR seems to be an environment variable which will only
affect the runtime behavior. However, for that you first need to configure
a GROMACS build and compile successfully. As far as I can tell, you still
get stuck in the first stage as cmake can not detect the required
dependencies.
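
For reference, once the headers are found, the configure step might look roughly like this (a sketch only, not a tested command; it assumes GROMACS 2019's GMX_USE_OPENCL switch and reuses the paths from your earlier mail, with the include directory pointing at the folder that contains the CL subdirectory):

cmake .. -DGMX_GPU=on -DGMX_USE_OPENCL=on ^
  -DOpenCL_INCLUDE_DIR=C:/Users/Admin/OpenCL-Headers-master ^
  -DOpenCL_LIBRARY=C:/Windows/System32/OpenCL.dll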

--
Szilárd


>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of Szilárd
> Páll
> Sent: Tuesday, 9 July 2019 10:46 PM
> To: Discussion list for GROMACS users 
> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] Install on Windows 10 with AMD GPU
>
> Hi James,
>
> On Mon, Jul 8, 2019 at 10:57 AM James Burchfield <
> james.burchfi...@sydney.edu.au> wrote:
>
> > Thankyou Szilárd,
> > Headers are available here
> > https://protect-au.mimecast.com/s/-oPwCNLwM9ixRV85smybKT?domain=github.com
> > But I get
> > CMake Error at cmake/gmxManageOpenCL.cmake:45 (message):
> >   OpenCL is not supported.  OpenCL version 1.2 or newer is required.
> > Call Stack (most recent call first):
> >   CMakeLists.txt:236 (include)
> >
> > I am setting
> > OpenCL_include_DIR to C:/Users/Admin/ OpenCL-Headers-master/CL
> >
>
> That path should not include "CL" (the header is expected to be included
> as CL/cl.h).
>
> Let me know if that helps.
>
> --
> Szilárd
>
>
> > OpenCL_INCLUDE_DIR OpenCL_Library to C:/Windows/System32/OpenCL.dll
> >
> >
> > The error file includes
> >   Microsoft (R) C/C++ Optimizing Compiler Version 19.21.27702.2 for
> > x64
> >
> >   CheckSymbolExists.c
> >
> >   Copyright (C) Microsoft Corporation.  All rights reserved.
> >
> >   cl /c /Zi /W3 /WX- /diagnostics:column /Od /Ob0 /D WIN32 /D _WINDOWS
> > /D "CMAKE_INTDIR=\"Debug\"" /D _MBCS /Gm- /RTC1 /MDd /GS /fp:precise
> > /Zc:wchar_t /Zc:forScope /Zc:inline /Fo"cmTC_2c430.dir\Debug\\"
> > /Fd"cmTC_2c430.dir\Debug\vc142.pdb" /Gd /TC /errorReport:queue
> > "C:\Program Files\gromacs\CMakeFiles\CMakeTmp\CheckSymbolExists.c"
> >
> > C:\Program Files\gromacs\CMakeFiles\CMakeTmp\CheckSymbolExists.c(2,10):
> > error C1083:  Cannot open include file:
> > 'OpenCL_INCLUDE_DIR-NOTFOUND/CL/cl.h': No such file or directory
> > [C:\Program Files\gromacs\CMakeFiles\CMakeTmp\cmTC_2c430.vcxproj]
> >
> >
> > File C:/Program Files/gromacs/CMakeFiles/CMakeTmp/CheckSymbolExists.c:
> > /* */
> > #include <CL/cl.h>
> >
> > int main(int argc, char** argv)
> > {
> >   (void)argv;
> > #ifndef CL_VERSION_1_0
> >   return ((int*)(&CL_VERSION_1_0))[argc];
> > #else
> >   (void)argc;
> >   return 0;
> > #endif
> > }
> >
> >
> > Guessing it is time to give up
> >
> > Cheers
> > James
> >
> >
> >
> >
> > -Original Message-
> > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> > gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of Szilárd
> > Páll
> > Sent: Friday, 5 July 2019 10:20 PM
> > To: Discussion list for GROMACS users 
> > Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> > Subject: Re: [gmx-users] Install on Windows 10 with AMD GPU
> >
> > Dear James,
> >
> > Unfortunately, we have very little experience with OpenCL on Windows, so
> I
> > am afraid I can not advise you on specifics. However, note that the only
> > part of the former SDK that is needed is the OpenCL headers and loader
> > libraries (libOpenCL) which is open source software that can be obtained
> > from the standards body, Khronos. Not sure what the mechanism is for
> > Windows, but for Linux these components are packaged in the standard
> > repositories of most distributions.
> >
> > However, before going through a large effort of trying to get GROMACS
> > running on Windows + AMD + OpenCL, you might want to consider evaluating
> > the potential benefits of the hardware. As these cards are quite dated
> you
> > might find that they do not provide enough performance benefit to warrant
> > the effort required -- especially as, if you have a workstation with
> > significant CPU resources, you might find that GROMACS runs nearly as
> fast
> > or faster on the CPU only (that's because we have very efficient CPU SIMD
> > code for all compute-intensive work).
> >
> > To do a hopefully easier quick performance evaluation, you could simply
> > boot a Linux distribution off of an external disk; you can find Linux
> > drivers for these cards for Ubuntu 16.04/18.04 at least, which you can
> > install and see how well the system performs.
> >
> > I hope that helps!
> >
> > Cheers,
> > --
> > Szilárd
> >
> >
> > On Fri, Jul 5, 2019 at 9:11 AM James Burchfield <
> > james.burchfi...@sydney.edu.au> wrote:
> >
> > > Hi there,
> > >
> > > I was hoping to install  gromacs on a windows10 system that runs 2 AMD
> 

Re: [gmx-users] decreased performance with free energy

2019-07-18 Thread Szilárd Páll
David,

Yes, it is greatly affected. The standard interaction kernels are very
fast, but the free energy kernels are known to not be as efficient as they
could be, and the larger the fraction of atoms involved in perturbed
interactions, the more this work dominates the runtime.
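
One quick way to see this is to compare the timing summaries mdrun writes at the end of each log (just a sketch; the log file names are placeholders for your standard and free-energy runs):

grep "Performance:" nvt_run.log free_energy_run.log
tail -n 60 free_energy_run.log    # the cycle and time accounting table shows where the time goes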

Are you trying to set up production runs on the specific
hardware/software combination that you ran the tests on? There are a few
things you could try to get a bit better performance, but the details
depend on the hardware and software.

Expect major improvements in the upcoming release, we are doing some
thorough rework/optimization of the free energy kernels.

Cheers,
--
Szilárd


On Thu, Jul 18, 2019 at 10:24 AM David de Sancho 
wrote:

> Thanks Szilárd
> I have posted both in the Gist below for the free energy simulation
> https://gist.github.com/daviddesancho/4abdc0d40e2355671ead7f8e40283b57
> May it have to do with the number of particles in the box that are affected
> by the typeA -> typeB change?
>
> David
>
>
> Date: Wed, 17 Jul 2019 17:09:21 +0200
> > From: Szilárd Páll 
> > To: Discussion list for GROMACS users 
> > Subject: Re: [gmx-users] decreased performance with free energy
> > Message-ID:
> > <
> > cannyew4uszxnnwz56tzbqsjwkt3cu7pf+8hhfxa6nfug0o7...@mail.gmail.com>
> > Content-Type: text/plain; charset="UTF-8"
> >
> > Hi,
> >
> > Lower performance especially with GPUs is not unexpected, but what you
> report
> > is unusually large. I suggest you post your mdp and log file, perhaps
> there
> > are some things to improve.
> >
> > --
> > Szilárd
> >
> >
> > On Wed, Jul 17, 2019 at 3:47 PM David de Sancho 
> > wrote:
> >
> > > Hi all
> > > I have been doing some testing for Hamiltonian replica exchange using
> > > Gromacs 2018.3 on a relatively simple system with 3000 atoms in a cubic
> > > box.
> > > For the modified hamiltonian I have simply modified the water
> > interactions
> > > by generating a typeB atom in the force field ffnonbonded.itp with
> > > different parameters file and then creating a number of tpr files for
> > > different lambda values as defined in the mdp files. The only
> difference
> > > between mdp files for a simple NVT run and for the HREX runs are the
> > > following lines:
> > >
> > > > ; H-REPLEX
> > > > free-energy = yes
> > > > init-lambda-state = 0
> > > > nstdhdl = 0
> > > > vdw_lambdas = 0.0 0.05 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
> > >
> > > I have tested for performance in the same machine and compared the
> > standard
> > > NVT run performance (~175 ns/day in 8 cores) with that for the free
> > energy
> > > tpr file (6.2 ns/day).
> > > Is this performance loss what you would expect or are there any
> immediate
> > > changes you can suggest to improve things? I have found a relatively
> old
> > > post on this on Gromacs developers (
> > https://redmine.gromacs.org/issues/742
> > > ),
> > > but I am not sure whether it is the exact same problem.
> > > Thanks,
> > >
> > > David
> > > --
> > > Gromacs Users mailing list
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] make manual fails

2019-07-18 Thread Szilárd Páll
Is sphinx detected by cmake though?
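
A quick way to check (a sketch, assuming you run it in your build directory):

grep -i sphinx CMakeCache.txt

If the Sphinx executable entry there is empty or NOTFOUND, cmake did not pick up the python-sphinx installation at configure time.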
--
Szilárd


On Wed, Jul 17, 2019 at 8:00 PM Michael Brunsteiner 
wrote:

> hi,
> so I say:
> prompt> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_C_COMPILER=gcc-7 -DCMAKE_CXX_COMPILER=g++-7 -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=/home/michael/local/gromacs-2019-3-bin -DGMX_BUILD_MANUAL=on
> prompt> make -j 4
> prompt> make install
> prompt> make manual
> manual cannot be built because Sphinx expected minimum version 1.6.1 is not available
>
> although I seem to have version 1.8.4 (see below)
>
> prompt> apt policy python-sphinx
> python-sphinx:
>   Installed: 1.8.4-1
>   Candidate: 1.8.4-1
>   Version table:
>  *** 1.8.4-1 500
> 500 http://ftp.at.debian.org/debian buster/main amd64 Packages
> 500 http://ftp.at.debian.org/debian buster/main i386 Packages
> 100 /var/lib/dpkg/status
>
> anybody else seen this issue?
> cheers,
> michael
>
>
>
> === Why be happy when you could be normal?
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] core dump error in grompp command

2019-07-18 Thread Mark Abraham
Hi,

It is likely that grompp is trying to use all the available memory, and
failing shortly after unsuccessfully allocating an array whose size is
related to the number of particles. If so, we'd love to make that work
better for you. (We know a place where grompp is inefficient because we
allocate memory proportional to the square of the number of atom types, but
that is likely not the issue here!) If you can open an issue at
https://redmine.gromacs.org and upload your inputs, we can advise further
and/or find a fix once we see exactly where the problem occurs.
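
In case it is useful for the report, one way to capture grompp's peak memory use is with GNU time (a sketch, assuming /usr/bin/time is available on the node; the file names are taken from your command):

/usr/bin/time -v gmx_mpi grompp -f minimization.mdp -c 11_billion.gro -p 11_billion.top -o 11_billion.tpr 2> grompp_time.log
grep "Maximum resident set size" grompp_time.log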

Mark

On Thu., 18 Jul. 2019, 13:41 박세영 (student, School of Energy and Chemical Engineering)
wrote:

> Dear all,
> I am trying to run large coarse-grained biomolecular system which includes
> about 800million beads in it. (about 500million among them are water
> beads). The .gro file of my system is about 35.62GB. The problem is, that
> although I’m trying to run grompp command to make input .tpr file, I
> continuously get this not enough memory error:
>
> ===
> .
> .
> Excluding 1 bonded neighbours molecule type 'W'
> Excluding 1 bonded neighbours molecule type 'WF'
> Excluding 1 bonded neighbours molecule type 'W'
> Excluding 1 bonded neighbours molecule type 'WF'
> Removing all charge groups because cutoff-scheme=Verlet
>
> ---
> Program gmx_mpi, VERSION 5.0.6
> Source code file:
> /scratch/x1671a04/gromacs/gromacs-5.0.6/src/gromacs/utility/smalloc.c,
> line: 224
>
> Fatal error:
> Not enough memory. Failed to realloc -6970315816 bytes for b->a,
> b->a=ceb96010
> (called from file
> /scratch/x1671a04/gromacs/gromacs-5.0.6/src/gromacs/gmxlib/index.c, line
> 153)
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> ---
> : Cannot allocate memory
> Halting program gmx_mpi
>
> ===
>
>
> or this segmentation fault error:
>
> ===
> .
> .
> Excluding 1 bonded neighbours molecule type 'WF'
> Excluding 1 bonded neighbours molecule type 'W'
> Excluding 1 bonded neighbours molecule type 'NA'
> Excluding 1 bonded neighbours molecule type 'WF'
>
> NOTE 2 [file 11_billion.top, line 372]:
>   System has non-zero total charge: 22320.00
>   Total charge should normally be an integer. See
>   http://www.gromacs.org/Documentation/Floating_Point_Arithmetic
>   for discussion on how close it should be to an integer.
>
>
> Removing all charge groups because cutoff-scheme=Verlet
> /var/spool/slurm/d/job06849/slurm_script: line 10: 36729 Segmentation
> fault  (core dumped) gmx_mpi grompp -f minimization.mdp -c
> 11_billion.gro -p 11_billion.top -o 11_billion.tpr
>
> ===
>
>
> The gromacs version that I’m using is 5.0.6. I tried gromacs version of
> 5.0.6 and 2018.3, and grompp by double and single, but both did not work.
> This is my command line : gmx_mpi grompp -f minimization.mdp -c
> waterbox_for100billion.gro -p 800_billions_only_water_box.top -o test.tpr
> I’m running grompp command in CPU node which has 768GB of memory. I tried
> to find any method to generate .tpr file with parallel calculation, but I
> couldn’t, so I had to grompp the system in node with very large memory.
> However, when I tracked my memory usage during grompp, the maximum memory
> usage was only about 20% of total available memory. Therefore, I guess it
> may not be the problem of memory shortage.
>
> The .mdp file that I used in grompp is for minimization, and the contents
> are like this:
>
> define  = -DPOSRES
> integrator   = steep
> dt   = 0.01
> nsteps   = 25000
> nstcomm  = 100
> comm-grps =
>
> nstxout  = 0
> nstvout  = 0
> nstfout  = 0
> nstlog   = 1000
> nstenergy= 1000
> nstxout-compressed   = 0
> compressed-x-precision   = 100
> compressed-x-grps=
> nstpcouple  = 1
> energygrps   = system ;Protein non-Protein ;Water_and_ions
>
> nstlist  = 20
> ns_type  = grid
> pbc  = xyz
>
> rlist   = 1.4
> coulombtype  = PME
> pme-order  = 4
> fourierspacing  = 0.16
> rcoulomb   = 1.2
> epsilon_r   = 15
>
> cutoff-scheme  = Verlet
> vdw_type = Cut-off
> rvdw_switch  = 0.9
> vdw-modifier = Force-switch
> rvdw = 1.2
>
> constraints  = None

[gmx-users] tcaf

2019-07-18 Thread Amin Rouy
Hi everyone,

Concerning viscosity calculation with the TCAF method, I read in the manual that:

''The fit weights decay exponentially with time constant w (given with -wt)
as exp (−t/w), and the TCAF and fit are calculated up to time 5 ∗ w.''

So, 5*w is 25 ps for the default value. What is the relation between the
simulation time 't' and this 5*w? How long should the simulation time be?
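
For context, a minimal invocation looks something like this (only -wt comes from the manual text above; the file names are placeholders, and the trajectory needs to contain velocities):

gmx tcaf -f traj.trr -s topol.tpr -wt 5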

I will appreciate any response.

[gmx-users] core dump error in grompp command

2019-07-18 Thread 학생
Dear all,
I am trying to run a large coarse-grained biomolecular system which includes
about 800 million beads (about 500 million of them are water beads).
The .gro file of my system is about 35.62 GB. The problem is that although I'm
trying to run the grompp command to make the input .tpr file, I continuously
get this not-enough-memory error:
===
.
.
Excluding 1 bonded neighbours molecule type 'W'
Excluding 1 bonded neighbours molecule type 'WF'
Excluding 1 bonded neighbours molecule type 'W'
Excluding 1 bonded neighbours molecule type 'WF'
Removing all charge groups because cutoff-scheme=Verlet

---
Program gmx_mpi, VERSION 5.0.6
Source code file: 
/scratch/x1671a04/gromacs/gromacs-5.0.6/src/gromacs/utility/smalloc.c, line: 224

Fatal error:
Not enough memory. Failed to realloc -6970315816 bytes for b->a, b->a=ceb96010
(called from file 
/scratch/x1671a04/gromacs/gromacs-5.0.6/src/gromacs/gmxlib/index.c, line 153)
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
: Cannot allocate memory
Halting program gmx_mpi
===


or this segmentation fault error:
===
.
.
Excluding 1 bonded neighbours molecule type 'WF'
Excluding 1 bonded neighbours molecule type 'W'
Excluding 1 bonded neighbours molecule type 'NA'
Excluding 1 bonded neighbours molecule type 'WF'

NOTE 2 [file 11_billion.top, line 372]:
  System has non-zero total charge: 22320.00
  Total charge should normally be an integer. See
  http://www.gromacs.org/Documentation/Floating_Point_Arithmetic
  for discussion on how close it should be to an integer.


Removing all charge groups because cutoff-scheme=Verlet
/var/spool/slurm/d/job06849/slurm_script: line 10: 36729 Segmentation fault 
 (core dumped) gmx_mpi grompp -f minimization.mdp -c 11_billion.gro -p 
11_billion.top -o 11_billion.tpr
===


The GROMACS version that I'm using is 5.0.6. I tried GROMACS versions 5.0.6
and 2018.3, and grompp in both double and single precision, but neither worked.
This is my command line: gmx_mpi grompp -f minimization.mdp -c
waterbox_for100billion.gro -p 800_billions_only_water_box.top -o test.tpr
I'm running the grompp command on a CPU node which has 768 GB of memory. I tried
to find a method to generate the .tpr file with a parallel calculation, but I
couldn't, so I had to run grompp on a node with very large memory.
However, when I tracked my memory usage during grompp, the maximum memory usage
was only about 20% of the total available memory. Therefore, I guess it may not
be a problem of memory shortage.

The .mdp file that I used in grompp is for minimization, and the contents are 
like this:

define  = -DPOSRES
integrator   = steep
dt   = 0.01
nsteps   = 25000
nstcomm  = 100
comm-grps =

nstxout  = 0
nstvout  = 0
nstfout  = 0
nstlog   = 1000
nstenergy= 1000
nstxout-compressed   = 0
compressed-x-precision   = 100
compressed-x-grps=
nstpcouple  = 1
energygrps   = system ;Protein non-Protein ;Water_and_ions

nstlist  = 20
ns_type  = grid
pbc  = xyz

rlist   = 1.4
coulombtype  = PME
pme-order  = 4
fourierspacing  = 0.16
rcoulomb   = 1.2
epsilon_r   = 15

cutoff-scheme  = Verlet
vdw_type = Cut-off
rvdw_switch  = 0.9
vdw-modifier = Force-switch
rvdw = 1.2

constraints  = None
constraint_algorithm = Lincs
lincs_iter  = 1 ; accuracy of LINCS
lincs-order  = 4


What could be the reason for this error? Also, I'm wondering why GROMACS
does not fully use the available memory. Is there any option that restricts
the memory usage of GROMACS?

Any comment will be helpful.
Thank you for your time.
Sincerely,
Seyeong.


Re: [gmx-users] Drude force field

2019-07-18 Thread Justin Lemkul
On Thu, Jul 18, 2019 at 4:23 AM Gordan Horvat  wrote:

> I have found this in the NAMD manual:
>
> NAMD has the ability to load GROMACS ASCII topology (.top) and
> coordinate (.gro) files, which allows you to run most GROMACS
> simulations in NAMD.
> http://www.ks.uiuc.edu/Research/namd/2.9/ug/node14.html
>
> Is that applicable to the Drude input files prepared by the Gromacs
> Drude distribution?
>

No. To do Drude polarizable simulations in NAMD, you need a CHARMM PSF.

-Justin


> Gordan
>
> --
> Gordan Horvat
> Division of Physical Chemistry
> Department of Chemistry
> Faculty of Science, University of Zagreb
> Croatia
>
> On 18.7.2019. 08:33, gromacs.org_gmx-users-requ...@maillist.sys.kth.se
> wrote:
> > On Wed, Jul 17, 2019 at 8:58 PM Myunggi Yi
> wrote:
> >
> >> >Thank you Dr. Lemkel,
> >> >
> >> >I don't have ions in my simulation. It's a neutral system with a
> protein in
> >> >membrane bilayer with solvent.
> >> >I have downloaded the force field (Drude FF for charmm FF in Gromacs
> >> >format). to run the simulation with charmm FF in "Gromacs 2019.3".
> >> >However, it seems the format of the file does not match with the
> current
> >> >version.
> >> >
> >> >In the web,
> >> >
> >> >Compile and install as you would any other (post-5.0) GROMACS version.
> If
> >> >you attempt to use *ANY OTHER VERSION OF GROMACS, the Drude features
> will
> >> >not be accessible.*
> >> >
> >> >There are 5.0 and 5.1 series of Gromacs versions. Which one should I
> use?
> >> >
> >> >Or, it there a way to modify the force field format to use the current
> >> >version of Gromacs?, Then I will modify the format.
> >> >
> >> >
> > Read the information at the previous link more carefully. You cannot use
> > any released version of GROMACS. You must use the developmental version
> as
> > instructed in that link.
> >
> > -Justin
> >
> >
> >> >
> >> >On Thu, Jul 18, 2019 at 9:43 AM Justin Lemkul  wrote:
> >> >
> >>> > >
> >>> > >
> >>> > >On 7/17/19 8:39 PM, Myunggi Yi wrote:
>  > > >Dear users,
>  > > >
>  > > >I want to run a simulation with a polarizable force field.
>  > > >
>  > > >How and where can I get Drude force field for the current
> version of
>  > > >Gromacs?
> >>> > >
> >>> > >Everything you need to know:
> >>> > >
> >>> > >http://mackerell.umaryland.edu/charmm_drude_ff.shtml
> >>> > >
> >>> > >The implementation is not complete. If your system has ions, do not
> use
> >>> > >GROMACS due to the lack of NBTHOLE. In that case, use NAMD, CHARMM,
> or
> >>> > >OpenMM. The Drude model is still considered experimental, hence it
> is
> >>> > >not officially supported yet. There have been a lot of snags along
> the
> >>> > >way (mostly in my time to get the code up to par for official
> inclusion).
> >>> > >
> >>> > >-Justin
> >>> > >
> >>> > >--
> >>> > >==
> >>> > >
> >>> > >Justin A. Lemkul, Ph.D.
> >>> > >Assistant Professor
> >>> > >Office: 301 Fralin Hall
> >>> > >Lab: 303 Engel Hall
> >>> > >
> >>> > >Virginia Tech Department of Biochemistry
> >>> > >340 West Campus Dr.
> >>> > >Blacksburg, VA 24061
> >>> > >
> >>> > >jalem...@vt.edu  | (540) 231-3129
> >>> > >http://www.thelemkullab.com
> >>> > >
> >>> > >==
> >>> > >
> >>> > >--
> >>> > >Gromacs Users mailing list
> >>> > >
> >>> > >* Please search the archive at
> >>> > >http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List  before
> >>> > >posting!
> >>> > >
> >>> > >* Can't post? Readhttp://www.gromacs.org/Support/Mailing_Lists
> >>> > >
> >>> > >* For (un)subscribe requests visit
> >>> > >https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or
> >>> > >send a mail togmx-users-requ...@gromacs.org.
> >>> > >
> >> >--
> >> >Gromacs Users mailing list
> >> >
> >> >* Please search the archive at
> >> >http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List  before
> >> >posting!
> >> >
> >> >* Can't post? Readhttp://www.gromacs.org/Support/Mailing_Lists
> >> >
> >> >* For (un)subscribe requests visit
> >> >https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users  or
> >> >send a mail togmx-users-requ...@gromacs.org.
> >> >
> > -- == Justin A. Lemkul, Ph.D.
> > Assistant Professor Office: 301 Fralin Hall Lab: 303 Engel Hall
> > Virginia Tech Department of Biochemistry 340 West Campus Dr.
> > Blacksburg, VA 24061 jalem...@vt.edu | (540) 231-3129
> > http://www.thelemkullab.com ==
>
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 

==

Justin A. Lemkul, Ph.D.


Re: [gmx-users] decreased performance with free energy

2019-07-18 Thread David de Sancho
Thanks Szilárd
I have posted both in the Gist below for the free energy simulation
https://gist.github.com/daviddesancho/4abdc0d40e2355671ead7f8e40283b57
Could it have to do with the number of particles in the box that are affected
by the typeA -> typeB change?

David


Date: Wed, 17 Jul 2019 17:09:21 +0200
> From: Szilárd Páll 
> To: Discussion list for GROMACS users 
> Subject: Re: [gmx-users] decreased performance with free energy
> Message-ID:
> <
> cannyew4uszxnnwz56tzbqsjwkt3cu7pf+8hhfxa6nfug0o7...@mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi,
>
> Lower performance especially with GPUs is not unexpected, but what you report
> is unusually large. I suggest you post your mdp and log file, perhaps there
> are some things to improve.
>
> --
> Szilárd
>
>
> On Wed, Jul 17, 2019 at 3:47 PM David de Sancho 
> wrote:
>
> > Hi all
> > I have been doing some testing for Hamiltonian replica exchange using
> > Gromacs 2018.3 on a relatively simple system with 3000 atoms in a cubic
> > box.
> > For the modified hamiltonian I have simply modified the water
> interactions
> > by generating a typeB atom in the force field ffnonbonded.itp with
> > different parameters file and then creating a number of tpr files for
> > different lambda values as defined in the mdp files. The only difference
> > between mdp files for a simple NVT run and for the HREX runs are the
> > following lines:
> >
> > > ; H-REPLEX
> > > free-energy = yes
> > > init-lambda-state = 0
> > > nstdhdl = 0
> > > vdw_lambdas = 0.0 0.05 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
> >
> > I have tested for performance in the same machine and compared the
> standard
> > NVT run performance (~175 ns/day in 8 cores) with that for the free
> energy
> > tpr file (6.2 ns/day).
> > Is this performance loss what you would expect or are there any immediate
> > changes you can suggest to improve things? I have found a relatively old
> > post on this on Gromacs developers (
> https://redmine.gromacs.org/issues/742
> > ),
> > but I am not sure whether it is the exact same problem.
> > Thanks,
> >
> > David
> > --
> > Gromacs Users mailing list
>

Re: [gmx-users] Drude force field

2019-07-18 Thread Gordan Horvat

I have found this in the NAMD manual:

NAMD has the ability to load GROMACS ASCII topology (.top) and 
coordinate (.gro) files, which allows you to run most GROMACS 
simulations in NAMD.

http://www.ks.uiuc.edu/Research/namd/2.9/ug/node14.html

Is that applicable to the Drude input files prepared by the Gromacs 
Drude distribution?


Gordan

--
Gordan Horvat
Division of Physical Chemistry
Department of Chemistry
Faculty of Science, University of Zagreb
Croatia

On 18.7.2019. 08:33, gromacs.org_gmx-users-requ...@maillist.sys.kth.se 
wrote:

On Wed, Jul 17, 2019 at 8:58 PM Myunggi Yi  wrote:


>Thank you Dr. Lemkel,
>
>I don't have ions in my simulation. It's a neutral system with a protein in
>membrane bilayer with solvent.
>I have downloaded the force field (Drude FF for charmm FF in Gromacs
>format). to run the simulation with charmm FF in "Gromacs 2019.3".
>However, it seems the format of the file does not match with the current
>version.
>
>In the web,
>
>Compile and install as you would any other (post-5.0) GROMACS version. If
>you attempt to use *ANY OTHER VERSION OF GROMACS, the Drude features will
>not be accessible.*
>
>There are 5.0 and 5.1 series of Gromacs versions. Which one should I use?
>
>Or, it there a way to modify the force field format to use the current
>version of Gromacs?, Then I will modify the format.
>
>

Read the information at the previous link more carefully. You cannot use
any released version of GROMACS. You must use the developmental version as
instructed in that link.

-Justin



>
>On Thu, Jul 18, 2019 at 9:43 AM Justin Lemkul  wrote:
>

> >
> >
> >On 7/17/19 8:39 PM, Myunggi Yi wrote:

> > >Dear users,
> > >
> > >I want to run a simulation with a polarizable force field.
> > >
> > >How and where can I get Drude force field for the current version of
> > >Gromacs?

> >
> >Everything you need to know:
> >
> >http://mackerell.umaryland.edu/charmm_drude_ff.shtml
> >
> >The implementation is not complete. If your system has ions, do not use
> >GROMACS due to the lack of NBTHOLE. In that case, use NAMD, CHARMM, or
> >OpenMM. The Drude model is still considered experimental, hence it is
> >not officially supported yet. There have been a lot of snags along the
> >way (mostly in my time to get the code up to par for official inclusion).
> >
> >-Justin
> >
> >--
> >==
> >
> >Justin A. Lemkul, Ph.D.
> >Assistant Professor
> >Office: 301 Fralin Hall
> >Lab: 303 Engel Hall
> >
> >Virginia Tech Department of Biochemistry
> >340 West Campus Dr.
> >Blacksburg, VA 24061
> >
> >jalem...@vt.edu  | (540) 231-3129
> >http://www.thelemkullab.com
> >
> >==
> >
> >--
> >Gromacs Users mailing list
> >
> >* Please search the archive at
> >http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List  before
> >posting!
> >
> >* Can't post? Readhttp://www.gromacs.org/Support/Mailing_Lists
> >
> >* For (un)subscribe requests visit
> >https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users  or
> >send a mail togmx-users-requ...@gromacs.org.
> >

>--
>Gromacs Users mailing list
>
>* Please search the archive at
>http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List  before
>posting!
>
>* Can't post? Readhttp://www.gromacs.org/Support/Mailing_Lists
>
>* For (un)subscribe requests visit
>https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users  or
>send a mail togmx-users-requ...@gromacs.org.
>
--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==





Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-18 Thread Szilárd Páll
PS: You will get more PCIe lanes without motherboard trickery -- and note
that consumer motherboards with PCIe switches can sometimes cause
instabilities when under heavy compute load -- if you buy the aging and
quite overpriced i9 X-series like the i9-7920 with 12 cores or the
Threadripper 2950X with 16 cores and 60 PCIe lanes.

Also note that more cores always win when the CPU performance matters,
and while 8 cores are generally sufficient, in some use-cases they may not
be (like runs with free energy).

--
Szilárd


On Thu, Jul 18, 2019 at 10:08 AM Szilárd Páll 
wrote:

> On Wed, Jul 17, 2019 at 7:00 PM Moir, Michael (MMoir) 
> wrote:
>
>> This is not quite true.  I certainly observed this degradation in
>> performance using the 9900K with two GPUs as Szilárd states using a
>> motherboard with one PCIe controller, but the limitation is from the
>> motherboard not from the CPU.
>
>
> Sorry, but that's not the case. PCIe controllers have been integrated into
> CPUs for many years; see
>
> https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-introduction-basics-paper.pdf
>
> https://www.microway.com/hpc-tech-tips/common-pci-express-myths-gpu-computing/
>
> So no, the limitation is the CPU itself. Consumer CPUs these days have 24
> lanes total, some of which are used to connect the CPU to the chipset, and
> effectively you get 16-20 lanes (BTW here too the new AMD CPUs win as they
> provide 16 lanes for GPUs and similar devices and 4 lanes for NVMe, all on
> PCIe 4.0).
>
>
>>   It is possible to obtain a motherboard that contains two PCIe
>> controllers which overcomes this obstacle for not a whole lot more money.
>>
>
> It is possible to buy motherboards with PCIe switches. These don't
> increase the number of lanes; they just do what a switch does: as long as not all
> connected devices try to use the full capacity of the CPU (!) at the same
> time, you can get full speed on all connected devices.
> e.g.:
> https://techreport.com/r.x/2015_11_19_Gigabytes_Z170XGaming_G1_motherboard_reviewed/05-diagram_pcie_routing.gif
>
> Cheers,
> --
> Szilárd
>
> Mike
>>
>> -Original Message-
>> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
>> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of Szilárd
>> Páll
>> Sent: Wednesday, July 17, 2019 8:14 AM
>> To: Discussion list for GROMACS users 
>> Subject: [**EXTERNAL**] Re: [gmx-users] Xeon Gold + RTX 5000
>>
>> Hi Alex,
>>
>> I've not had a chance to test the new 3rd gen Ryzen CPUs, but all
>> public benchmarks out there point to the fact that they are a major
>> improvement over the previous generation Ryzen -- which were already
>> quite competitive for GPU-accelerated GROMACS runs compared to Intel,
>> especially in perf/price.
>>
>> One caveat for dual-GPU setups on the i9 9900 or the Ryzen 3900X is
>> that they don't have enough PCI lanes for peak CPU-GPU transfer (x8
>> for both of the GPUs) which will lead to a slightly less performance
>> (I'd estimate <5-10%) in particular compared to i) having a single GPU
>> plugged in into the machine ii) compare to CPUs like Threadripper or
>> the i9 79xx series processors which have more PCIe lanes.
>>
>> However, if throughput is the goal, the ideal use-case especially for
>> small simulation systems like <=50k atoms is to run e.g. 2 runs / GPU,
>> hence 4 runs on a 2-GPU system case in which the impact of the
>> aforementioned limitation will be further decreased.
>>
>> Cheers,
>> --
>> Szilárd
>>
>>
>> On Tue, Jul 16, 2019 at 7:18 PM Alex  wrote:
>> >
>> > That is excellent information, thank you. None of us have dealt with AMD
>> > CPUs in a while, so would the combination of a Ryzen 3900X and two
>> > Quadro 2080 Ti be a good choice?
>> >
>> > Again, thanks!
>> >
>> > Alex
>> >
>> >
>> > On 7/16/2019 8:41 AM, Szilárd Páll wrote:
>> > > Hi Alex,
>> > >
>> > > On Mon, Jul 15, 2019 at 8:53 PM Alex  wrote:
>> > >> Hi all and especially Szilard!
>> > >>
>> > >> My glorious management asked me to post this here. One of our group
>> > >> members, an ex-NAMD guy, wants to use Gromacs for biophysics and the
>> > >> following basics have been spec'ed for him:
>> > >>
>> > >> CPU: Xeon Gold 6244
>> > >> GPU: RTX 5000 or 6000
>> > >>
>> > >> I'll be surprised if he runs systems with more than 50K particles.
>> Could
>> > >> you please comment on whether this is a cost-efficient and reasonably
>> > >> powerful setup? Your past suggestions have been invaluable for us.
>> > > That will be reasonably fast, but cost efficiency will be awful, to
>> be honest:
>> > > - that CPU is a ~$3000 part and won't perform much better than a
>> > > $4-500 desktop CPU like an i9 9900, let alone a Ryzen 3900X which
>> > > would be significantly faster.
>> > > - Quadro cards also pretty low in bang for buck: a 2080 Ti will be
>> > > close to the RTX 6000 for ~5x less and the 2080 or 2070 Super a bit
>> > > slower for at least another 1.5x less.
>> > >
>> > > Single run at a time or possibly multiple? The 

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-18 Thread Szilárd Páll
On Wed, Jul 17, 2019 at 7:00 PM Moir, Michael (MMoir) 
wrote:

> This is not quite true.  I certainly observed this degradation in
> performance using the 9900K with two GPUs as Szilárd states using a
> motherboard with one PCIe controller, but the limitation is from the
> motherboard not from the CPU.


Sorry, but that's not the case. PCIe controllers have been integrated into
CPUs for many years; see
https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-introduction-basics-paper.pdf
https://www.microway.com/hpc-tech-tips/common-pci-express-myths-gpu-computing/

So no, the limitation is the CPU itself. Consumer CPUs these days have 24
lanes total, some of which are used to connect the CPU to the chipset, and
effectively you get 16-20 lanes (BTW here too the new AMD CPUs win as they
provide 16 lanes for GPUs and similar devices and 4 lanes for NVMe, all on
PCIe 4.0).


>   It is possible to obtain a motherboard that contains two PCIe
> controllers which overcomes this obstacle for not a whole lot more money.
>

It is possible to buy motherboards with PCIe switches. These don't increase
the number of lanes; they just do what a switch does: as long as not all
connected devices try to use the full capacity of the CPU (!) at the same
time, you can get full speed on all connected devices.
e.g.:
https://techreport.com/r.x/2015_11_19_Gigabytes_Z170XGaming_G1_motherboard_reviewed/05-diagram_pcie_routing.gif

Cheers,
--
Szilárd

Mike
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of Szilárd
> Páll
> Sent: Wednesday, July 17, 2019 8:14 AM
> To: Discussion list for GROMACS users 
> Subject: [**EXTERNAL**] Re: [gmx-users] Xeon Gold + RTX 5000
>
> Hi Alex,
>
> I've not had a chance to test the new 3rd gen Ryzen CPUs, but all
> public benchmarks out there point to the fact that they are a major
> improvement over the previous generation Ryzen -- which were already
> quite competitive for GPU-accelerated GROMACS runs compared to Intel,
> especially in perf/price.
>
> One caveat for dual-GPU setups on the i9 9900 or the Ryzen 3900X is
> that they don't have enough PCI lanes for peak CPU-GPU transfer (x8
> for both of the GPUs) which will lead to a slightly less performance
> (I'd estimate <5-10%) in particular compared to i) having a single GPU
> plugged in into the machine ii) compare to CPUs like Threadripper or
> the i9 79xx series processors which have more PCIe lanes.
>
> However, if throughput is the goal, the ideal use-case especially for
> small simulation systems like <=50k atoms is to run e.g. 2 runs / GPU,
> hence 4 runs on a 2-GPU system case in which the impact of the
> aforementioned limitation will be further decreased.
>
> Cheers,
> --
> Szilárd
>
>
> On Tue, Jul 16, 2019 at 7:18 PM Alex  wrote:
> >
> > That is excellent information, thank you. None of us have dealt with AMD
> > CPUs in a while, so would the combination of a Ryzen 3900X and two
> > Quadro 2080 Ti be a good choice?
> >
> > Again, thanks!
> >
> > Alex
> >
> >
> > On 7/16/2019 8:41 AM, Szilárd Páll wrote:
> > > Hi Alex,
> > >
> > > On Mon, Jul 15, 2019 at 8:53 PM Alex  wrote:
> > >> Hi all and especially Szilard!
> > >>
> > >> My glorious management asked me to post this here. One of our group
> > >> members, an ex-NAMD guy, wants to use Gromacs for biophysics and the
> > >> following basics have been spec'ed for him:
> > >>
> > >> CPU: Xeon Gold 6244
> > >> GPU: RTX 5000 or 6000
> > >>
> > >> I'll be surprised if he runs systems with more than 50K particles.
> Could
> > >> you please comment on whether this is a cost-efficient and reasonably
> > >> powerful setup? Your past suggestions have been invaluable for us.
> > > That will be reasonably fast, but cost efficiency will be awful, to be
> honest:
> > > - that CPU is a ~$3000 part and won't perform much better than a
> > > $4-500 desktop CPU like an i9 9900, let alone a Ryzen 3900X which
> > > would be significantly faster.
> > > - Quadro cards also pretty low in bang for buck: a 2080 Ti will be
> > > close to the RTX 6000 for ~5x less and the 2080 or 2070 Super a bit
> > > slower for at least another 1.5x less.
> > >
> > > Single run at a time or possibly multiple? The proposed (or any 8+
> > > core) workstation CPU is fast enough in the majority of the
> > > simulations to pair well with two of those GPUs if used for two
> > > concurrent simulations. If that's a relevant use-case, I'd recommend
> > > two 2070 Super or 2080 cards.
> > >
> > > Cheers,
> > > --
> > > Szilárd
> > >
> > >
> > >> Thank you,
> > >>
> > >> Alex
> > >> --
> > >> Gromacs Users mailing list
> > >>
> > >> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> > >>
> > >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >>
> > >> * For (un)subscribe requests visit
> > >> 

[gmx-users] remd error

2019-07-18 Thread Bratin Kumar Das
Hi,
   I am running remd simulation in gromacs-2016.5. After generating the
multiple .tpr file in each directory by the following command
*for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -p
topol.top -o remd$i.tpr -maxwarn 1; cd ..; done*
I run *mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000
-reseed 175320 -deffnm remd_equil*
It is giving the following error
There are not enough slots available in the system to satisfy the 40 slots
that were requested by the application:
  gmx_mpi

Either request fewer slots for your application, or make more slots
available
for use.
--
--
There are not enough slots available in the system to satisfy the 40 slots
that were requested by the application:
  gmx_mpi

Either request fewer slots for your application, or make more slots
available
for use.
--
I do not understand the error. Any suggestion will be highly
appreciated. The mdp file and the qsub.sh file are attached below.

qsub.sh...
#! /bin/bash
#PBS -V
#PBS -l nodes=2:ppn=20
#PBS -l walltime=48:00:00
#PBS -N mdrun-serial
#PBS -j oe
#PBS -o output.log
#PBS -e error.log
#cd /home/bratin/Downloads/GROMACS/Gromacs_fibril
cd $PBS_O_WORKDIR
module load openmpi3.0.0
module load gromacs-2016.5
NP='cat $PBS_NODEFILE | wc -1'
# mpirun --machinefile $PBS_PBS_NODEFILE -np $NP 'which gmx_mpi' mdrun -v
-s nvt.tpr -deffnm nvt
#/apps/gromacs-2016.5/bin/mpirun -np 8 gmx_mpi mdrun -v -s remd.tpr -multi
8 -replex 1000 -deffnm remd_out
for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -r
em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done

for i in {0..7}; do cd equil${i}; mpirun -np 40 gmx_mpi mdrun -v -s
remd.tpr -multi 8 -replex 1000 -deffnm remd$i_out ; cd ..; done

Re: [gmx-users] Warning

2019-07-18 Thread Bratin Kumar Das
Hi Mark,
It came from the grompp command... in the mdp file I have constraints =
all-bonds. Is it coming from that?

On Mon, Jul 15, 2019 at 4:57 PM Quyen Vu  wrote:

> Hi,
> I think he got this warning because he did not constrain the H-bonds in his
> simulation while using a timestep of 2 fs
>
> On Mon, Jul 15, 2019 at 11:20 AM Mark Abraham 
> wrote:
>
> > Hi,
> >
> > As it says, your time step is too large to be a valid model of that bond
> > interaction. Where did it come from?
> >
> > Mark
> >
> > On Mon., 15 Jul. 2019, 10:32 Bratin Kumar Das, <
> > 177cy500.bra...@nitk.edu.in>
> > wrote:
> >
> > > Dear All,
> > >  I am getting the following error during running REMD
> > > The bond in molecule-type Protein between atoms 49 OH and 50 HH has an
> > >   estimated oscillational period of 9.0e-03 ps, which is less than 5
> > times
> > >   the time step of 2.0e-03 ps.
> > >   Maybe you forgot to change the constraints mdp option.
> > > What could be the reason.
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at
> > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > > send a mail to gmx-users-requ...@gromacs.org.
> > >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>