Re: [gmx-users] Drude force field

2019-07-17 Thread Justin Lemkul
On Wed, Jul 17, 2019 at 8:58 PM Myunggi Yi  wrote:

> Thank you Dr. Lemkul,
>
> I don't have ions in my simulation. It's a neutral system with a protein in
> membrane bilayer with solvent.
> I have downloaded the force field (Drude FF for CHARMM in GROMACS
> format) to run the simulation with the CHARMM Drude FF in "Gromacs 2019.3".
> However, it seems the format of the files does not match the current
> version.
>
> On the web, the instructions say:
>
> Compile and install as you would any other (post-5.0) GROMACS version. If
> you attempt to use *ANY OTHER VERSION OF GROMACS, the Drude features will
> not be accessible.*
>
> There are 5.0 and 5.1 series of Gromacs versions. Which one should I use?
>
> Or, is there a way to modify the force field format so that it works with the
> current version of Gromacs? If so, I will modify the format.
>
>
Read the information at the previous link more carefully. You cannot use
any released version of GROMACS. You must use the developmental version as
instructed in that link.
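
For reference, a minimal build sketch, assuming the developmental source has
been obtained exactly as described on the MacKerell page (the clone target
below is a placeholder, not a real repository address):

# placeholder URL -- take the actual repository/branch from the Drude FF page
git clone <drude-development-repo> gromacs-drude
cd gromacs-drude && mkdir build && cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-drude
make -j 8 && make install
source $HOME/gromacs-drude/bin/GMXRC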

-Justin


>
> On Thu, Jul 18, 2019 at 9:43 AM Justin Lemkul  wrote:
>
> >
> >
> > On 7/17/19 8:39 PM, Myunggi Yi wrote:
> > > Dear users,
> > >
> > > I want to run a simulation with a polarizable force field.
> > >
> > > How and where can I get Drude force field for the current version of
> > > Gromacs?
> >
> > Everything you need to know:
> >
> > http://mackerell.umaryland.edu/charmm_drude_ff.shtml
> >
> > The implementation is not complete. If your system has ions, do not use
> > GROMACS due to the lack of NBTHOLE. In that case, use NAMD, CHARMM, or
> > OpenMM. The Drude model is still considered experimental, hence it is
> > not officially supported yet. There have been a lot of snags along the
> > way (mostly in my time to get the code up to par for official inclusion).
> >
> > -Justin
> >
> > --
> > ==
> >
> > Justin A. Lemkul, Ph.D.
> > Assistant Professor
> > Office: 301 Fralin Hall
> > Lab: 303 Engel Hall
> >
> > Virginia Tech Department of Biochemistry
> > 340 West Campus Dr.
> > Blacksburg, VA 24061
> >
> > jalem...@vt.edu | (540) 231-3129
> > http://www.thelemkullab.com
> >
> > ==
> >
-- 

==

Justin A. Lemkul, Ph.D.

Assistant Professor

Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com


==


Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Alex
Perfect, thanks a lot! We are less constrained by cost, so we'll go 
straight to 2080Ti. You guys already saved us a few grand here. ;)


Alex

On 7/17/2019 7:34 PM, Moir, Michael (MMoir) wrote:

Alex,
The motherboard I am using with the 9900K is the ASUS WS Z390 PRO.  The PRO
version has the extra PCIe controller.  I am using two GTX 1070 Ti GPUs, and I can
hear everyone snorting with derision, but with this configuration I get
performance with 100,000 atoms of about 72 ns/day with 2019.1, which is adequate
for my needs.  I’ll upgrade to the 2080 Ti when the price drops to <$1000.
Another tip is to get the highest-speed memory that the motherboard will handle.
It doesn’t make a huge difference, maybe 2-3% over the cheapest memory, but it is
something that is easy and low cost to do.

Mike

Sent from my iPhone


On Jul 17, 2019, at 11:16 AM, Alex  wrote:

Gentlemen, thank you both!

Michael, would you be able to suggest a specific motherboard that removes the 
bottleneck? We aren't really limited by price in this case and would prefer to 
get every bit of benefit out of the processing components, if possible.

Thanks,

Alex


On 7/17/2019 10:44 AM, Moir, Michael (MMoir) wrote:
This is not quite true.  I certainly observed the degradation in performance that
Szilárd describes when using the 9900K with two GPUs on a motherboard with one
PCIe controller, but the limitation comes from the motherboard, not from the CPU.
It is possible to obtain a motherboard that contains two PCIe controllers, which
overcomes this obstacle for not a whole lot more money.

Mike

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Szilárd Páll
Sent: Wednesday, July 17, 2019 8:14 AM
To: Discussion list for GROMACS users 
Subject: [**EXTERNAL**] Re: [gmx-users] Xeon Gold + RTX 5000

Hi Alex,

I've not had a chance to test the new 3rd gen Ryzen CPUs, but all
public benchmarks out there point to the fact that they are a major
improvement over the previous generation Ryzen -- which were already
quite competitive for GPU-accelerated GROMACS runs compared to Intel,
especially in perf/price.

One caveat for dual-GPU setups on the i9 9900 or the Ryzen 3900X is
that they don't have enough PCIe lanes for peak CPU-GPU transfer (x8
for both of the GPUs), which will lead to slightly lower performance
(I'd estimate <5-10%), in particular compared to i) having a single GPU
plugged into the machine or ii) CPUs like Threadripper or
the i9 79xx series processors, which have more PCIe lanes.

However, if throughput is the goal, the ideal use case, especially for
small simulation systems like <=50k atoms, is to run e.g. 2 runs / GPU,
hence 4 runs on a 2-GPU system, in which case the impact of the
aforementioned limitation will be further decreased.

Cheers,
--
Szilárd



On Tue, Jul 16, 2019 at 7:18 PM Alex  wrote:
That is excellent information, thank you. None of us have dealt with AMD
CPUs in a while, so would the combination of a Ryzen 3900X and two
Quadro 2080 Ti be a good choice?

Again, thanks!

Alex



On 7/16/2019 8:41 AM, Szilárd Páll wrote:
Hi Alex,


On Mon, Jul 15, 2019 at 8:53 PM Alex  wrote:
Hi all and especially Szilard!

My glorious management asked me to post this here. One of our group
members, an ex-NAMD guy, wants to use Gromacs for biophysics and the
following basics have been spec'ed for him:

CPU: Xeon Gold 6244
GPU: RTX 5000 or 6000

I'll be surprised if he runs systems with more than 50K particles. Could
you please comment on whether this is a cost-efficient and reasonably
powerful setup? Your past suggestions have been invaluable for us.

That will be reasonably fast, but cost efficiency will be awful, to be honest:
- that CPU is a ~$3000 part and won't perform much better than a
$400-500 desktop CPU like an i9 9900, let alone a Ryzen 3900X, which
would be significantly faster.
- Quadro cards are also pretty low in bang for buck: a 2080 Ti will be
close to the RTX 6000 for ~5x less, and the 2080 or 2070 Super a bit
slower for at least another 1.5x less.

Single run at a time or possibly multiple? The proposed (or any 8+
core) workstation CPU is fast enough in the majority of
simulations to pair well with two of those GPUs if used for two
concurrent simulations. If that's a relevant use case, I'd recommend
two 2070 Super or 2080 cards.

Cheers,
--
Szilárd



Thank you,

Alex

Re: [gmx-users] Drude force field

2019-07-17 Thread Myunggi Yi
I've got the following error.

It seems this version of GROMACS does not recognize the Drude force field
format distributed by MacKerell.

There are additional directives like [ anisotropic_polarization ], etc.,
and GROMACS reads these as residue names.




Program gmx, VERSION 5.0.7
Source code file:
/data/cluster/apps/gromacs/source/gromacs-5.0.7/src/gromacs/gmxpreprocess/resall.c,
line: 488

Fatal error:
in .rtp file in residue anisotropic_polarization at line:
 DBR6   BR6C6C1C5 1.8000 0.6000 0.6000




On Thu, Jul 18, 2019 at 9:43 AM Justin Lemkul  wrote:

>
>
> On 7/17/19 8:39 PM, Myunggi Yi wrote:
> > Dear users,
> >
> > I want to run a simulation with a polarizable force field.
> >
> > How and where can I get Drude force field for the current version of
> > Gromacs?
>
> Everything you need to know:
>
> http://mackerell.umaryland.edu/charmm_drude_ff.shtml
>
> The implementation is not complete. If your system has ions, do not use
> GROMACS due to the lack of NBTHOLE. In that case, use NAMD, CHARMM, or
> OpenMM. The Drude model is still considered experimental, hence it is
> not officially supported yet. There have been a lot of snags along the
> way (mostly in my time to get the code up to par for official inclusion).
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Moir, Michael (MMoir)
Alex,
The motherboard I am using with the 9900K is the ASUS WS Z390 PRO.  The PRO
version has the extra PCIe controller.  I am using two GTX 1070 Ti GPUs, and I can
hear everyone snorting with derision, but with this configuration I get
performance with 100,000 atoms of about 72 ns/day with 2019.1, which is adequate
for my needs.  I’ll upgrade to the 2080 Ti when the price drops to <$1000.
Another tip is to get the highest-speed memory that the motherboard will handle.
It doesn’t make a huge difference, maybe 2-3% over the cheapest memory, but it is
something that is easy and low cost to do.

Mike

Sent from my iPhone

> On Jul 17, 2019, at 11:16 AM, Alex  wrote:
>
> Gentlemen, thank you both!
>
> Michael, would you be able to suggest a specific motherboard that removes the 
> bottleneck? We aren't really limited by price in this case and would prefer 
> to get every bit of benefit out of the processing components, if possible.
>
> Thanks,
>
> Alex
>
>> On 7/17/2019 10:44 AM, Moir, Michael (MMoir) wrote:
>> This is not quite true.  I certainly observed this degradation in 
>> performance using the 9900K with two GPUs as Szilárd states using a 
>> motherboard with one PCIe controller, but the limitation is from the 
>> motherboard not from the CPU.  It is possible to obtain a motherboard that 
>> contains two PCIe controllers which overcomes this obstacle for not a whole 
>> lot more money.
>>
>> Mike
>>
>> -Original Message-
>> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
>>  On Behalf Of Szilárd Páll
>> Sent: Wednesday, July 17, 2019 8:14 AM
>> To: Discussion list for GROMACS users 
>> Subject: [**EXTERNAL**] Re: [gmx-users] Xeon Gold + RTX 5000
>>
>> Hi Alex,
>>
>> I've not had a chance to test the new 3rd gen Ryzen CPUs, but all
>> public benchmarks out there point to the fact that they are a major
>> improvement over the previous generation Ryzen -- which were already
>> quite competitive for GPU-accelerated GROMACS runs compared to Intel,
>> especially in perf/price.
>>
>> One caveat for dual-GPU setups on the i9 9900 or the Ryzen 3900X is
>> that they don't have enough PCI lanes for peak CPU-GPU transfer (x8
>> for both of the GPUs) which will lead to a slightly less performance
>> (I'd estimate <5-10%) in particular compared to i) having a single GPU
>> plugged in into the machine ii) compare to CPUs like Threadripper or
>> the i9 79xx series processors which have more PCIe lanes.
>>
>> However, if throughput is the goal, the ideal use-case especially for
>> small simulation systems like <=50k atoms is to run e.g. 2 runs / GPU,
>> hence 4 runs on a 2-GPU system case in which the impact of the
>> aforementioned limitation will be further decreased.
>>
>> Cheers,
>> --
>> Szilárd
>>
>>
>>> On Tue, Jul 16, 2019 at 7:18 PM Alex  wrote:
>>> That is excellent information, thank you. None of us have dealt with AMD
>>> CPUs in a while, so would the combination of a Ryzen 3900X and two
>>> Quadro 2080 Ti be a good choice?
>>>
>>> Again, thanks!
>>>
>>> Alex
>>>
>>>
 On 7/16/2019 8:41 AM, Szilárd Páll wrote:
 Hi Alex,

> On Mon, Jul 15, 2019 at 8:53 PM Alex  wrote:
> Hi all and especially Szilard!
>
> My glorious management asked me to post this here. One of our group
> members, an ex-NAMD guy, wants to use Gromacs for biophysics and the
> following basics have been spec'ed for him:
>
> CPU: Xeon Gold 6244
> GPU: RTX 5000 or 6000
>
> I'll be surprised if he runs systems with more than 50K particles. Could
> you please comment on whether this is a cost-efficient and reasonably
> powerful setup? Your past suggestions have been invaluable for us.
 That will be reasonably fast, but cost efficiency will be awful, to be 
 honest:
 - that CPU is a ~$3000 part and won't perform much better than a
 $4-500 desktop CPU like an i9 9900, let alone a Ryzen 3900X which
 would be significantly faster.
 - Quadro cards also pretty low in bang for buck: a 2080 Ti will be
 close to the RTX 6000 for ~5x less and the 2080 or 2070 Super a bit
 slower for at least another 1.5x less.

 Single run at a time or possibly multiple? The proposed (or any 8+
 core) workstation CPU is fast enough in the majority of the
 simulations to pair well with two of those GPUs if used for two
 concurrent simulations. If that's a relevant use-case, I'd recommend
 two 2070 Super or 2080 cards.

 Cheers,
 --
 Szilárd


> Thank you,
>
> Alex

Re: [gmx-users] Drude force field

2019-07-17 Thread Myunggi Yi
Thank you Dr. Lemkul,

I don't have ions in my simulation. It's a neutral system with a protein in
membrane bilayer with solvent.
I have downloaded the force field (Drude FF for CHARMM in GROMACS
format) to run the simulation with the CHARMM Drude FF in "Gromacs 2019.3".
However, it seems the format of the files does not match the current
version.

On the web, the instructions say:

Compile and install as you would any other (post-5.0) GROMACS version. If
you attempt to use *ANY OTHER VERSION OF GROMACS, the Drude features will
not be accessible.*

There are 5.0 and 5.1 series of Gromacs versions. Which one should I use?

Or, is there a way to modify the force field format so that it works with the
current version of Gromacs? If so, I will modify the format.



On Thu, Jul 18, 2019 at 9:43 AM Justin Lemkul  wrote:

>
>
> On 7/17/19 8:39 PM, Myunggi Yi wrote:
> > Dear users,
> >
> > I want to run a simulation with a polarizable force field.
> >
> > How and where can I get Drude force field for the current version of
> > Gromacs?
>
> Everything you need to know:
>
> http://mackerell.umaryland.edu/charmm_drude_ff.shtml
>
> The implementation is not complete. If your system has ions, do not use
> GROMACS due to the lack of NBTHOLE. In that case, use NAMD, CHARMM, or
> OpenMM. The Drude model is still considered experimental, hence it is
> not officially supported yet. There have been a lot of snags along the
> way (mostly in my time to get the code up to par for official inclusion).
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


Re: [gmx-users] Drude force field

2019-07-17 Thread Myunggi Yi
Thank you.


On Thu, Jul 18, 2019 at 9:43 AM Justin Lemkul  wrote:

>
>
> On 7/17/19 8:39 PM, Myunggi Yi wrote:
> > Dear users,
> >
> > I want to run a simulation with a polarizable force field.
> >
> > How and where can I get Drude force field for the current version of
> > Gromacs?
>
> Everything you need to know:
>
> http://mackerell.umaryland.edu/charmm_drude_ff.shtml
>
> The implementation is not complete. If your system has ions, do not use
> GROMACS due to the lack of NBTHOLE. In that case, use NAMD, CHARMM, or
> OpenMM. The Drude model is still considered experimental, hence it is
> not officially supported yet. There have been a lot of snags along the
> way (mostly in my time to get the code up to par for official inclusion).
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


Re: [gmx-users] Drude force field

2019-07-17 Thread Justin Lemkul




On 7/17/19 8:39 PM, Myunggi Yi wrote:

Dear users,

I want to run a simulation with a polarizable force field.

How and where can I get Drude force field for the current version of
Gromacs?


Everything you need to know:

http://mackerell.umaryland.edu/charmm_drude_ff.shtml

The implementation is not complete. If your system has ions, do not use 
GROMACS due to the lack of NBTHOLE. In that case, use NAMD, CHARMM, or 
OpenMM. The Drude model is still considered experimental, hence it is 
not officially supported yet. There have been a lot of snags along the 
way (mostly in my time to get the code up to par for official inclusion).


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] Drude force field

2019-07-17 Thread Myunggi Yi
Dear users,

I want to run a simulation with a polarizable force field.

How and where can I get Drude force field for the current version of
Gromacs?

Thank you.


Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Moir, Michael (MMoir)
Certainly.  When I get home this evening I will post the information.

Mike

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Alex
Sent: Wednesday, July 17, 2019 11:16 AM
To: gmx-us...@gromacs.org
Subject: [**EXTERNAL**] Re: [gmx-users] Xeon Gold + RTX 5000

Gentlemen, thank you both!

Michael, would you be able to suggest a specific motherboard that
removes the bottleneck? We aren't really limited by price in this case
and would prefer to get every bit of benefit out of the processing
components, if possible.

Thanks,

Alex

On 7/17/2019 10:44 AM, Moir, Michael (MMoir) wrote:
> This is not quite true.  I certainly observed this degradation in performance 
> using the 9900K with two GPUs as Szilárd states using a motherboard with one 
> PCIe controller, but the limitation is from the motherboard not from the CPU. 
>  It is possible to obtain a motherboard that contains two PCIe controllers 
> which overcomes this obstacle for not a whole lot more money.
>
> Mike
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
>  On Behalf Of Szilárd Páll
> Sent: Wednesday, July 17, 2019 8:14 AM
> To: Discussion list for GROMACS users 
> Subject: [**EXTERNAL**] Re: [gmx-users] Xeon Gold + RTX 5000
>
> Hi Alex,
>
> I've not had a chance to test the new 3rd gen Ryzen CPUs, but all
> public benchmarks out there point to the fact that they are a major
> improvement over the previous generation Ryzen -- which were already
> quite competitive for GPU-accelerated GROMACS runs compared to Intel,
> especially in perf/price.
>
> One caveat for dual-GPU setups on the i9 9900 or the Ryzen 3900X is
> that they don't have enough PCI lanes for peak CPU-GPU transfer (x8
> for both of the GPUs) which will lead to a slightly less performance
> (I'd estimate <5-10%) in particular compared to i) having a single GPU
> plugged in into the machine ii) compare to CPUs like Threadripper or
> the i9 79xx series processors which have more PCIe lanes.
>
> However, if throughput is the goal, the ideal use-case especially for
> small simulation systems like <=50k atoms is to run e.g. 2 runs / GPU,
> hence 4 runs on a 2-GPU system case in which the impact of the
> aforementioned limitation will be further decreased.
>
> Cheers,
> --
> Szilárd
>
>
> On Tue, Jul 16, 2019 at 7:18 PM Alex  wrote:
>> That is excellent information, thank you. None of us have dealt with AMD
>> CPUs in a while, so would the combination of a Ryzen 3900X and two
>> Quadro 2080 Ti be a good choice?
>>
>> Again, thanks!
>>
>> Alex
>>
>>
>> On 7/16/2019 8:41 AM, Szilárd Páll wrote:
>>> Hi Alex,
>>>
>>> On Mon, Jul 15, 2019 at 8:53 PM Alex  wrote:
 Hi all and especially Szilard!

 My glorious management asked me to post this here. One of our group
 members, an ex-NAMD guy, wants to use Gromacs for biophysics and the
 following basics have been spec'ed for him:

 CPU: Xeon Gold 6244
 GPU: RTX 5000 or 6000

 I'll be surprised if he runs systems with more than 50K particles. Could
 you please comment on whether this is a cost-efficient and reasonably
 powerful setup? Your past suggestions have been invaluable for us.
>>> That will be reasonably fast, but cost efficiency will be awful, to be 
>>> honest:
>>> - that CPU is a ~$3000 part and won't perform much better than a
>>> $4-500 desktop CPU like an i9 9900, let alone a Ryzen 3900X which
>>> would be significantly faster.
>>> - Quadro cards also pretty low in bang for buck: a 2080 Ti will be
>>> close to the RTX 6000 for ~5x less and the 2080 or 2070 Super a bit
>>> slower for at least another 1.5x less.
>>>
>>> Single run at a time or possibly multiple? The proposed (or any 8+
>>> core) workstation CPU is fast enough in the majority of the
>>> simulations to pair well with two of those GPUs if used for two
>>> concurrent simulations. If that's a relevant use-case, I'd recommend
>>> two 2070 Super or 2080 cards.
>>>
>>> Cheers,
>>> --
>>> Szilárd
>>>
>>>
 Thank you,

 Alex

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Alex

Gentlemen, thank you both!

Michael, would you be able to suggest a specific motherboard that 
removes the bottleneck? We aren't really limited by price in this case 
and would prefer to get every bit of benefit out of the processing 
components, if possible.


Thanks,

Alex

On 7/17/2019 10:44 AM, Moir, Michael (MMoir) wrote:

This is not quite true.  I certainly observed the degradation in performance that
Szilárd describes when using the 9900K with two GPUs on a motherboard with one
PCIe controller, but the limitation comes from the motherboard, not from the CPU.
It is possible to obtain a motherboard that contains two PCIe controllers, which
overcomes this obstacle for not a whole lot more money.

Mike

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Szilárd Páll
Sent: Wednesday, July 17, 2019 8:14 AM
To: Discussion list for GROMACS users 
Subject: [**EXTERNAL**] Re: [gmx-users] Xeon Gold + RTX 5000

Hi Alex,

I've not had a chance to test the new 3rd gen Ryzen CPUs, but all
public benchmarks out there point to the fact that they are a major
improvement over the previous generation Ryzen -- which were already
quite competitive for GPU-accelerated GROMACS runs compared to Intel,
especially in perf/price.

One caveat for dual-GPU setups on the i9 9900 or the Ryzen 3900X is
that they don't have enough PCIe lanes for peak CPU-GPU transfer (x8
for both of the GPUs), which will lead to slightly lower performance
(I'd estimate <5-10%), in particular compared to i) having a single GPU
plugged into the machine or ii) CPUs like Threadripper or
the i9 79xx series processors, which have more PCIe lanes.

However, if throughput is the goal, the ideal use case, especially for
small simulation systems like <=50k atoms, is to run e.g. 2 runs / GPU,
hence 4 runs on a 2-GPU system, in which case the impact of the
aforementioned limitation will be further decreased.

Cheers,
--
Szilárd


On Tue, Jul 16, 2019 at 7:18 PM Alex  wrote:

That is excellent information, thank you. None of us have dealt with AMD
CPUs in a while, so would the combination of a Ryzen 3900X and two
Quadro 2080 Ti be a good choice?

Again, thanks!

Alex


On 7/16/2019 8:41 AM, Szilárd Páll wrote:

Hi Alex,

On Mon, Jul 15, 2019 at 8:53 PM Alex  wrote:

Hi all and especially Szilard!

My glorious management asked me to post this here. One of our group
members, an ex-NAMD guy, wants to use Gromacs for biophysics and the
following basics have been spec'ed for him:

CPU: Xeon Gold 6244
GPU: RTX 5000 or 6000

I'll be surprised if he runs systems with more than 50K particles. Could
you please comment on whether this is a cost-efficient and reasonably
powerful setup? Your past suggestions have been invaluable for us.

That will be reasonably fast, but cost efficiency will be awful, to be honest:
- that CPU is a ~$3000 part and won't perform much better than a
$400-500 desktop CPU like an i9 9900, let alone a Ryzen 3900X, which
would be significantly faster.
- Quadro cards are also pretty low in bang for buck: a 2080 Ti will be
close to the RTX 6000 for ~5x less, and the 2080 or 2070 Super a bit
slower for at least another 1.5x less.

Single run at a time or possibly multiple? The proposed (or any 8+
core) workstation CPU is fast enough in the majority of
simulations to pair well with two of those GPUs if used for two
concurrent simulations. If that's a relevant use case, I'd recommend
two 2070 Super or 2080 cards.

Cheers,
--
Szilárd



Thank you,

Alex

[gmx-users] make manual fails

2019-07-17 Thread Michael Brunsteiner
hi,
so I say:

prompt> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_C_COMPILER=gcc-7 -DCMAKE_CXX_COMPILER=g++-7 -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=/home/michael/local/gromacs-2019-3-bin -DGMX_BUILD_MANUAL=on
prompt> make -j 4
prompt> make install
prompt> make manual

manual cannot be built because Sphinx expected minimum version 1.6.1 is not available

although I seem to have version 1.8.4 (see below)

prompt> apt policy python-sphinx 
python-sphinx:
  Installed: 1.8.4-1
  Candidate: 1.8.4-1
  Version table:
 *** 1.8.4-1 500
    500 http://ftp.at.debian.org/debian buster/main amd64 Packages
    500 http://ftp.at.debian.org/debian buster/main i386 Packages
    100 /var/lib/dpkg/status

anybody else seen this issue?

cheers,
michael
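
One thing worth checking is which sphinx-build the GROMACS build actually
picked up; a minimal sketch, assuming the Sphinx detection honours the
SPHINX_EXECUTABLE cache variable (verify against the install guide):

sphinx-build --version                  # what is on PATH?
cmake .. -DSPHINX_EXECUTABLE=$(which sphinx-build) -DGMX_BUILD_MANUAL=on
make manual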



=== Why be happy when you could be normal?

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Moir, Michael (MMoir)
This is not quite true.  I certainly observed the degradation in performance that
Szilárd describes when using the 9900K with two GPUs on a motherboard with one
PCIe controller, but the limitation comes from the motherboard, not from the CPU.
It is possible to obtain a motherboard that contains two PCIe controllers, which
overcomes this obstacle for not a whole lot more money.

Mike

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Szilárd Páll
Sent: Wednesday, July 17, 2019 8:14 AM
To: Discussion list for GROMACS users 
Subject: [**EXTERNAL**] Re: [gmx-users] Xeon Gold + RTX 5000

Hi Alex,

I've not had a chance to test the new 3rd gen Ryzen CPUs, but all
public benchmarks out there point to the fact that they are a major
improvement over the previous generation Ryzen -- which were already
quite competitive for GPU-accelerated GROMACS runs compared to Intel,
especially in perf/price.

One caveat for dual-GPU setups on the i9 9900 or the Ryzen 3900X is
that they don't have enough PCIe lanes for peak CPU-GPU transfer (x8
for both of the GPUs), which will lead to slightly lower performance
(I'd estimate <5-10%), in particular compared to i) having a single GPU
plugged into the machine or ii) CPUs like Threadripper or
the i9 79xx series processors, which have more PCIe lanes.

However, if throughput is the goal, the ideal use case, especially for
small simulation systems like <=50k atoms, is to run e.g. 2 runs / GPU,
hence 4 runs on a 2-GPU system, in which case the impact of the
aforementioned limitation will be further decreased.

Cheers,
--
Szilárd


On Tue, Jul 16, 2019 at 7:18 PM Alex  wrote:
>
> That is excellent information, thank you. None of us have dealt with AMD
> CPUs in a while, so would the combination of a Ryzen 3900X and two
> Quadro 2080 Ti be a good choice?
>
> Again, thanks!
>
> Alex
>
>
> On 7/16/2019 8:41 AM, Szilárd Páll wrote:
> > Hi Alex,
> >
> > On Mon, Jul 15, 2019 at 8:53 PM Alex  wrote:
> >> Hi all and especially Szilard!
> >>
> >> My glorious management asked me to post this here. One of our group
> >> members, an ex-NAMD guy, wants to use Gromacs for biophysics and the
> >> following basics have been spec'ed for him:
> >>
> >> CPU: Xeon Gold 6244
> >> GPU: RTX 5000 or 6000
> >>
> >> I'll be surprised if he runs systems with more than 50K particles. Could
> >> you please comment on whether this is a cost-efficient and reasonably
> >> powerful setup? Your past suggestions have been invaluable for us.
> > That will be reasonably fast, but cost efficiency will be awful, to be 
> > honest:
> > - that CPU is a ~$3000 part and won't perform much better than a
> > $4-500 desktop CPU like an i9 9900, let alone a Ryzen 3900X which
> > would be significantly faster.
> > - Quadro cards also pretty low in bang for buck: a 2080 Ti will be
> > close to the RTX 6000 for ~5x less and the 2080 or 2070 Super a bit
> > slower for at least another 1.5x less.
> >
> > Single run at a time or possibly multiple? The proposed (or any 8+
> > core) workstation CPU is fast enough in the majority of the
> > simulations to pair well with two of those GPUs if used for two
> > concurrent simulations. If that's a relevant use-case, I'd recommend
> > two 2070 Super or 2080 cards.
> >
> > Cheers,
> > --
> > Szilárd
> >
> >
> >> Thank you,
> >>
> >> Alex

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Szilárd Páll
Hi Alex,

I've not had a chance to test the new 3rd gen Ryzen CPUs, but all
public benchmarks out there point to the fact that they are a major
improvement over the previous generation Ryzen -- which were already
quite competitive for GPU-accelerated GROMACS runs compared to Intel,
especially in perf/price.

One caveat for dual-GPU setups on the i9 9900 or the Ryzen 3900X is
that they don't have enough PCIe lanes for peak CPU-GPU transfer (x8
for both of the GPUs), which will lead to slightly lower performance
(I'd estimate <5-10%), in particular compared to i) having a single GPU
plugged into the machine or ii) CPUs like Threadripper or
the i9 79xx series processors, which have more PCIe lanes.

However, if throughput is the goal, the ideal use case, especially for
small simulation systems like <=50k atoms, is to run e.g. 2 runs / GPU,
hence 4 runs on a 2-GPU system, in which case the impact of the
aforementioned limitation will be further decreased.
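
To make the 2-runs-per-GPU case concrete, a sketch of how four such runs might
be launched on a 2-GPU, 16-core desktop; file names, thread counts and pin
offsets below are illustrative, not a recommendation for any specific machine:

gmx mdrun -deffnm run0 -ntmpi 1 -ntomp 4 -pin on -pinoffset 0  -pinstride 1 -gpu_id 0 &
gmx mdrun -deffnm run1 -ntmpi 1 -ntomp 4 -pin on -pinoffset 4  -pinstride 1 -gpu_id 0 &
gmx mdrun -deffnm run2 -ntmpi 1 -ntomp 4 -pin on -pinoffset 8  -pinstride 1 -gpu_id 1 &
gmx mdrun -deffnm run3 -ntmpi 1 -ntomp 4 -pin on -pinoffset 12 -pinstride 1 -gpu_id 1 &
wait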

Cheers,
--
Szilárd


On Tue, Jul 16, 2019 at 7:18 PM Alex  wrote:
>
> That is excellent information, thank you. None of us have dealt with AMD
> CPUs in a while, so would the combination of a Ryzen 3900X and two
> Quadro 2080 Ti be a good choice?
>
> Again, thanks!
>
> Alex
>
>
> On 7/16/2019 8:41 AM, Szilárd Páll wrote:
> > Hi Alex,
> >
> > On Mon, Jul 15, 2019 at 8:53 PM Alex  wrote:
> >> Hi all and especially Szilard!
> >>
> >> My glorious management asked me to post this here. One of our group
> >> members, an ex-NAMD guy, wants to use Gromacs for biophysics and the
> >> following basics have been spec'ed for him:
> >>
> >> CPU: Xeon Gold 6244
> >> GPU: RTX 5000 or 6000
> >>
> >> I'll be surprised if he runs systems with more than 50K particles. Could
> >> you please comment on whether this is a cost-efficient and reasonably
> >> powerful setup? Your past suggestions have been invaluable for us.
> > That will be reasonably fast, but cost efficiency will be awful, to be 
> > honest:
> > - that CPU is a ~$3000 part and won't perform much better than a
> > $4-500 desktop CPU like an i9 9900, let alone a Ryzen 3900X which
> > would be significantly faster.
> > - Quadro cards also pretty low in bang for buck: a 2080 Ti will be
> > close to the RTX 6000 for ~5x less and the 2080 or 2070 Super a bit
> > slower for at least another 1.5x less.
> >
> > Single run at a time or possibly multiple? The proposed (or any 8+
> > core) workstation CPU is fast enough in the majority of the
> > simulations to pair well with two of those GPUs if used for two
> > concurrent simulations. If that's a relevant use-case, I'd recommend
> > two 2070 Super or 2080 cards.
> >
> > Cheers,
> > --
> > Szilárd
> >
> >
> >> Thank you,
> >>
> >> Alex

Re: [gmx-users] decreased performance with free energy

2019-07-17 Thread Szilárd Páll
Hi,

Lower performance, especially with GPUs, is not unexpected, but what you report
is unusually large. I suggest you post your mdp and log files; perhaps there
are some things to improve.

--
Szilárd


On Wed, Jul 17, 2019 at 3:47 PM David de Sancho 
wrote:

> Hi all
> I have been doing some testing for Hamiltonian replica exchange using
> Gromacs 2018.3 on a relatively simple system with 3000 atoms in a cubic
> box.
> For the modified hamiltonian I have simply modified the water interactions
> by generating a typeB atom in the force field ffnonbonded.itp with
> different parameters file and then creating a number of tpr files for
> different lambda values as defined in the mdp files. The only difference
> between mdp files for a simple NVT run and for the HREX runs are the
> following lines:
>
> > ; H-REPLEX
> > free-energy = yes
> > init-lambda-state = 0
> > nstdhdl = 0
> > vdw_lambdas = 0.0 0.05 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
>
> I have tested for performance in the same machine and compared the standard
> NVT run performance (~175 ns/day in 8 cores) with that for the free energy
> tpr file (6.2 ns/day).
> Is this performance loss what you would expect or are there any immediate
> changes you can suggest to improve things? I have found a relatively old
> post on this on Gromacs developers (https://redmine.gromacs.org/issues/742
> ),
> but I am not sure whether it is the exact same problem.
> Thanks,
>
> David

Re: [gmx-users] rtx 2080 gpu

2019-07-17 Thread Szilárd Páll
On Wed, Jul 17, 2019 at 2:13 PM Stefano Guglielmo <
stefano.guglie...@unito.it> wrote:

> Hi Benson,
> thanks for your answer and sorry for my delay: in the meantime I had to
> restore the OS. I obviously re-installed NVIDIA driver (430.64) and CUDA
> 10.1, I re-compiled Gromacs 2019.2 with the following command:
>
> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_SIMD=AVX2_256 -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DREGRESSIONTEST_DOWNLOAD=ON
>
> I did make test and I got 100% passed. but this is the log file:
>
> GROMACS:  gmx mdrun, version 2019.2
> Executable:   /usr/local/gromacs/bin/gmx
> Data prefix:  /usr/local/gromacs
> Working dir:  /home/stefano/CB2
> Process ID:   117020
> Command line:
>   gmx mdrun -deffnm cb2_trz2c3ohene -ntmpi 1 -pin on
>
> GROMACS version:2019.2
> Precision:  single
> Memory model:   64 bit
> MPI library:thread_mpi
> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
> GPU support:CUDA
> SIMD instructions:  AVX2_256
> FFT library:fftw-3.3.8-sse2-avx-avx2-avx2_128
> RDTSCP usage:   enabled
> TNG support:enabled
> Hwloc support:  disabled
> Tracing support:disabled
> C compiler: /usr/bin/cc GNU 4.8.5
> C compiler flags:-mavx2 -mfma -O3 -DNDEBUG -funroll-all-loops
> -fexcess-precision=fast
> C++ compiler:   /usr/bin/c++ GNU 4.8.5
> C++ compiler flags:  -mavx2 -mfma-std=c++11   -O3 -DNDEBUG
> -funroll-all-loops -fexcess-precision=fast
> CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler
> driver;Copyright (c) 2005-2019 NVIDIA Corporation;Built on
> Wed_Apr_24_19:10:27_PDT_2019;Cuda compilation tools, release 10.1,
> V10.1.168
> CUDA compiler
>
> flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=compute_75;-use_fast_math;;;
>
> ;-mavx2;-mfma;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
> CUDA driver:10.10
> CUDA runtime:   N/A
>
> NOTE: Detection of GPUs failed. The API reported:
>   unknown error
>   GROMACS cannot run tasks on a GPU.
>
> Running on 1 node with total 32 cores, 64 logical cores, 0 compatible GPUs
> Hardware detected:
>   CPU info:
> Vendor: AMD
> Brand:  AMD Ryzen Threadripper 2990WX 32-Core Processor
> Family: 23   Model: 8   Stepping: 2
> Features: aes amd apic avx avx2 clfsh cmov cx8 cx16 f16c fma htt lahf
> misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdrnd rdtscp
> sha sse2 sse3 sse4a sse4.1 sse4.2 ssse3
>   Hardware topology: Basic
> Sockets, cores, and logical processors:
>   Socket  0: [   0  32] [   1  33] [   2  34] [   3  35] [   4  36] [
> 5  37] [   6  38] [   7  39] [  16  48] [  17  49] [  18  50] [  19  51] [
>  20  52] [  21  53] [  22  54] [  23  55] [   8  40] [   9  41] [  10  42]
> [  11  43] [  12  44] [  13  45] [  14  46] [  15  47] [  24  56] [  25
>  57] [  26  58] [  27  59] [  28  60] [  29  61] [  30  62] [  31  63]
>
> Do you have any suggestions?
>
> PS: I set SIMD option to AVX2_256 with an AMD Ryzen Threadripper 2990WX
> 32-Core Processor: do you think it is a good idea?
>

In general, I suggest you stick to the default (which is now AVX2_128 on this
CPU); this will typically be faster, in particular in CPU-only runs. The
difference may not be significant in GPU-accelerated runs (and in some not
too common cases it can even be a little bit faster with AVX2_256).
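
In practice that just means dropping the explicit SIMD flag and letting CMake
detect it, e.g. (otherwise the same command as above):

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON \
      -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DREGRESSIONTEST_DOWNLOAD=ON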

Cheers,
--
Szilárd


>
> Thanks again
> Stefano
>
> Il giorno mer 10 lug 2019 alle ore 08:13 Benson Muite <
> benson_mu...@emailplus.org> ha scritto:
>
> > Hi Stefano,
> >
> > What was your compilation command? (it may be helpful to add SIMD
> > support appropriate to your processor
> >
> >
> http://manual.gromacs.org/documentation/current/install-guide/index.html#simd-support
> > )
> >
> > Did you run make test after compiling?
> >
> > Benson
> >
> > On 7/10/19 1:18 AM, Stefano Guglielmo wrote:
> > > Dear all,
> > > I have a centOS machine equipped with two RTX 2080 cards, with nvidia
> > > drivers 430.2; I installed cuda toolkit 10-1. when executing mdrun the
> > log
> > > reported the following message:
> > >
> > > GROMACS version:2019.2
> > > Precision:  single
> > > Memory model:   64 bit
> > > MPI library:thread_mpi
> > > OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
> > > GPU support:CUDA
> > > SIMD instructions:  NONE
> > > FFT library:fftw-3.3.8
> > > RDTSCP usage:   disabled
> > > TNG support:enabled
> > > Hwloc support:  disabled
> > > Tracing support:disabled
> > > C compiler: /usr/bin/cc GNU 4.8.5
> > > C compiler flags:-O3 -DNDEBUG -funroll-all-loops
> > > -fexcess-precision=fast
> > > C++ compiler:   

Re: [gmx-users] rtx 2080 gpu

2019-07-17 Thread Szilárd Páll
On Wed, Jul 10, 2019 at 2:18 AM Stefano Guglielmo <
stefano.guglie...@unito.it> wrote:

> Dear all,
> I have a centOS machine equipped with two RTX 2080 cards, with nvidia
> drivers 430.2; I installed cuda toolkit 10-1. when executing mdrun the log
> reported the following message:
>
> GROMACS version:2019.2
> Precision:  single
> Memory model:   64 bit
> MPI library:thread_mpi
> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
> GPU support:CUDA
> SIMD instructions:  NONE
> FFT library:fftw-3.3.8
> RDTSCP usage:   disabled
> TNG support:enabled
> Hwloc support:  disabled
> Tracing support:disabled
> C compiler: /usr/bin/cc GNU 4.8.5
> C compiler flags:-O3 -DNDEBUG -funroll-all-loops
> -fexcess-precision=fast
> C++ compiler:   /usr/bin/c++ GNU 4.8.5
> C++ compiler flags: -std=c++11   -O3 -DNDEBUG -funroll-all-loops
> -fexcess-precision=fast
> CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler
> driver;Copyright (c) 2005-2019 NVIDIA Corporation;Built on
> Wed_Apr_24_19:10:27_PDT_2019;Cuda compilation tools, release 10.1,
> V10.1.168
> CUDA compiler
>
> flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=compute_75;-use_fast_math;;;
> ;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
> CUDA driver:10.20
> CUDA runtime:   N/A
>

ˆˆˆ
Something was not correct about your CUDA runtime installation.
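
A quick way to confirm whether the runtime side is usable at all, independently
of GROMACS (the deviceQuery path below is the usual toolkit default and may
differ on your install):

nvidia-smi                                      # does the driver see both cards?
/usr/local/cuda/extras/demo_suite/deviceQuery   # can the runtime enumerate them?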

--
Szilárd


>
> NOTE: Detection of GPUs failed. The API reported:
>   unknown error
>   GROMACS cannot run tasks on a GPU.
>
> Does anyone have any suggestions?
> Thanks in advance
> Stefano
>
>
>
> --
> Stefano GUGLIELMO PhD
> Assistant Professor of Medicinal Chemistry
> Department of Drug Science and Technology
> Via P. Giuria 9
> 10125 Turin, ITALY
> ph. +39 (0)11 6707178

[gmx-users] decreased performance with free energy

2019-07-17 Thread David de Sancho
Hi all
I have been doing some testing for Hamiltonian replica exchange using
Gromacs 2018.3 on a relatively simple system with 3000 atoms in a cubic
box.
For the modified Hamiltonian I have simply modified the water interactions
by generating a typeB atom in the force field file ffnonbonded.itp with
different parameters, and then creating a number of tpr files for
different lambda values as defined in the mdp files. The only difference
between the mdp files for a simple NVT run and for the HREX runs is the
following lines:

> ; H-REPLEX
> free-energy = yes
> init-lambda-state = 0
> nstdhdl = 0
> vdw_lambdas = 0.0 0.05 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
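
For reference, a typical way such a set of tpr files is launched as replica
exchange, assuming an MPI-enabled build (gmx_mpi), one directory per lambda
state, and an illustrative exchange interval:

mpirun -np 12 gmx_mpi mdrun -multidir lambda_{00..11} -replex 500 -deffnm hrex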

I have tested performance on the same machine and compared the standard
NVT run performance (~175 ns/day on 8 cores) with that for the free energy
tpr file (6.2 ns/day).
Is this performance loss what you would expect or are there any immediate
changes you can suggest to improve things? I have found a relatively old
post on this on Gromacs developers (https://redmine.gromacs.org/issues/742),
but I am not sure whether it is the exact same problem.
Thanks,

David


Re: [gmx-users] rtx 2080 gpu

2019-07-17 Thread Stefano Guglielmo
Hi Benson,
thanks for your answer and sorry for my delay: in the meantime I had to
restore the OS. I obviously re-installed the NVIDIA driver (430.64) and CUDA
10.1, and re-compiled Gromacs 2019.2 with the following command:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_SIMD=AVX2_256 -DGMX_GPU=ON
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DREGRESSIONTEST_DOWNLOAD=ON

I did make test and got 100% passed, but this is the log file:

GROMACS:  gmx mdrun, version 2019.2
Executable:   /usr/local/gromacs/bin/gmx
Data prefix:  /usr/local/gromacs
Working dir:  /home/stefano/CB2
Process ID:   117020
Command line:
  gmx mdrun -deffnm cb2_trz2c3ohene -ntmpi 1 -pin on

GROMACS version:2019.2
Precision:  single
Memory model:   64 bit
MPI library:thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:CUDA
SIMD instructions:  AVX2_256
FFT library:fftw-3.3.8-sse2-avx-avx2-avx2_128
RDTSCP usage:   enabled
TNG support:enabled
Hwloc support:  disabled
Tracing support:disabled
C compiler: /usr/bin/cc GNU 4.8.5
C compiler flags:-mavx2 -mfma -O3 -DNDEBUG -funroll-all-loops
-fexcess-precision=fast
C++ compiler:   /usr/bin/c++ GNU 4.8.5
C++ compiler flags:  -mavx2 -mfma-std=c++11   -O3 -DNDEBUG
-funroll-all-loops -fexcess-precision=fast
CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler
driver;Copyright (c) 2005-2019 NVIDIA Corporation;Built on
Wed_Apr_24_19:10:27_PDT_2019;Cuda compilation tools, release 10.1, V10.1.168
CUDA compiler
flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=compute_75;-use_fast_math;;;
;-mavx2;-mfma;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:10.10
CUDA runtime:   N/A

NOTE: Detection of GPUs failed. The API reported:
  unknown error
  GROMACS cannot run tasks on a GPU.

Running on 1 node with total 32 cores, 64 logical cores, 0 compatible GPUs
Hardware detected:
  CPU info:
Vendor: AMD
Brand:  AMD Ryzen Threadripper 2990WX 32-Core Processor
Family: 23   Model: 8   Stepping: 2
Features: aes amd apic avx avx2 clfsh cmov cx8 cx16 f16c fma htt lahf
misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdrnd rdtscp
sha sse2 sse3 sse4a sse4.1 sse4.2 ssse3
  Hardware topology: Basic
Sockets, cores, and logical processors:
  Socket  0: [   0  32] [   1  33] [   2  34] [   3  35] [   4  36] [
5  37] [   6  38] [   7  39] [  16  48] [  17  49] [  18  50] [  19  51] [
 20  52] [  21  53] [  22  54] [  23  55] [   8  40] [   9  41] [  10  42]
[  11  43] [  12  44] [  13  45] [  14  46] [  15  47] [  24  56] [  25
 57] [  26  58] [  27  59] [  28  60] [  29  61] [  30  62] [  31  63]

Do you have any suggestions?

PS: I set SIMD option to AVX2_256 with an AMD Ryzen Threadripper 2990WX
32-Core Processor: do you think it is a good idea?

Thanks again
Stefano

Il giorno mer 10 lug 2019 alle ore 08:13 Benson Muite <
benson_mu...@emailplus.org> ha scritto:

> Hi Stefano,
>
> What was your compilation command? (it may be helpful to add SIMD
> support appropriate to your processor
>
> http://manual.gromacs.org/documentation/current/install-guide/index.html#simd-support
> )
>
> Did you run make test after compiling?
>
> Benson
>
> On 7/10/19 1:18 AM, Stefano Guglielmo wrote:
> > Dear all,
> > I have a centOS machine equipped with two RTX 2080 cards, with nvidia
> > drivers 430.2; I installed cuda toolkit 10-1. when executing mdrun the
> log
> > reported the following message:
> >
> > GROMACS version:2019.2
> > Precision:  single
> > Memory model:   64 bit
> > MPI library:thread_mpi
> > OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
> > GPU support:CUDA
> > SIMD instructions:  NONE
> > FFT library:fftw-3.3.8
> > RDTSCP usage:   disabled
> > TNG support:enabled
> > Hwloc support:  disabled
> > Tracing support:disabled
> > C compiler: /usr/bin/cc GNU 4.8.5
> > C compiler flags:-O3 -DNDEBUG -funroll-all-loops
> > -fexcess-precision=fast
> > C++ compiler:   /usr/bin/c++ GNU 4.8.5
> > C++ compiler flags: -std=c++11   -O3 -DNDEBUG -funroll-all-loops
> > -fexcess-precision=fast
> > CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda
> compiler
> > driver;Copyright (c) 2005-2019 NVIDIA Corporation;Built on
> > Wed_Apr_24_19:10:27_PDT_2019;Cuda compilation tools, release 10.1,
> V10.1.168
> > CUDA compiler
> >
> 

Re: [gmx-users] heat capacity collection

2019-07-17 Thread Amin Rouy
Thank you David, but I just had an issue with the output file. I wanted to have
the heat capacities as a data file, which I guess GROMACS does not provide a
separate output file for. So I managed to do that with my own
script (calculating the heat capacity from the energy and temperature output
data).
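
For reference, a minimal sketch of one way to do this, assuming the total
energy is first written out with gmx energy (file names and the temperature
are placeholders; kB is in kJ/(mol K)):

echo "Total-Energy" | gmx energy -f ener.edr -o energy.xvg
# NVT fluctuation formula: Cv ~ (<E^2> - <E>^2) / (kB T^2), for the whole box
awk -v T=300 'BEGIN{kb=0.0083144621}
    !/^[#@]/ {n++; s+=$2; s2+=$2*$2}
    END{m=s/n; printf "Cv = %g kJ/(mol K)\n", (s2/n - m*m)/(kb*T*T)}' energy.xvg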


On Tue, Jul 16, 2019 at 9:32 PM David van der Spoel 
wrote:

> Den 2019-07-16 kl. 13:30, skrev Amin Rouy:
> > Hi everyone,
> >
> > I try to collect the heat capacities from my set of simulations. I see that
> > the heat capacity through gmx energy -fluct_props does not provide an
> > output file with the heat capacities.
> > Any suggestion on how I can collect them (or how to do it with a script)?
> >
> > thanks
> >
> What kind of systems are these?
> I have quite a few reference data for liquids on
> http://virtualchemistry.org
> These were done using the gmx dos program that computes quantum
> corrections but this is not entirely trivial.
>
> --
> David van der Spoel, Ph.D., Professor of Biology
> Head of Department, Cell & Molecular Biology, Uppsala University.
> Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
> http://www.icm.uu.se