Thank you again, "I'll be back" when I sort all this out.
Paul
-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
On Behalf Of Szilárd Páll
Sent: Monday, December 17, 2018 1:16 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] using dual CPU's
On Thu, Dec 13, 2018 at 8:39 PM p buscemi wrote:
> Carsten
>
> thanks for the suggestion.
> Is it necessary to use the MPI version of GROMACS when using -multidir? I
> now have the single-node version loaded.
>
Yes.
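For reference, a minimal -multidir invocation needs the MPI build (gmx_mpi) and a rank count divisible by the number of directories; the directory names here are hypothetical:

```shell
# Four independent runs, each in its own directory containing a topol.tpr;
# the 8 MPI ranks are split evenly, 2 per simulation.
mpirun -np 8 gmx_mpi mdrun -multidir run1 run2 run3 run4
```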
> I'm hammering out the first 2080ti with the 32 core AMD. results are not
> stellar.
I would not expect this to be more than ~1.5x faster than using
one GPU.
Thanks again,
> Paul
>
>
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of paul
> buscemi
> Sent: Tuesday, December 11, 2018 5:56 PM
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] using dual CPU's
>
> Paul
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
> On Behalf Of p buscemi
> Sent: Thursday, December 13, 2018 1:38 PM
> To: gmx-us...@gromacs.org
> Cc: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] using dual CPU's
To: gmx-us...@gromacs.org
Cc: gmx-us...@gromacs.org
Subject: Re: [gmx-users] using dual CPU's
Carsten
thanks for the suggestion.
Is it necessary to use the MPI version of GROMACS when using -multidir? I now
have the single-node version loaded.
I'm hammering out the first 2080ti with the 32 core AMD. results
ros and the last two 1, i.e..
Would you please complete the "i.e."?
Thanks again,
Paul
-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
On Behalf Of paul buscemi
Sent: Tuesday, December 11, 2018 5:56 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] using dual CPU's
Carsten
thanks for the suggestion.
Is it necessary to use the MPI version of GROMACS when using -multidir? I now
have the single-node version loaded.
I'm hammering out the first 2080ti with the 32 core AMD. Results are not
stellar: slower than an Intel i7-7000. But I'll beat on it some more.
Hi,
> On 13. Dec 2018, at 01:11, paul buscemi wrote:
>
> Carsten, thanks for the response.
>
> my mistake - it was the GTX 980 from fig 3; I was recalling from memory.
> I assume that similar results would be achieved with the 1060's
There we measured a 19 percent performance increase for the 80k atom system.
Carsten, thanks for the response.
my mistake - it was the GTX 980 from fig 3; I was recalling from memory.
I assume that similar results would be achieved with the 1060’s
No I did not reset , my results were a compilation of 4-5 runs each under
slightly different conditions on two
Hi Paul,
> On 12. Dec 2018, at 15:36, pbusc...@q.com wrote:
>
> Dear users ( one more try )
>
> I am trying to use 2 GPU cards to improve modeling speed. The computer
> described in the log files is used to iron out models, and I am using it to
> learn how to use two GPU cards before purchasing
Dear users ( one more try )
I am trying to use 2 GPU cards to improve modeling speed. The computer
described in the log files is used to iron out models, and I am using it to
learn how to use two GPU cards before purchasing two new RTX 2080 ti's. The CPU
is an 8 core 16 thread AMD and the GPU's are two GTX 1060's.
Hi,
In your case the slow down was in part because with a single GPU the PME
work by default went to that GPU. But with two GPUs the default is to leave
the PME work on the CPU (which for your test was very weak), because the
alternative is often not a good idea. You can try it out with the
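The defaults described above can be overridden from the command line. A sketch, assuming a GROMACS 2018-era mdrun; the thread counts are illustrative:

```shell
# One GPU: both the nonbonded and the PME work can be offloaded to it.
gmx mdrun -ntmpi 1 -ntomp 16 -nb gpu -pme gpu

# Two GPUs: PME defaults back to the CPU. To put it on a GPU instead,
# dedicate one rank to PME and map the GPU tasks explicitly:
# PP rank -> GPU 0, PME rank -> GPU 1.
gmx mdrun -ntmpi 2 -ntomp 8 -npme 1 -nb gpu -pme gpu -gputasks 01
```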
Szilard,
Thank you very much for the information, and I apologize for how the text
appeared - internet demons at work.
The computer described in the log files is a basic test rig which we use to
iron out models. The workhorse is a many core AMD with now one and hopefully
soon to be two 2080ti’s,
Without having read all details (partly due to the hard to read log
files), what I can certainly recommend is: unless you really need to,
avoid running single simulations with only a few 10s of thousands of
atoms across multiple GPUs. You'll be _much_ better off using your
limited resources by
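The truncated advice above presumably continues toward running independent simulations, one per GPU, rather than one small run across both. A sketch of that pattern, with hypothetical directory names and the CPU divided via pin offsets:

```shell
# Two independent runs sharing one node: each gets its own GPU and its
# own half of the hardware threads.
( cd simA && gmx mdrun -deffnm simA -ntmpi 1 -ntomp 8 -gpu_id 0 -pin on -pinoffset 0 ) &
( cd simB && gmx mdrun -deffnm simB -ntmpi 1 -ntomp 8 -gpu_id 1 -pin on -pinoffset 8 ) &
wait
```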
> On Dec 10, 2018, at 7:33 PM, paul buscemi wrote:
>
>
> Mark, attached are the tail ends of three log files for
> the same system but run on an AMD 8 Core/16 Thread 2700x, 16G ram
> In summary:
> for ntmpi:ntomp of 1:16, 2:8, and auto selection (4:4): 12.0, 8.8, and
> 6.0 ns/day.
>
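The three rank/thread splits quoted above correspond to these invocations (a sketch; the -deffnm name is assumed, the ns/day figures are the ones reported above):

```shell
gmx mdrun -deffnm run -ntmpi 1 -ntomp 16   # 1:16 -> 12.0 ns/day
gmx mdrun -deffnm run -ntmpi 2 -ntomp 8    # 2:8  ->  8.8 ns/day
gmx mdrun -deffnm run -ntmpi 4 -ntomp 4    # 4:4 (auto) -> 6.0 ns/day
```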
Mark,
I may have misread the ppt on optimization, but I did experiment with
variations of ntomp and ntmpi, so using less than six threads was a 2 x 3
combination. Tonight I will put both
this is the last part of the log from a 2 GPU setup using gmx
Hi,
One of your reported runs only used six threads, by the way.
Something sensible can be said when the performance report at the end of
the log file can be seen.
Mark
On Tue., 11 Dec. 2018, 01:25 p buscemi, wrote:
> Thank you, Mark, for the prompt response. I realize the limitations of the
Thank you, Mark, for the prompt response. I realize the limitations of the
system ( it's over 8 years old ), but I did not expect the speed to decrease by 50%
with 12 available threads ! No combination of ntomp, ntmpi could raise ns/day
above 4 with two GPU, vs 6 with one GPU.
This is actually a
Hi,
Your CPUs are pretty old and few, and your system is rather small, so I
would not expect to get a useful speedup from adding a second GPU to a
setup that may have already been limited by the CPU. Run 5000 steps with
one GPU and look at the reporting at the end of the log file (or upload it
to
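A short benchmark along those lines might look like this (a sketch; -resethway resets the performance counters halfway so startup cost does not skew the reported ns/day):

```shell
gmx mdrun -deffnm bench -nsteps 5000 -resethway
```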
Dear Users,
I have good luck using a single GPU with the basic setup. However, in going
from one GTX 1060 to a system with two - 50,000 atoms - the rate decreased from
10 ns/day to 5 or worse. The system models a ligand, solvent ( water ), and a
lipid membrane.
The CPU is a 6 core Intel i7-970.