Re: [gpfsug-discuss] AMD Rome support?

2019-10-22 Thread Peinkofer, Stephan
Dear Jon,

we run a bunch of AMD EPYC Naples dual-socket servers with GPFS in our TSM 
server cluster. From what I can say it runs stably, but IO performance in 
general, and GPFS performance in particular, is rather poor, even compared to a 
Xeon E5 v3 system. To put that into perspective: on the Xeon systems with two 
EDR IB links, we get 20 GB/s read and write performance to GPFS using iozone 
very easily. On the AMD systems, with all the AMD EPYC tuning suggestions you 
can find on the internet applied, we get around 15 GB/s write but only 6 GB/s 
read. We also opened a ticket with IBM for this but never found anything out, 
probably because not many people are running GPFS on AMD EPYC right now. The 
answer from AMD was basically that the poor IO performance is expected on 
dual-socket systems because the socket interconnect is the bottleneck. (See 
also the IB tests Dell did: 
https://www.dell.com/support/article/de/de/debsdt1/sln313856/amd-epyc-stream-hpl-infiniband-and-wrf-performance-study?lang=en
 — as soon as you have to cross the socket border, you get only half of the IB 
performance.)
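For context, the 20 GB/s Xeon figure is already close to the line rate of two
EDR links, which makes the AMD read numbers look all the worse. A quick
back-of-envelope check (assuming 100 Gbit/s per EDR link and ignoring
encoding and protocol overhead, so the real usable ceiling is a bit lower):

```python
# Back-of-envelope sanity check for the Xeon numbers quoted above.
# One EDR InfiniBand link signals at 100 Gbit/s, i.e. 12.5 GB/s raw.
edr_gbit_per_link = 100
links = 2

raw_gb_per_s = links * edr_gbit_per_link / 8   # aggregate raw bandwidth in GB/s
measured_gb_per_s = 20                         # iozone result on the Xeon nodes
efficiency = measured_gb_per_s / raw_gb_per_s

print(f"raw: {raw_gb_per_s} GB/s, measured: {measured_gb_per_s} GB/s "
      f"({efficiency:.0%} of line rate)")
# -> raw: 25.0 GB/s, measured: 20 GB/s (80% of line rate)
```

By the same yardstick, 6 GB/s read on the AMD boxes is under a quarter of the
available link bandwidth.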

Of course with Rome everything gets better (that's what AMD told us through 
our vendor), but if you have the chance, I would recommend benchmarking AMD 
vs. Xeon with your particular IO workloads before buying.

Best Regards,
Stephan Peinkofer
--
Stephan Peinkofer
Dipl. Inf. (FH), M. Sc. (TUM)

Leibniz Supercomputing Centre
Data and Storage Division
Boltzmannstraße 1, 85748 Garching b. München
URL: http://www.lrz.de

On 22. Oct 2019, at 11:12, Jon Diprose <j...@well.ox.ac.uk> wrote:

Dear GPFSUG,

I see the FAQ says Spectrum Scale is supported on "AMD Opteron based servers".

Does anyone know if/when support will be officially extended to cover AMD Epyc, 
especially the new 7002 (Rome) series?

Does anyone have any experience of running Spectrum Scale on Rome they could 
share, in particular for protocol nodes and for plain clients?

Thanks,

Jon

--
Dr. Jonathan Diprose <j...@well.ox.ac.uk>  Tel: 01865 287837
Research Computing Manager
Henry Wellcome Building for Genomic Medicine Roosevelt Drive, Headington, 
Oxford OX3 7BN

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



Re: [gpfsug-discuss] AMD Rome support?

2019-10-22 Thread Jonathan Buzzard
On 22/10/2019 17:30, Felipe Knop wrote:
> Jon,
> 
> AMD processors which are completely compatible with Opteron should also 
> work.
> 
> Please also refer to Q5.3 on the SMP scaling limit: 64 cores:
> 
> 
> https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html
> 

Hum, is that per CPU or the total for a machine? The reason I ask is that we 
have some large-memory nodes (3 TB of RAM) which are quad Xeon 6138 
machines, giving a total of 80 cores per node...

We have not seen any problems, but if it is 64 cores per machine, IBM 
needs to do some scaling testing ASAP to raise the limit, as 64 cores per 
machine in 2019 is ridiculously low.
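For what it's worth, checking a node against the FAQ figure is trivial; a
small sketch, assuming (as the FAQ wording suggests but does not confirm)
that the limit is per node, and noting that `os.cpu_count()` reports logical
CPUs, so SMT threads count towards the number it returns:

```python
import os

# Tested SMP scaling limit from Q5.3 of the Spectrum Scale FAQ.
FAQ_TESTED_CORES = 64

# Logical CPUs visible to the OS on this node (includes SMT threads,
# so a 2x SMT machine reports double its physical core count).
cores = os.cpu_count()

if cores and cores > FAQ_TESTED_CORES:
    print(f"{cores} logical CPUs: above the tested limit of {FAQ_TESTED_CORES}")
else:
    print(f"{cores} logical CPUs: within the tested limit")
```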

JAB.

-- 
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG


[gpfsug-discuss] AMD Rome support?

2019-10-22 Thread Jon Diprose
Dear GPFSUG,

I see the FAQ says Spectrum Scale is supported on "AMD Opteron based servers".

Does anyone know if/when support will be officially extended to cover AMD Epyc, 
especially the new 7002 (Rome) series?

Does anyone have any experience of running Spectrum Scale on Rome they could 
share, in particular for protocol nodes and for plain clients?

Thanks,

Jon

-- 
Dr. Jonathan Diprose  Tel: 01865 287837
Research Computing Manager
Henry Wellcome Building for Genomic Medicine Roosevelt Drive, Headington, 
Oxford OX3 7BN
