Giovanni,
I have clients in Australia running AMD Rome processors in their visualisation nodes, connected to Scale 5.0.4 clusters, with no issues.
Spectrum Scale doesn't differentiate between x86 processor vendors; it only cares that the platform is x86_64 (this is about OS support more than anything else).
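For what it's worth, a quick sanity check on an EPYC node (assuming the default /usr/lpp/mmfs install path) is to confirm that the daemon binary and the kernel agree on the architecture:

# Both should report x86-64 / x86_64 on an EPYC box
file /usr/lpp/mmfs/bin/mmfsd
uname -m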
Andrew Beattie
File and Object Storage Technical Specialist - A/NZ
IBM Systems - Storage
Phone: 614-2133-7927
E-mail: [email protected]
----- Original message -----
From: Giovanni Bracco <[email protected]>
Sent by: [email protected]
To: gpfsug main discussion list <[email protected]>, Frederick Stock <[email protected]>
Cc:
Subject: [EXTERNAL] Re: [gpfsug-discuss] tsgskkm stuck---> what about AMD epyc support in GPFS?
Date: Thu, Sep 3, 2020 7:29 AM
I am curious to know about AMD EPYC support in GPFS: what is the status?
Giovanni Bracco
On 28/08/20 14:25, Frederick Stock wrote:
> I am not sure that Spectrum Scale has stated it supports the AMD EPYC
> (Rome?) processors. You may want to open a help case to determine the
> cause of this problem.
> Note that Spectrum Scale 4.2.x goes out of service on September 30, 2020,
> so you may want to consider upgrading your cluster. And should Scale
> officially support the AMD EPYC processor, it would not be on Scale 4.2.x.
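> If you want to confirm what level the cluster is committed to before
> planning the upgrade, mmlsconfig can report it from any node:
>
> # Shows the minimum release level the cluster is committed to
> mmlsconfig minReleaseLevel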
>
> Fred
> __________________________________________________
> Fred Stock | IBM Pittsburgh Lab | 720-430-8821
> [email protected]
>
> ----- Original message -----
> From: Philipp Helo Rehs <[email protected]>
> Sent by: [email protected]
> To: gpfsug main discussion list <[email protected]>
> Cc:
> Subject: [EXTERNAL] [gpfsug-discuss] tsgskkm stuck
> Date: Fri, Aug 28, 2020 5:52 AM
> Hello,
>
> We have a GPFS v4 cluster running with four NSDs, and I am trying to add
> some clients:
>
> mmaddnode -N hpc-storage-1-ib:client:hpc-storage-1
>
> This command hangs and does not finish.
>
> When I look at the server, I can see the following processes, which
> never finish:
>
> root 38138 0.0 0.0 123048 10376 ? Ss 11:32 0:00
> /usr/lpp/mmfs/bin/mmksh /usr/lpp/mmfs/bin/mmremote checkNewClusterNode3
> lc/setupClient
> %%9999%%:00_VERSION_LINE::1709:3:1::lc:gpfs3.hilbert.hpc.uni-duesseldorf.de::0:/bin/ssh:/bin/scp:5362040003754711198:lc2:1597757602::HPCStorage.hilbert.hpc.uni-duesseldorf.de:2:1:1:2:A:::central:0.0:
> %%home%%:20_MEMBER_NODE::5:20:hpc-storage-1
> root 38169 0.0 0.0 123564 10892 ? S 11:32 0:00
> /usr/lpp/mmfs/bin/mmksh /usr/lpp/mmfs/bin/mmremote ccrctl setupClient 2
> 21479
> 1=gpfs3-ib.hilbert.hpc.uni-duesseldorf.de:1191,2=gpfs4-ib.hilbert.hpc.uni-duesseldorf.de:1191,4=gpfs6-ib.hilbert.hpc.uni-duesseldorf.de:1191,3=gpfs5-ib.hilbert.hpc.uni-duesseldorf.de:1191
> 0 1191
> root 38212 100 0.0 35544 5752 ? R 11:32 9:40
> /usr/lpp/mmfs/bin/tsgskkm store --cert
> /var/mmfs/ssl/stage/tmpKeyData.mmremote.38169.cert --priv
> /var/mmfs/ssl/stage/tmpKeyData.mmremote.38169.priv --out
> /var/mmfs/ssl/stage/tmpKeyData.mmremote.38169.keystore --fips off
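> In case it helps to narrow this down, my next step (only a guess that
> key generation is starved for randomness) is to check the entropy pool
> and attach to the stuck process:
>
> # Available kernel entropy; very low values can stall blocking
> # reads from /dev/random on older kernels
> cat /proc/sys/kernel/random/entropy_avail
>
> # PID 38212 is the tsgskkm process from the ps output above; this
> # shows whether it is blocked in a syscall or spinning in userspace
> strace -tt -p 38212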
>
> The node is an AMD EPYC.
>
> Any idea what could cause the issue?
>
> SSH works in both directions, and the firewall is disabled.
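> To rule out the daemon port as well (1191, as advertised in the ccrctl
> line above), a plain TCP probe from the new client to an NSD server:
>
> # Should report the port as open if nothing is blocking it
> nc -vz gpfs3-ib.hilbert.hpc.uni-duesseldorf.de 1191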
>
>
> Kind regards
>
> Philipp Rehs
>
>
--
Giovanni Bracco
phone +39 351 8804788
E-mail [email protected]
WWW http://www.afs.enea.it/bracco
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
