It doesn’t seem to be documented anywhere, but we can add ESS to non-ESS
clusters. It’s mostly just a matter of following the QDG, skipping the
gssgencluster step.

Just beware that the first «gssgenclusterrgs» run will take down your
current cluster, since it changes quite a few config settings. It recently
caught me by surprise :-/
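For reference, a rough sketch of the sequence, assuming the ESS I/O nodes are
already deployed with the ESS software stack (node names and options are
illustrative; the QDG remains the authoritative procedure):

```shell
# Join the ESS I/O nodes and EMS to the existing (non-ESS) cluster,
# instead of running gssgencluster, which would create a new cluster:
mmaddnode -N essio1,essio2,ems1

# The I/O nodes will act as NSD servers, so they need server licenses:
mmchlicense server --accept -N essio1,essio2

# Then create the recovery groups. This is the step that changes
# cluster-wide config settings and took my running cluster down,
# so plan for an outage:
gssgenclusterrgs -G gss_ppc64 --suffix=-hs
```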


Involve IBM lab services, and we should be able to help :-)


  -jf

ons. 3. apr. 2019 kl. 21:35 skrev Prasad Surampudi <
[email protected]>:

> Actually, we have a SAS Grid - Scale cluster with V7000 and Flash storage.
> We also have protocol nodes for SMB access to SAS applications/users. Now,
> we are planning to gradually move our cluster from V7000/Flash to ESS and
> retire V7Ks. So, when we grow our filesystem, we are thinking of adding an
> ESS as an additional block of storage instead of adding another V7000.
> Definitely we'll keep the ESS Disk Enclosures in a separate GPFS pool in
> the same filesystem, but can't create a new filesystem as we want to have
> single name space for our SMB Shares. Also, we'd like keep all our existing
> compute, protocol, and NSD servers all in the same scale cluster along with
> ESS IO nodes and EMS. When I looked at the ESS commands, I don't see an
> option for adding ESS nodes to an existing cluster, like mmaddnode or
> similar commands. So, I'm just wondering how we could add ESS IO nodes to
> an existing cluster like any other node. Is running the mmaddnode command
> on ESS possible? Also, it looks like this goes against IBM's recommendation
> of separating the storage, compute, and protocol nodes into their own Scale
> clusters and using cross-cluster filesystem mounts. Any comments/suggestions?
>
> Prasad Surampudi
>
> The ATS Group
>
>
>
> ------------------------------
> *From:* [email protected] <
> [email protected]> on behalf of
> [email protected] <
> [email protected]>
> *Sent:* Wednesday, April 3, 2019 2:54 PM
> *To:* [email protected]
> *Subject:* gpfsug-discuss Digest, Vol 87, Issue 4
>
> Send gpfsug-discuss mailing list submissions to
>         [email protected]
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> or, via email, send a message with subject or body 'help' to
>         [email protected]
>
> You can reach the person managing the list at
>         [email protected]
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gpfsug-discuss digest..."
>
>
> Today's Topics:
>
>    1. Re: Adding ESS to existing Scale Cluster (Sanchez, Paul)
>    2. New ESS install - Network adapter down level (Oesterlin, Robert)
>    3. Re: New ESS install - Network adapter down level
>       (Jan-Frode Myklebust)
>    4. Re: New ESS install - Network adapter down level
>       (Stephen R Buchanan)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 3 Apr 2019 16:41:32 +0000
> From: "Sanchez, Paul" <[email protected]>
> To: gpfsug main discussion list <[email protected]>
> Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset="us-ascii"
>
> > note though you can't have GNR based vdisks (ESS/DSS-G) in the same
> storage pool.
>
> At one time there was definitely a warning from IBM in the docs about not
> mixing big-endian and little-endian GNR in the same cluster/filesystem.
> But at least since Nov 2017, IBM has published videos showing clusters
> containing both.  (In my opinion, they had to support this because they
> changed the endian-ness of the ESS from BE to LE.)
>
> I don't know about all ancillary components (e.g. GUI) but as for Scale
> itself, I can confirm that filesystems can contain NSDs which are provided
> by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN
> storage based NSD servers.  We typically do rolling upgrades of GNR
> building blocks by adding blocks to an existing cluster, emptying and
> removing the existing blocks, upgrading those in isolation, then repeating
> with the next cluster.  As a result, we have had every combination in play
> at some point in time.  Care just needs to be taken with nodeclass naming
> and mmchconfig parameters.  (We derive the correct params for each new
> building block from its final config after upgrading/testing it in
> isolation.)
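> As a sketch of that last step (the nodeclass name and parameter values
> below are illustrative, not recommendations):
>
> ```shell
> # On the new building block, while it is still an isolated cluster,
> # record the tuned settings its deployment applied:
> mmlsconfig
>
> # After joining the block to the production cluster, group its nodes
> # in a nodeclass and apply those settings only to that class:
> mmcrnodeclass ess_gl6 -N essio1,essio2
> mmchconfig pagepool=64G,nsdRAIDBufferPoolSizePct=70 -N ess_gl6
> ```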
>
> -Paul
>
> -----Original Message-----
> From: [email protected] <
> [email protected]> On Behalf Of Simon Thompson
> Sent: Wednesday, April 3, 2019 12:18 PM
> To: gpfsug main discussion list <[email protected]>
> Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster
>
> We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB
> storage (IBM, DDN). But we don't have them in the same file-system.
>
> In theory as a different pool it should work, note though you can't have
> GNR based vdisks (ESS/DSS-G) in the same storage pool.
>
> And if you want to move to a new block size or v5 variable subblocks, then
> you are going to need a new filesystem and to copy the data. So it depends
> what your endgame is, really. We just did such a process, and one of my
> colleagues is going to talk about it at the London user group in May.
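> In sketch form, keeping the ESS NSDs in their own pool within the existing
> filesystem could look like this (NSD, server, pool, and filesystem names
> are made up):
>
> ```shell
> # NSD stanza file (ess_disks.stanza), assigning the ESS vdisk NSDs
> # to a separate storage pool:
> #   %nsd: nsd=ess_v1 servers=essio1,essio2 usage=dataOnly pool=esspool
> #   %nsd: nsd=ess_v2 servers=essio2,essio1 usage=dataOnly pool=esspool
>
> # Add them to the existing filesystem; the pool is created on the fly:
> mmadddisk fs1 -F ess_disks.stanza
> ```
>
> Note that a placement policy (mmchpolicy) is still needed to steer new
> files into the non-system pool.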
>
> Simon
> ________________________________________
> From: [email protected] [
> [email protected]] on behalf of
> [email protected] [[email protected]]
> Sent: 03 April 2019 17:12
> To: [email protected]
> Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster
>
> We are planning to add an ESS GL6 system to our existing Spectrum Scale
> cluster. Can the ESS nodes be added to existing scale cluster without
> changing existing cluster name? Or do we need to create a new scale cluster
> with ESS and import existing filesystems into the new ESS cluster?
>
> Prasad Surampudi
> Sr. Systems Engineer
> The ATS Group
>
>
>
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 3 Apr 2019 17:25:54 +0000
> From: "Oesterlin, Robert" <[email protected]>
> To: gpfsug main discussion list <[email protected]>
> Subject: [gpfsug-discuss] New ESS install - Network adapter down level
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset="utf-8"
>
> Any insight on what command I need to fix this? It's the only error I have
> when running gssinstallcheck.
>
> [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected
> 12.23.8010, net adapter count: 4
>
>
> Bob Oesterlin
> Sr Principal Storage Engineer, Nuance
> 507-269-0413
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 3 Apr 2019 20:11:45 +0200
> From: Jan-Frode Myklebust <[email protected]>
> To: gpfsug main discussion list <[email protected]>
> Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down
>         level
> Message-ID:
>         <CAHwPatjx-r7EHoV_PuB1XMGyGgQomz6KtLS=
> [email protected]>
> Content-Type: text/plain; charset="utf-8"
>
> Have you tried:
>
>   updatenode nodename -P gss_ofed
>
> But is this the known issue listed in the QDG?
>
>
> https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf
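> If that doesn't bring the adapter firmware up, a hedged sketch of the
> update-and-verify loop (node names are illustrative):
>
> ```shell
> # Push the bundled OFED/firmware package to the I/O nodes via xCAT:
> updatenode essio1,essio2 -P gss_ofed
>
> # After rebooting the nodes, re-run the install check to confirm the
> # adapter firmware matches what this ESS level expects:
> gssinstallcheck -N essio1,essio2
> ```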
>
>
>  -jf
>
> ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert <
> [email protected]>:
>
> > Any insight on what command I need to fix this? It's the only error I
> > have when running gssinstallcheck.
> >
> >
> >
> > [ERROR] Network adapter
> > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf
> > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4
> >
> >
> >
> >
> >
> > Bob Oesterlin
> >
> > Sr Principal Storage Engineer, Nuance
> >
> > 507-269-0413
> >
> >
>
> ------------------------------
>
> Message: 4
> Date: Wed, 3 Apr 2019 18:54:00 +0000
> From: "Stephen R Buchanan" <[email protected]>
> To: [email protected]
> Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down
>         level
> Message-ID:
>         <
> ofbd2a098d.0085093e-on002583d1.0066d1e2-002583d1.0067d...@notes.na.collabserv.com
> >
>
> Content-Type: text/plain; charset="us-ascii"
>
> An HTML attachment was scrubbed...
> URL: <
> http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190403/09e229d1/attachment.html
> >
>
> ------------------------------
>
>
>
> End of gpfsug-discuss Digest, Vol 87, Issue 4
> *********************************************
>
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
