Adding the GL2 into your existing cluster shouldn’t be any problem. You would just delete the existing cluster on the GL2, then on the EMS run something like:
gssaddnode -N gssio1-hs --cluster-node netapp-node --nodetype gss --accept-license
gssaddnode -N gssio2-hs --cluster-node netapp-node --nodetype gss --accept-license

and then afterwards create the RGs:

gssgenclusterrgs -G gss_ppc64 --suffix=-hs

Then create the vdisks/NSDs and add them to your existing filesystem.

Beware that the last time I did this, gssgenclusterrgs triggered an
"mmshutdown -a" on the whole cluster, because it wanted to change some config
settings... Caught me a bit by surprise.

-jf

On Wed, 4 Dec 2019 at 10:44, Dorigo Alvise (PSI) <[email protected]> wrote:

> Thank you all for the answers. I'll try to recap my answers to your
> questions:
>
> 1. The purpose is not to merge the clusters "per se"; it is to add the
>    GL2's 700 TB of raw space to the current filesystem provided by the
>    GPFS/NetApp system (which is running out of free space). Of course I am
>    well aware of the heterogeneity of this hypothetical system, so the
>    GL2's NSDs would go into a special pool; but in the end I need a single
>    namespace for files.
> 2. I do not want to do the opposite (merging the GPFS/NetApp into the GL2
>    cluster) because the former is in production and I cannot schedule long
>    downtimes.
> 3. All systems have proper licensing, of course. What does it mean that I
>    could lose IBM support? If the support is for a failing disk drive, I do
>    not think so; if the support is for "strange" GPFS behaviour, I can
>    probably understand.
> 4. The NSDs (in the NetApp system) are in their roles: what do you mean
>    exactly?
> There are RAID sets attached to servers that act as NSD servers, together
> with their attached LUNs.
>
> Alvise
>
> ------------------------------
> *From:* [email protected]
>   <[email protected]> on behalf of
>   Lyle Gayne <[email protected]>
> *Sent:* Tuesday, December 3, 2019 8:30:31 PM
> *To:* [email protected]
> *Cc:* [email protected]
> *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
>
> For:
>
> - A NetApp system with hardware RAID
> - Spectrum Scale 4.2.3-13 running on top of the NetApp *<--- Are these NSD
>   servers in their GPFS roles (where Scale "runs on top")?*
> - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)
>
> What I need to do is to merge the GL2 into the other GPFS cluster (running
> on the NetApp) without losing, of course, the RecoveryGroup configuration,
> etc.
>
> I'd like to ask the experts:
> 1. whether it is feasible, considering the difference in the GPFS versions
>    and the architecture differences (x86_64 vs. POWER);
> 2. if yes, whether anyone has already done something like this and what is
>    the suggested strategy;
> 3. finally: is there any documentation dedicated to this, or at least
>    inspiring the correct procedure?
>
> ......
> Some observations:
>
> 1) Why do you want to MERGE the GL2 into a single cluster with the rest of
> the cluster, rather than simply allowing remote mount of the ESS servers by
> the other GPFS (NSD client) nodes?
>
> 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our
> coexistence rules.
>
> 3) Mixing x86 and Power, especially as NSD servers, should pose no issues.
> Having them as separate file systems (NetApp vs. ESS) means no concerns
> regarding varying architectures within the same fs serving or failover
> scheme. Mixing such as compute nodes would mean some performance
> differences across the different clients, but you haven't described your
> compute (NSD client) details.
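[Editor's note: Lyle's first observation, remote mount instead of merging, uses the standard Scale multicluster mechanism. A hedged sketch of the usual command sequence follows; the cluster names, contact nodes, filesystem names and mount point are hypothetical placeholders, not taken from the thread.]

```shell
# --- On the ESS (owning) cluster ---
# Generate an authentication key pair and require authenticated connections.
# (mmauth update needs GPFS down cluster-wide.)
mmauth genkey new
mmauth update . -l AUTHONLY

# Register the accessing cluster's public key and grant it access to the fs.
mmauth add netapp.example.com -k netapp_cluster.pub
mmauth grant netapp.example.com -f /dev/essfs

# --- On the NetApp-based (accessing) cluster ---
# Register the remote cluster (contact nodes) and its public key,
# then define and mount the remote filesystem.
mmremotecluster add ess.example.com -n gssio1-hs,gssio2-hs -k ess_cluster.pub
mmremotefs add essfs -f /dev/essfs -C ess.example.com -T /gpfs/essfs
mmmount essfs -a
```

This keeps the two clusters, versions and architectures administratively separate while still giving the NSD clients a path to the ESS capacity, at the cost of two namespaces rather than the single one Alvise asked for.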
>
> Lyle
>
> ----- Original message -----
> From: "Tomer Perry" <[email protected]>
> Sent by: [email protected]
> To: gpfsug main discussion list <[email protected]>
> Cc:
> Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a
>   non-GNR cluster
> Date: Tue, Dec 3, 2019 10:03 AM
>
> Hi,
>
> Actually, I believe that GNR is not a limiting factor here.
> mmexportfs and mmimportfs (man mm??portfs) will export/import the GNR
> configuration as well:
> "If the specified file system device is a IBM Spectrum Scale RAID-based
> file system, then all affected IBM Spectrum Scale RAID objects will be
> exported as well. This includes recovery groups, declustered arrays,
> vdisks, and any other file systems that are based on these objects. For
> more information about IBM Spectrum Scale RAID, see *IBM Spectrum Scale
> RAID: Administration*."
>
> OTOH, I suspect that due to the version mismatch it wouldn't work, since I
> would assume that the cluster config version is too high for the
> NetApp-based cluster.
> I would also suspect that the filesystem version on the ESS will be
> different.
>
> Regards,
>
> Tomer Perry
> Scalable I/O Development (Spectrum Scale)
> email: [email protected]
> 1 Azrieli Center, Tel Aviv 67021, Israel
> Global Tel: +1 720 3422758
> Israel Tel: +972 3 9188625
> Mobile: +972 52 2554625
>
>
> From: "Olaf Weiser" <[email protected]>
> To: gpfsug main discussion list <[email protected]>
> Date: 03/12/2019 16:54
> Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a
>   non-GNR cluster
> Sent by: [email protected]
> ------------------------------
>
> Hello,
> "merging" two different GPFS clusters into one is not possible. For sure
> you can do "nested" mounts, but that's most likely not what you want to do.
>
> If you want to add a GL2 (or any other ESS) to an existing (other)
> cluster, you can't preserve the ESS's RG definitions...
> You need to create the RGs after adding the IO nodes to the existing
> cluster...
>
> So if you've got a new ESS (no data on it), simply unconfigure its
> cluster, add the nodes to your existing cluster, and then start
> configuring the RGs.
>
>
> From: "Dorigo Alvise (PSI)" <[email protected]>
> To: "[email protected]"
>   <[email protected]>
> Date: 12/03/2019 09:35 AM
> Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR
>   cluster
> Sent by: [email protected]
> ------------------------------
>
> Hello everyone,
> I have:
> - A NetApp system with hardware RAID
> - Spectrum Scale 4.2.3-13 running on top of the NetApp
> - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)
>
> What I need to do is to merge the GL2 into the other GPFS cluster (running
> on the NetApp) without losing, of course, the RecoveryGroup configuration,
> etc.
>
> I'd like to ask the experts:
> 1. whether it is feasible, considering the difference in the GPFS versions
>    and the architecture differences (x86_64 vs. POWER);
> 2. if yes, whether anyone has already done something like this and what is
>    the suggested strategy;
> 3. finally: is there any documentation dedicated to this, or at least
>    inspiring the correct procedure?
>
> Thank you very much,
>
> Alvise Dorigo
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
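[Editor's note: the add-to-cluster route suggested at the top of the thread can be sketched end to end. This is a hedged outline, not a verified procedure: node names, the stanza file and the filesystem name are hypothetical, and as noted in the thread gssgenclusterrgs may restart GPFS cluster-wide.]

```shell
# 1. On the GL2: tear down the ESS's own single-purpose cluster definition.
#    (Exact cleanup depends on your setup; destroys config, not pdisk data.)
mmshutdown -a
mmdelnode -a

# 2. On the EMS: add the IO nodes to the existing NetApp-based cluster.
gssaddnode -N gssio1-hs --cluster-node netapp-node --nodetype gss --accept-license
gssaddnode -N gssio2-hs --cluster-node netapp-node --nodetype gss --accept-license

# 3. Create the recovery groups (beware: may trigger "mmshutdown -a"
#    cluster-wide if it decides to change config settings).
gssgenclusterrgs -G gss_ppc64 --suffix=-hs

# 4. Create vdisks/NSDs from a stanza file and add them to the existing
#    filesystem; the stanza can place the new NSDs in a dedicated storage
#    pool (e.g. pool=essPool) to keep the heterogeneous storage separate.
mmcrvdisk -F vdisk.stanza
mmcrnsd -F vdisk.stanza
mmadddisk existing_fs -F vdisk.stanza
```

Note that step 2 requires the 4.2.3 cluster to accept 5.0.x nodes; coexistence is allowed, but the resulting filesystem stays at the older format level until upgraded.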
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
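[Editor's note: for completeness, Tomer's mmexportfs/mmimportfs route would look roughly like the following. Hedged sketch only; the device and export file names are hypothetical, and as Tomer notes the 5.0.x-vs-4.2.3 version mismatch will most likely make the import fail.]

```shell
# --- On the ESS cluster: export the filesystem plus its GNR objects ---
# (recovery groups, declustered arrays and vdisks go into the export file)
mmumount essfs -a
mmexportfs essfs -o essfs.export

# --- On the NetApp-based cluster: import it ---
# Expect this to be rejected if the exported config/filesystem version is
# higher than what the 4.2.3 cluster understands.
mmimportfs essfs -i essfs.export
```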
