One additional question to ask is: what are your long-term plans for the 4.2.3 
Spectrum Scale cluster?  Do you expect to upgrade it to version 5.x (hopefully 
before 4.2.3 goes out of support)?

Also, I assume your NetApp hardware is their standard block storage, 
perhaps based on their standard 4U60 shelves daisy-chained together?

Daniel
 
_________________________________________________________
Daniel Kidger
IBM Technical Sales Specialist
Spectrum Scale, Spectrum Discover and IBM Cloud Object Storage

+44-(0)7818 522 266 
[email protected]
                



> On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) <[email protected]> wrote:
> 
> 
> Thanks, Anderson, for the material. In principle our idea was to scratch the 
> filesystem on the GL2, put its NSDs in a dedicated pool, and then merge them into 
> the filesystem, which would remain on V4. I do not want to create a FS on the 
> GL2 but rather use its space to expand the other cluster's filesystem.
> 
> 
> 
>    A
> 
> From: [email protected] 
> <[email protected]> on behalf of Anderson Ferreira 
> Nobre <[email protected]>
> Sent: Wednesday, December 4, 2019 3:07:18 PM
> To: [email protected]
> Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
>  
> Hi Dorigo,
>  
> From the point of view of cluster administration, I don't think it's a good idea 
> to have a heterogeneous cluster. There are too many differences between V4 and 
> V5, and most probably you won't be able to take advantage of many of the V5 
> enhancements. One example is the new filesystem layout in V5: at the moment, 
> the only way to migrate is to create a new filesystem in V5 with the new layout 
> and migrate the data. That is inevitable. I have seen clients say that 
> they don't need all those enhancements, but the truth is that when you face a 
> performance issue that is only addressable with the new features, someone will 
> ask why we didn't consider that at the beginning.
>  
> Use this time to review whether it would be better to change the block size of 
> your filesystem. There's a script called filehist in 
> /usr/lpp/mmfs/samples/debugtools that creates a histogram of the file sizes in 
> your current filesystem. Here's a link with additional information:
> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata
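> 
> For illustration, something along these lines (a rough sketch: "fs1" and the 
> mount path are placeholders, and the exact filehist arguments may vary by 
> release, so check the comments at the top of the script first):
> 
>     # Show the current filesystem block size
>     mmlsfs fs1 -B
> 
>     # Build a file-size histogram from the samples directory
>     cd /usr/lpp/mmfs/samples/debugtools
>     ./filehist /gpfs/fs1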
>  
> Different RAID configurations can also bring unexpected performance behavior, 
> unless you are planning to create different pools and use ILM to manage the 
> files across those pools.
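> 
> For example, a minimal placement policy (pool, fileset, and filesystem names 
> are placeholders here) could direct new files into the GL2 pool while 
> everything else stays in the system pool:
> 
>     cat > /tmp/placement.pol <<'EOF'
>     /* hypothetical rules: place files of the 'scratch' fileset on the GL2 pool */
>     RULE 'toGL2' SET POOL 'gl2pool' FOR FILESET ('scratch')
>     RULE 'default' SET POOL 'system'
>     EOF
>     mmchpolicy fs1 /tmp/placement.pol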
>  
> One last thing, it's a good idea to follow the recommended levels for 
> Spectrum Scale:
> https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning
>  
> Anyway, you are the system administrator; you know better than anyone how 
> complex it is to manage this cluster.
>  
> Abraços / Regards / Saludos,
>  
> 
> Anderson Nobre
> Power and Storage Consultant
> IBM Systems Hardware Client Technical Team – IBM Systems Lab Services
> 
> 
>  
> Phone: 55-19-2132-4317
> E-mail: [email protected]     
>  
>  
> ----- Original message -----
> From: "Dorigo Alvise (PSI)" <[email protected]>
> Sent by: [email protected]
> To: "[email protected]" <[email protected]>
> Cc:
> Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR 
> cluster
> Date: Wed, Dec 4, 2019 06:44
>  
> Thank you all for the answers. Let me recap my replies to your questions:
> 
>  
> 
> - The purpose is not to merge clusters "per se"; it is to add the GL2's 700 TB 
>   of raw space to the current filesystem provided by the GPFS/NetApp system 
>   (which is running out of free space). Of course I am well aware of the 
>   heterogeneity of this hypothetical system, so the GL2's NSDs would go into a 
>   special pool (see the stanza sketch after this list); but in the end I need 
>   a single namespace for files.
> - I do not want to do the opposite (merging the GPFS/NetApp into the GL2 
>   cluster), because the former is in production and I cannot schedule long 
>   downtimes.
> - All systems have proper licensing, of course. What does it mean that I could 
>   lose IBM support? If the support is for a failing disk drive, I do not think 
>   so; if it is for some "strange" GPFS behaviour, I can probably understand.
> - The NSDs (in the NetApp system) are in their roles: what do you mean exactly? 
>   There are RAID sets attached to servers that act as NSD servers, together 
>   with their attached LUNs.
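> 
> For illustration, the stanza mentioned above might look roughly like this 
> (NSD, server, pool, and filesystem names are all placeholders; on an ESS the 
> NSDs would first be created as vdisks):
> 
>     # /tmp/gl2disks.stanza -- hypothetical NSD stanza for the GL2 disks
>     %nsd: nsd=gl2_nsd01
>       servers=essio1,essio2
>       usage=dataOnly
>       pool=gl2pool
> 
>     # Add the new NSDs (and, implicitly, the new pool) to the existing filesystem
>     mmadddisk fs1 -F /tmp/gl2disks.stanza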
>  
>    Alvise
> From: [email protected] 
> <[email protected]> on behalf of Lyle Gayne 
> <[email protected]>
> Sent: Tuesday, December 3, 2019 8:30:31 PM
> To: [email protected]
> Cc: [email protected]
> Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
>  
> For:
>  
> - A NetApp system with hardware RAID
> - SpectrumScale 4.2.3-13 running on top of the NetApp <--- Are these NSD 
> servers in their GPFS roles (where Scale "runs on top")?
> - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)
> 
> What I need to do is to merge the GL2 into the other GPFS cluster (running on 
> the NetApp) without losing, of course, the RecoveryGroup configuration, etc.
> 
> I'd like to ask the experts:
> 1.        whether it is feasible, considering the differences in the GPFS 
> versions and architectures (x86_64 vs. POWER);
> 2.        if yes, whether anyone has already done something like this, and 
> what the suggested best strategy is;
> 3.        finally: is there any documentation dedicated to this, or at least 
> something suggesting the correct procedure?
>  
> ......
> Some observations:
>  
>  
> 1) Why do you want to MERGE the GL2 into a single cluster with the rest of the 
> cluster, rather than simply allowing remote mount of the ESS filesystem by the 
> other GPFS (NSD client) nodes?
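> 
> For reference, the usual remote-mount setup looks roughly like this (cluster, 
> node, filesystem, and key-file names are placeholders):
> 
>     # On the owning (ESS) cluster: enable authentication and grant access
>     mmauth genkey new
>     mmauth update . -l AUTHONLY
>     mmauth add netapp.cluster -k /tmp/netapp_id_rsa.pub
>     mmauth grant netapp.cluster -f fs1
> 
>     # On the accessing (NetApp/4.2.3) cluster: register the remote cluster and fs
>     mmremotecluster add ess.cluster -n essio1,essio2 -k /tmp/ess_id_rsa.pub
>     mmremotefs add rfs1 -f fs1 -C ess.cluster -T /gpfs/rfs1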
>  
> 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our 
> coexistence rules.
>  
> 3) Mixing x86 and Power, especially as NSD servers, should pose no issues.  
> Having them serve separate file systems (NetApp vs. ESS) means no concerns 
> regarding varying architectures within the same fs serving or failover 
> scheme.  Mixing them as compute nodes would mean some performance differences 
> across the different clients, but you haven't described your compute (NSD 
> client) details.
>  
> Lyle
> ----- Original message -----
> From: "Tomer Perry" <[email protected]>
> Sent by: [email protected]
> To: gpfsug main discussion list <[email protected]>
> Cc:
> Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR 
> cluster
> Date: Tue, Dec 3, 2019 10:03 AM
>  
> Hi,
> 
> Actually, I believe that GNR is not a limiting factor here. 
> mmexportfs and mmimportfs (man mm??portfs) will export/import the GNR 
> configuration as well:
> "If the specified file system device is a IBM Spectrum Scale RAID-based file 
> system, then all affected IBM Spectrum Scale RAID objects will be exported as 
> well. This includes recovery groups, declustered arrays, vdisks, and any 
> other file systems that are based on these objects. For more information 
> about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration. "
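> 
> In outline (the filesystem name and export-file path are placeholders):
> 
>     # On the GL2 cluster, after unmounting the filesystem everywhere
>     mmexportfs fs_gl2 -o /tmp/fs_gl2.export
> 
>     # On the target cluster, once the GL2 nodes and disks have been moved over
>     mmimportfs fs_gl2 -i /tmp/fs_gl2.export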
> 
> OTOH, I suspect that due to the version mismatch it wouldn't work, since I 
> would assume that the cluster config version is too high for the NetApp-based 
> cluster.
> I would also suspect that the filesystem version on the ESS will be different.
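> 
> Both are easy to check up front (the filesystem name is a placeholder):
> 
>     # Committed cluster level -- compare the output on the two clusters
>     mmlsconfig minReleaseLevel
> 
>     # On-disk filesystem format version
>     mmlsfs fs1 -V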
> 
> 
> Regards,
> 
> Tomer Perry
> Scalable I/O Development (Spectrum Scale)
> email: [email protected]
> 1 Azrieli Center, Tel Aviv 67021, Israel
> Global Tel:    +1 720 3422758
> Israel Tel:      +972 3 9188625
> Mobile:         +972 52 2554625
> 
> 
> 
> 
> From:        "Olaf Weiser" <[email protected]>
> To:        gpfsug main discussion list <[email protected]>
> Date:        03/12/2019 16:54
> Subject:        [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a 
> non-GNR cluster
> Sent by:        [email protected]
> 
> 
> 
> Hello,
> "merging" two different GPFS clusters into one is not possible;
> for sure you can do "nested" mounts, but that's most likely not what you 
> want to do.
> 
> If you want to add a GL2 (or any other ESS) to an existing (other) 
> cluster, you can't preserve the ESS's RG definitions;
> you need to create the RGs after adding the IO nodes to the existing 
> cluster.
> 
> So if you have a new ESS (no data on it), simply unconfigure its cluster, 
> add the nodes to your existing cluster, and then start configuring the RGs.
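> 
> In rough strokes (a sketch only: node names, RG names, and stanza files are 
> placeholders, and on a real ESS you would normally drive this through the ESS 
> deployment tooling and vdisk stanza files rather than by hand):
> 
>     # From the existing cluster, add the ESS IO nodes
>     mmaddnode -N essio1,essio2
>     mmchlicense server --accept -N essio1,essio2
> 
>     # Then recreate the recovery groups on those servers
>     mmcrrecoverygroup rgL -F rgL.stanza --servers essio1,essio2
>     mmcrrecoverygroup rgR -F rgR.stanza --servers essio2,essio1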
> 
> 
> 
> 
> 
> From:        "Dorigo Alvise (PSI)" <[email protected]>
> To:        "[email protected]" 
> <[email protected]>
> Date:        12/03/2019 09:35 AM
> Subject:        [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a 
> non-GNR cluster
> Sent by:        [email protected]
> 
> 
> 
> Hello everyone,
> I have:
> - A NetApp system with hardware RAID
> - SpectrumScale 4.2.3-13 running on top of the NetApp
> - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)
> 
> What I need to do is to merge the GL2 into the other GPFS cluster (running on 
> the NetApp) without losing, of course, the RecoveryGroup configuration, etc.
> 
> I'd like to ask the experts:
> 1.        whether it is feasible, considering the differences in the GPFS 
> versions and architectures (x86_64 vs. POWER);
> 2.        if yes, whether anyone has already done something like this, and 
> what the suggested best strategy is;
> 3.        finally: is there any documentation dedicated to this, or at least 
> something suggesting the correct procedure?
> 
> Thank you very much,
> 
>   Alvise Dorigo
> 
> 
> 
> 
>  
>  
>  
> 
Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
