Hi, I'm on my phone, so sorry for any typos. 

I really think you should look into Spectrum Scale Erasure Code Edition (ECE) 
for this. 

Sure, you could do RAID on each node as you mention here, but that sounds like 
a lot of wasted storage capacity to me. Not to mention that with ECE you get 
other goodies such as end-to-end checksums and rapid rebuilds, among others. 
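
That said, if you do end up doing plain NSDs on per-node RAID (or straight 
disks), the usual way to spread system-pool metadata evenly over the four 
servers is to give each server its own failure group in the NSD stanza file. 
A minimal sketch, with made-up node and device names:

    %nsd:
      device=/dev/nvme0n1
      nsd=md_node1_1
      servers=node1
      usage=metadataOnly
      failureGroup=1
      pool=system
    %nsd:
      device=/dev/nvme0n1
      nsd=md_node2_1
      servers=node2
      usage=metadataOnly
      failureGroup=2
      pool=system
    # ...repeat for node3/node4 with failureGroup=3/4,
    # then create the NSDs with: mmcrnsd -F stanzas.txt

With metadata replication at 3, GPFS places the three replicas in three 
different failure groups, so all four servers should see use as replica 
placement rotates across them.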

Four servers is the minimum requirement for ECE (with a 4+3P erasure code), 
and off the top of my head it is 12 disks per recovery group; you are fine on 
both requirements. 
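
From memory, the ECE setup over the four nodes would look roughly like this 
with mmvdisk (the node names, node class, recovery group, vdisk set, block 
size, and set size below are all placeholders; double-check the flags against 
the docs for your release):

    mmvdisk nodeclass create --node-class ece_nc -N node1,node2,node3,node4
    mmvdisk server configure --node-class ece_nc --recycle one
    mmvdisk recoverygroup create --recovery-group rg1 --node-class ece_nc
    mmvdisk vdiskset define --vdisk-set vs1 --recovery-group rg1 \
        --code 4+3p --block-size 4m --set-size 80%
    mmvdisk vdiskset create --vdisk-set vs1
    mmvdisk filesystem create --file-system fs1 --vdisk-set vs1

The 4+3P code tolerates three concurrent failures, which is why four servers 
with enough drives each is the floor.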

There is a presentation on ECE on the user group web page, from the May 2019 
London event where we talked about it. 

And the IBM product page: 
https://www.ibm.com/support/knowledgecenter/STXKQY_ECE_5.0.3/com.ibm.spectrum.scale.ece.v5r03.doc/b1lece_intro.htm
--
Cheers

> On 29 Jul 2019, at 19:06, David Johnson <[email protected]> wrote:
> 
> We are planning a 5.0.x upgrade onto new hardware to make use of the new 5.x 
> GPFS features.
> The goal is to use up to four NSD nodes for metadata, each one with 6 NVMe 
> drives (to be determined whether we use Intel VROC for RAID 5 or RAID 1, or 
> just straight disks).  
> 
> So questions — 
> Has anyone done system pool on shared nothing cluster?  How did you set it up?
> With default metadata replication set at 3, can you make use of four NSD 
> nodes effectively?
> How would one design the location vectors and failure groups so that the 
> system metadata is
> spread evenly across the four servers?
> 
> Thanks,
> — ddj
> Dave Johnson

Unless stated otherwise above:
Oy IBM Finland Ab
PL 265, 00101 Helsinki, Finland
Business ID, Y-tunnus: 0195876-3 
Registered in Finland

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
