We are planning a 5.0.x upgrade onto new hardware to make use of the new 5.x 
GPFS features.
The goal is to use up to four NSD nodes for metadata, each with six NVMe
drives (still to be determined whether we use Intel VROC for RAID 5 or
RAID 1, or just straight disks).

So, a few questions:
Has anyone run the system pool on a shared-nothing cluster?  How did you set it up?
With default metadata replication set to 3, can you make effective use of
four NSD nodes?
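
(For reference, I'm assuming the file system would be created along these
lines, with fs1 and nsd.stanza as placeholder names:

    # build the NSDs from a stanza file, then create the file system
    # with default and maximum metadata replicas both set to 3
    mmcrnsd -F nsd.stanza
    mmcrfs fs1 -F nsd.stanza -m 3 -M 3

The -M 3 is there because the maximum metadata replicas must be at least
as large as the default.)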
How would one design the location vectors and failure groups so that the system 
metadata is
spread evenly across the four servers?
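
For concreteness, this is the sort of stanza layout I have in mind: one
failure group per server, repeated for each of the six NVMe drives per
node.  Hostnames, NSD names, and device paths below are just placeholders:

    %nsd:
      nsd=md_node1_nvme0
      device=/dev/nvme0n1
      servers=node1
      usage=metadataOnly
      failureGroup=1
      pool=system

    %nsd:
      nsd=md_node2_nvme0
      device=/dev/nvme0n1
      servers=node2
      usage=metadataOnly
      failureGroup=2
      pool=system

    (and likewise failureGroup=3 on node3, failureGroup=4 on node4)

If I understand replication correctly, each metadata block's three
replicas would then land on three of the four failure groups, so each
server would hold roughly three quarters of the metadata on average.  Is
that the right way to think about it, or do shared-nothing setups want the
FPO-style topology vectors here instead of simple failure group numbers?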

Thanks,
 — ddj
Dave Johnson
