Hi, I don't think GPFS is a good choice for your setup. Did you consider GlusterFS? It is used at the Max Planck Institute in Dresden for HPC processing of molecular biology data. They have a similar setup: tens (hundreds) of computers with shared local storage in GlusterFS. But you will need a 10Gb network.
Michal

On 12. 3. 2018 at 16:23, Lukas Hejtmanek wrote:
> On Mon, Mar 12, 2018 at 11:18:40AM -0400, valdis.kletni...@vt.edu wrote:
>> On Mon, 12 Mar 2018 15:51:05 +0100, Lukas Hejtmanek said:
>>> I don't think 5 or more data/metadata replicas are practical here. On the
>>> other hand, multiple node failures are something really to be expected.
>> Umm.. do I want to ask *why*, out of only 60 nodes, multiple node
>> failures are an expected event - to the point that you're thinking
>> about needing 5 replicas to keep things running?
> As of my experience with cluster management, we have multiple nodes down on a
> regular basis (HW failure, SW maintenance and so on).
>
> I'm basically thinking that 2-3 replicas might not be enough while 5 or more
> are becoming too expensive (both in disk space and in required bandwidth, this
> being scratch space - high I/O load expected).
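As a rough way to reason about the 2-3 vs. 5 replica trade-off above: if replicas of a block are placed on distinct nodes chosen uniformly at random (a simplifying assumption, not how GlusterFS or GPFS actually place data), the chance that all replicas of a given block sit on simultaneously failed nodes is C(f, r) / C(n, r) for f failed nodes out of n. A minimal sketch:

```python
from math import comb

def p_block_unavailable(n_nodes: int, n_failed: int, replicas: int) -> float:
    """Probability that every replica of one block lands on a failed node,
    assuming replicas are placed uniformly at random on distinct nodes.
    This is an illustrative model only, not the real placement policy."""
    if n_failed < replicas:
        return 0.0  # fewer failures than replicas: at least one copy survives
    return comb(n_failed, replicas) / comb(n_nodes, replicas)

# 60-node cluster with 3 nodes down at once:
for r in (2, 3, 5):
    print(r, p_block_unavailable(60, 3, r))
```

Under this toy model, even 3 replicas already make per-block unavailability very small for a handful of concurrent failures, which is one way to frame whether 5 replicas buys enough extra safety to justify the disk and bandwidth cost.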
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss