Hi,

We have around 2 PB of GPFS here that users access only through the GPFS client
(used by an HPC cluster), but we will have to set up protocol nodes.


We will have to share GPFS data with ~1000 users, where each user will have a
different access pattern, meaning:


- some will do large I/O (e.g. store 1 TB files)

- some will read/write more than 10k files in a row

- others will do only sequential reads
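For reference, the kind of tuning knobs I was planning to experiment with look roughly like this. This is only a sketch: the node class and values are placeholders I made up for illustration, not tested recommendations.

```shell
# Assumptions: CES protocol nodes grouped in the built-in "cesNodes" node class;
# all values below are placeholders, not sizing recommendations.

# Larger pagepool for the large-file, sequential-I/O users
mmchconfig pagepool=32G -N cesNodes

# Bigger file and stat caches for the users touching 10k+ files in a row
mmchconfig maxFilesToCache=1000000 -N cesNodes
mmchconfig maxStatCache=2000000 -N cesNodes
```

I would be glad to hear which parameters actually matter most for protocol nodes in practice.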


I have already read the following wiki page:


https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Sizing%20Guidance%20for%20Protocol%20Node





But I am wondering whether anyone has recommendations regarding hardware sizing
and software tuning for such a situation?

Or better, whether someone already runs such a setup?


Thank you in advance,

Frank.


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
