I inherited 2 separate gpfs machines from the previous admin.
All of the hardware and software is old, so I want to switch to
new servers, new disk arrays, and a new gpfs version.
Each machine has 4 gpfs filesystems and runs a TSM HSM client
that migrates data to tapes via its own, separate TSM server:
GPFS+HSM no 1 -> TSM server no 1 -> tapes
GPFS+HSM no 2 -> TSM server no 2 -> tapes
Migration is done by HSM (not GPFS policies).
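(Just to illustrate the policy-driven alternative mentioned in the
parentheses above: a minimal sketch of a threshold-based migration
rule applied with mmapplypolicy. The pool name, thresholds,
filesystem name and HSM interface script path are all made up,
this is not taken from my current config:)

  cat > /tmp/hsm.policy <<'EOF'
  /* external pool handled by the site's HSM interface script */
  RULE EXTERNAL POOL 'hsm' EXEC '/path/to/hsm/interface/script'
  /* push cold, large files to tape when the pool fills up */
  RULE 'toTape' MIGRATE FROM POOL 'system'
       THRESHOLD(90,80)
       WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
       TO POOL 'hsm' WHERE FILE_SIZE > 1048576
  EOF
  mmapplypolicy fs1 -P /tmp/hsm.policy -I test   # dry run first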
All filesystems are used for archiving results from the HPC
system and for other files (a kind of backup - don't ask...).
Data is written by users via nfs shares. There are 8 nfs
mount points corresponding to the 8 gpfs filesystems, but there
is no real reason for that 1:1 split.
4 of the filesystems are large and heavily used; the remaining
4 are almost unused.
The question is: how should I configure the new gpfs
infrastructure? My initial impression is that I should create a
GPFS cluster of 2+ nodes and export NFS using CES. The most
important question is how many filesystems I need. Maybe just 2?
Or how do I do this in a flexible way, so that I don't lock
myself into a stupid configuration?
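To make that concrete, here is a rough sketch of the direction I
have in mind - CES protocol nodes, fewer filesystems, and one
independent fileset plus NFS export per "area" instead of 8
separate filesystems. Node names, IPs, filesystem/fileset/export
names are all placeholders, not a tested recipe:

  # designate protocol nodes and enable NFS (CES)
  mmchnode --ces-enable -N proto1,proto2
  mmces address add --ces-ip 10.0.0.100,10.0.0.101
  mmces service enable NFS

  # one independent fileset per former filesystem/export
  mmcrfileset fs1 archive1 --inode-space new
  mmlinkfileset fs1 archive1 -J /gpfs/fs1/archive1

  # NFS export on the fileset junction
  mmnfs export add /gpfs/fs1/archive1 \
      --client "10.0.0.0/24(Access_Type=RW,Squash=root_squash)"

The idea would be that independent filesets give per-area quotas
and inode spaces without committing up front to a fixed number of
filesystems - but that's exactly what I'd like a sanity check on.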
ps. I will recall all data and copy it to the new
infrastructure. Yes, that's the way I want to do it. :)
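(Roughly how I picture that copy, on each old machine - recall
everything from tape first, then a plain rsync to the new
cluster; paths and the target host name are placeholders:)

  # recall migrated files back to disk (TSM HSM client)
  dsmrecall -Recursive /old_gpfs/fs1
  # copy to the new cluster, preserving ACLs/xattrs/hardlinks
  rsync -aHAX --numeric-ids /old_gpfs/fs1/ newnode:/gpfs/fs1/archive1/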
Pawel Dziekonski <pawel.dziekon...@wcss.pl>, http://www.wcss.pl
Wroclaw Centre for Networking & Supercomputing, HPC Department