I'm looking for "best practices" for setting up backups for a (1.5 TB) GPFS
filesystem.  There is a GPFS Redbook (SG24-5610) that includes benchmark info,
but I've not found answers to a few issues.

-We have two SP nodes that can host GPFS, and we can fail over GPFS to run from
a partner node (I think this uses VSD failover, etc.).  Disk is SSA-shared.
-We share the GPFS filesystem across 14 client nodes in some cases, 4 nodes in
other cases.
-The Redbook seems to suggest there's no benefit to running TSM clients on the
GPFS hosts, but there IS a benefit (for small files, at least) to running
multiple TSM clients on multiple client nodes.

So...from a TSM perspective, there are a few thoughts:

-Let's assume that GPFS filesystems are considered local (I do seem to be
backing them up multiple times today!), so using "domain all-local" won't help.
(I have a PMR open to confirm this.)
-Therefore, I should exclude the GPFS directories on all client nodes EXCEPT
those nodes that I want to do the backup work (let's assume 4).
-On only those 4 client nodes, we'll include the GPFS filesystem.  I expect
I'll need to use virtual mount points to back up different portions of one
filesystem via different clients, right?  That means that to do restores,
I'll need to keep track of which portions of this GPFS filesystem have been
backed up by which TSM clients.  It also means that if I need to rebalance
(change the sizes of the portions), some data may become "owned" by a
different TSM client, and I'll (ick!) need to sort that out somehow?
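To make the split above concrete, here is a sketch of the client option files I have in mind.  The node names, server name, and /gpfs subdirectory layout are hypothetical, and the exact stanza placement should be double-checked against your TSM client level:

```
* --- On the ~10 client nodes that should NOT back up GPFS ---
* (in the include-exclude file referenced from dsm.sys)
EXCLUDE.FS  /gpfs

* --- On designated backup node 1 of 4 ---
* dsm.sys: present one portion of the filesystem as its own
* "filespace" via a virtual mount point (path is hypothetical)
SErvername  tsmserv
   ...
   VIRTUALMountpoint  /gpfs/projA

* dsm.opt: restrict incremental backup to that portion only
DOMain  /gpfs/projA
```

Each of the 4 backup nodes would get its own VIRTUALMountpoint/DOMain pair covering a different subtree, which is exactly why restores (and any rebalancing of the subtrees) require tracking which node owns which filespace on the TSM server.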

Has anyone done this kind of setup (successfully) before?  There doesn't seem
to be a lot of documentation available.  Even good old "adsm.org" only comes up
with one hit on a search for "gpfs".

Are there any "best practices" that others can share on this topic?

Thanks...JRS.
