Hi Stefan,
Since you're using TSM with GPFS, are you following IBM's current
integration instructions? What you describe sounds like a standard
TSM/GPFS backup use case.
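For the snapshot question further down, a minimal sketch of a snapshot-based mmbackup run might look like the following. The file system name, snapshot name, and node list are placeholders, not anything from your setup, and the -S snapshot option is only present in newer GPFS releases, so check the mmbackup man page for your version:

```shell
# Hypothetical sketch: incremental TSM backup of a GPFS file system
# from a snapshot. 'gpfs0', 'backupsnap', and the node names are
# placeholders, not taken from the original mail.

# Create a consistent point-in-time snapshot to back up from
mmcrsnapshot gpfs0 backupsnap

# Policy-engine-driven incremental backup, spread across several
# nodes; -S tells mmbackup to read file data from the snapshot
mmbackup gpfs0 -t incremental -N node1,node2,node3 -S backupsnap

# Remove the snapshot once the backup completes
mmdelsnapshot gpfs0 backupsnap
```

The point is that mmbackup uses the same fast inode scan as mmapplypolicy, which generally avoids the slow directory walk the BA client does on its own.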
For file system scans, I believe the policy engine scales roughly
linearly with the number of nodes you run it on. Can you add more
storage nodes, or run your policy scans across more of your existing
nodes?
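As a concrete illustration of spreading a scan across nodes, a hedged sketch (the file system name, node names, and work directory are placeholders; see the mmapplypolicy man page for the exact flags in your release):

```shell
# Hypothetical sketch: run a GPFS policy scan in parallel.
# 'gpfs0', the node names, and /gpfs0/tmp are placeholders.
#   -P        : policy rules file
#   -N        : nodes that share the scan and the resulting work
#   -g        : global work directory visible to all participating nodes
#   -I defer  : build the candidate file lists without executing actions
mmapplypolicy gpfs0 -P /etc/policy.rules -N node1,node2,node3,node4 \
    -g /gpfs0/tmp -I defer
```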
Regards,
Alex
On 12/10/13, 2:16 AM, Stefan Fritzsche wrote:
Dear gpfsug,
we are the SLUB, the Saxon State and University Library Dresden.
Our goal is to build a long-term preservation system. We use GPFS and
TSM with HSM integration to back up, migrate, and distribute the data
across two computing centers.
Currently, we make backups with the standard TSM backup-archive (BA) client.
Our pre-migration and migration run via the GPFS policy engine to find
all files that are in the state "persistent" and match some additional
rules. After the scan, we create a file list and premigrate the data
with dsmmigfs.
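For readers following along, a hedged sketch of what such a list rule could look like in the GPFS policy language. The extended attribute name 'user.state' is an assumption standing in for whatever actually marks your files as persistent:

```
/* Hypothetical sketch: list candidates for premigration.
   XATTR('user.state') = 'persistent' is an assumed extended
   attribute, not SLUB's actual "persistent" marker. */
RULE 'listPersistent'
  LIST 'premigrate'
  WHERE XATTR('user.state') = 'persistent'
    AND KB_ALLOCATED > 0
```

mmapplypolicy then writes out the matching file list, which can be handed to the dsmmigfs-based premigration step.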
A normal backup spends a long time scanning the whole GPFS file system,
so we are looking for a better way to perform the backups.
I know that I can also use the policy engine to drive the backup, but my
questions are:
How do I best perform backups with GPFS?
Does anyone use the mmbackup command in production, or mmbackup combined
with snapshots?
Does anyone have experience writing an application against the GPFS API
and/or DMAPI?
Thank you for your answers and proposals.
Best regards,
Stefan
--
[email protected]
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss