Hi Rob, we in Jülich have long experience using GPFS & HSM. We don't use SOBAR (which is for disaster recovery) but mmbackup (for single-file restore).
In principle it works fine, but there are two problems.

The first is when a user renames a file or directory: renamed files must be backed up again, which triggers a recall (an inline tape copy). The same happens when ACLs are modified on files. This works, but at scale (1,000 or more files affected; we store >40 PB in these file systems) the inline tape copies for the backup are not tape-optimized and take a long time.
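One way to mitigate the per-file inline tape copies might be to bulk-recall the affected files before the backup runs, so mmbackup reads them from disk. The sketch below is illustrative only and not verified on a live cluster: the filesystem path, policy file and list names are placeholders, the `KB_ALLOCATED = 0` test is only a common heuristic for fully migrated stubs, and `dsmrecall -filelist` (tape-optimized recall) is assumed to be available in the installed Spectrum Protect for Space Management release.

```shell
# Illustrative sketch, not a verified procedure. Names are placeholders.

# 1. Build a list of migrated (stub) files under the renamed directory.
#    KB_ALLOCATED = 0 with FILE_SIZE > 0 is a heuristic for fully
#    migrated files; adjust for premigrated data.
cat > /tmp/listmig.pol <<'EOF'
RULE EXTERNAL LIST 'mig' EXEC ''
RULE 'find-migrated' LIST 'mig'
  WHERE KB_ALLOCATED = 0 AND FILE_SIZE > 0
EOF
mmapplypolicy /gpfs/archive/renamed_dir -P /tmp/listmig.pol -f /tmp/mig -I defer

# 2. Bulk recall via file list, so the HSM client can order reads by tape.
dsmrecall -filelist=/tmp/mig.list.mig

# 3. Run the regular backup; the changed files now read from disk.
mmbackup /gpfs/archive -t incremental
```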
If you want more details, you can contact me directly.

Stephan

On 3/9/23 16:44, Robert Horton wrote:
Hi Folks,

I’m setting up a filesystem for “archive” data which will be aggressively tiered to tape using the Spectrum Protect (or whatever it’s called today) Space Management. I would like to have two copies on tape for a) reading the data back on demand, b) recovering accidentally deleted files etc., and c) disaster recovery of the whole filesystem if necessary.

My understanding is:

1. Backup and migration are completely separate things to Spectrum Protect. You can’t “restore” from a migrated file, nor do a DMAPI read from a backup.
2. A SOBAR backup would enable the namespace to be restored if the filesystem were lost, but it needs all files to be (pre-)migrated and needs the filesystem blocksize etc. to match.
3. A SOBAR backup isn’t much help for restoring individual (deleted) files. There is a dsmmigundelete utility that restores individual stubs, but it doesn’t restore directories etc., so you really want a separate backup.

My thinking is to do backups to one (non-replicated) tape pool, migrate to another, and run mmimgbackup regularly. I’d then have a path to do a full restore if either set of tapes were lost, although it seems rather messy and it’s a bit of a pain that SP needs to read everything twice.

So… have I understood that correctly, and does anyone have any better / alternative suggestions?

Thanks,
Rob

Robert Horton | Scientific Computing Infrastructure Lead
The Institute of Cancer Research | 237 Fulham Road, London, SW3 6JB
T +44 (0) 20 7153 5350 | E [email protected] | W www.icr.ac.uk | Twitter @ICR_London

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
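Rob's proposed scheme could be sketched roughly as below. This is a hedged outline, not a verified runbook: the filesystem/device name `archive`, the policy file path, and the work directory are placeholders, and the disaster-recovery steps are only indicated in comments.

```shell
# Illustrative sketch of the two-tape-pool scheme. Names are placeholders.

# Regularly: incremental backup to the (non-replicated) backup tape pool.
mmbackup /gpfs/archive -t incremental

# Then (pre)migrate data to the HSM tape pool via a policy run.
mmapplypolicy /gpfs/archive -P /etc/gpfs/migrate.pol

# Regularly: capture a SOBAR metadata image of the filesystem.
mmimgbackup archive -g /gpfs/archive/.imgbackup

# Disaster-recovery path (sketch only): recreate the filesystem with
# matching blocksize/inode parameters, then restore the image, e.g.:
#   mmimgrestore archive <image>
# For individually deleted stubs, dsmmigundelete can recreate stub files,
# but directories must already exist -- hence the separate backup.
```

Note the trade-off Rob mentions: because backup and migration are independent in Spectrum Protect, every file is read from disk twice (once by the backup, once by the migration).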
--
Stephan Graf
Juelich Supercomputing Centre
Phone: +49-2461-61-6578 | Fax: +49-2461-61-6656
E-mail: [email protected]
WWW: http://www.fz-juelich.de/jsc/
