I am happy to, Jaime.

I will contact Peter and Robert directly to see if they are interested.

Kindest regards,
Paul

Paul Ward
TS Infrastructure Architect
Natural History Museum
T: 02079426450
E: [email protected]


-----Original Message-----
From: gpfsug-discuss <[email protected]> On Behalf Of Jaime 
Pinto
Sent: Tuesday, April 25, 2023 2:27 PM
To: [email protected]
Subject: Re: [gpfsug-discuss] [EXTERNAL] mmbackup vs SOBAR

I would be very interested in this session if I could attend it remotely.
Thanks
Jaime

On 4/25/2023 07:55:59, Paul Ward wrote:
> Hi Peter and Robert,
>
> Sorry for the delayed reply; I only occasionally check the mailing list.
> I'm happy to have an MS Teams (or other platform) call about the setup you are 
> talking about, as we've just decommissioned that kind of environment.
>
> Spectrum Scale
> Spectrum Protect with HSM, using a dual-robot high-density tape library
> with an off-site copy
> SOBAR - (in theory) implemented.
>
> Kindest regards,
> Paul
>
> Paul Ward
> TS Infrastructure Architect
> Natural History Museum
> T: 02079426450
> E: [email protected]
>
>
> -----Original Message-----
> From: gpfsug-discuss <[email protected]> On Behalf Of
> Peter Childs
> Sent: Thursday, March 9, 2023 4:08 PM
> To: [email protected]
> Subject: Re: [gpfsug-discuss] [EXTERNAL] mmbackup vs SOBAR
>
> I've been told "you really should be using SOBAR" a few times, but never 
> really understood how to do so or the steps involved. I feel sure there should 
> be some kind of white paper. So far I've been thinking of setting up some kind 
> of test system, but I get a little lost on where to start (and lack the time).
>
> We currently use mmbackup to two servers, using `--tsm-servers
> TSMServer1,TSMServer2`, to keep two independent backups. This works
> nicely until you lose a tape: restoring that tape is going to be a
> nightmare (read: rebuild the whole shadow database).
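For reference, the dual-server setup described above can be driven by a single mmbackup invocation; a minimal sketch, with a hypothetical filesystem path and server names, and the command only echoed here since running it needs a live GPFS cluster:

```shell
# Hypothetical filesystem path and Spectrum Protect server names.
FSNAME=/gpfs/archive
# mmbackup keeps a separate shadow database per server, so one policy
# scan feeds two independent backups.
CMD="mmbackup $FSNAME --tsm-servers TSMServer1,TSMServer2 -t incremental"
# On a real cluster you would run the command itself; here we just show it.
echo "$CMD"
```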
>
> We started with a copy pool until we filled our tape library up, then 
> swapped to Protect replication until we found this really did not work very 
> well (really slow, and missing files). IBM suggested we use mmbackup with 
> two servers and keep two independent backups, which is working very well for 
> us now.
>
> I think if I was going to implement SOBAR I'd want to run mmbackup as
> well, since SOBAR will not give you point-in-time recovery or partial
> recovery and is really only a disaster solution. I'd also probably
> want three copies on tape: one via SOBAR, and two via mmbackup, either
> as two backups or via a copy pool.
>
> I'm currently thinking of playing with HSM and SOBAR on a test system, but 
> have not started yet... Maybe a talk at the next UG would be helpful on 
> backups; I'm not sure if I want to do one, or if we can find an "expert".
>
> Peter Childs
>
>
>
> ________________________________________
> From: gpfsug-discuss <[email protected]> on behalf of
> Robert Horton <[email protected]>
> Sent: Thursday, March 9, 2023 3:44 PM
> To: [email protected]
> Subject: [EXTERNAL] [gpfsug-discuss] mmbackup vs SOBAR
>
> Hi Folks,
>
> I'm setting up a filesystem for "archive" data, which will be aggressively 
> tiered to tape using Spectrum Protect (or whatever it's called today) 
> Space Management. I would like to have two copies on tape for a) reading back 
> the data on demand, b) recovering accidentally deleted files etc., and c) disaster 
> recovery of the whole filesystem if necessary.
>
> My understanding is:
>
>    1.  Backup and migration are completely separate things to Spectrum 
> Protect. You can't "restore" from a migrated file, nor do a DMAPI read from a 
> backup.
>    2.  A SOBAR backup would enable the namespace to be restored if the 
> filesystem were lost, but it needs all files to be (pre-)migrated and needs the 
> filesystem blocksize etc. to match.
>    3.  A SOBAR backup isn't much help for restoring individual (deleted) 
> files. There is a dsmmigundelete utility that restores individual stubs, but it 
> doesn't restore directories etc., so you really want a separate backup.
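Point 2 implies keeping everything (pre-)migrated at all times. In Spectrum Scale ILM policy terms that is typically expressed with the third THRESHOLD argument (the premigrate percentage); a sketch only, with the pool names and the external-pool EXEC assumed:

```
/* Hypothetical pool names and HSM exec script.
   THRESHOLD(high, low, premigrate): start migrating at 90% full,
   stop at 80%, but keep premigrating down to 0% so every file
   has a tape copy ready for a SOBAR restore. */
RULE EXTERNAL POOL 'hsm' EXEC '/usr/lpp/mmfs/bin/mmpolicyExec-hsm.sample'
RULE 'premig' MIGRATE FROM POOL 'system'
  THRESHOLD(90, 80, 0)
  WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
  TO POOL 'hsm'
```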
>
> My thinking is to do backups to one (non-replicated) tape pool, migrate to 
> another, and run mmimgbackup regularly. I'd then have a path to do a full 
> restore if either set of tapes were lost, although it seems rather messy, and 
> it's a bit of a pain that SP needs to read everything twice.
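That plan boils down to two regular steps; a hedged sketch, with the filesystem and server names hypothetical and the commands only echoed, since both need a live cluster:

```shell
FSNAME=gpfs_archive  # hypothetical filesystem device name
# 1. Regular mmbackup to the non-replicated backup pool's server.
STEP1="mmbackup /$FSNAME --tsm-servers TSMServer1"
# 2. SOBAR metadata image of the filesystem, restorable later with
#    mmimgrestore (assuming all data is premigrated).
STEP2="mmimgbackup $FSNAME"
echo "$STEP1"
echo "$STEP2"
```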
>
> So... have I understood that correctly and does anyone have any better / 
> alternative suggestions?
>
> Thanks,
> Rob
>
> Robert Horton | Scientific Computing Infrastructure Lead
> The Institute of Cancer Research | 237 Fulham Road, London, SW3 6JB
> T +44 (0) 20 7153 5350 | E [email protected] | W http://www.icr.ac.uk
> Twitter @ICR_London | Facebook http://www.facebook.com/theinstituteofcancerresearch
> Making the discoveries that defeat cancer
>
>
> The Institute of Cancer Research: Royal Cancer Hospital, a charitable Company 
> Limited by Guarantee, Registered in England under Company No. 534147 with its 
> Registered Office at 123 Old Brompton Road, London SW7 3RP.
>
> This e-mail message is confidential and for use by the addressee only. If the 
> message is received by anyone other than the addressee, please return the 
> message to the sender by replying to it and then delete the message from your 
> computer and network.
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
>
>

---
Jaime Pinto - Storage Analyst
SciNet HPC Consortium - http://www.scinet.utoronto.ca/
University of Toronto


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org

