Re: Dumping and taping in parallel / yearly archive runs, WORM
> On Jan 13, 2019, at 4:20 AM, Stefan G. Weichinger wrote:
>
> On 10.01.19 at 20:41, Debra S Baddorf wrote:
>>
>>> On Jan 10, 2019, at 10:32 AM, Stefan G. Weichinger wrote:
>>>
>>> On 30.11.18 at 19:40, Debra S Baddorf wrote:
>>>
>>>> IFF you have the holding disk space, you might want these params:
>>>>
>>>>   flush-threshold-dumped 300     # (or perhaps only 150)  ## I have NOT tested this one
>>>>   flush-threshold-scheduled 100  ### However, all dumps will be flushed;
>>>>                                  ### none will be left on the holding disk.
>>>>   taperflush 0
>>>>   autoflush yes
>>>
>>> I am still trying to come up with a set of parameters to keep the latest
>>> lev0 backups in the holding disk *in addition* to having them on tape.
>>>
>>> Trying to accumulate a lev0 of every DLE on disk, to then switch over to
>>> the archive tapes and flush to them.
>>>
>>> As far as I understand, that would mean increasing taperflush and setting
>>> autoflush to NO, right?
>>>
>>> (only flush intentionally ...)
>>
>> As I understand it, "autoflush" only applies to files left over from
>> previous runs.
>> Taperflush might do what you want, if you set it to 1000??? But I'm
>> doubtful.
>> But either one means writing to tape AND removing from disk.
>>
>> Other readers: is there a newer means to leave dumps on disk AND tape them?
>> I don't think the above parameters are able to do that at all.
>> But I've heard snippets that make me think other params in the latest
>> version can do it.
>>
>> Stefan - we need input from others who have done this (leaving dumps on
>> disk AND on tape). If nobody answers, maybe start a new "thread" with this
>> actual question in the subject line.
>
> Sure.
>
> I think that "vaulting" with amvault might help here ... although I still
> struggle with getting the picture.
>
> Especially as I only have one tape drive, I can't copy from one tape to
> another.
>
> Back then I tried to define two storages, one with the normal daily tapes
> (and separate labels), a second with archive tapes and the additional line:
>
>   dump-selection ALL FULL
>
> I should rethink that approach.

You can let the normal job send the dumps to tape, and then "vault" a copy
back onto a "vtape" on disk (not your holding disk, but that's semantics).
That might suit your need. I've done vaulting from a tape onto a disk area
..... then moved the disk area to another node, and vaulted again to copy it
to a different flavor of tape drive. So the very first part of my task might
suit you to a tee.

Deb Baddorf
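(For readers following along: a minimal sketch of what that two-storage,
vault-to-vtape idea could look like in amanda.conf. This is untested; the
storage and changer names, device paths, and vtape directory are made-up
assumptions, and the multi-storage syntax needs a recent Amanda, 3.4 or
later:

  # hypothetical two-storage setup: daily runs to real tape, vault copies on disk
  define changer daily-changer {
      tpchanger "chg-robot:/dev/sg1"       # the physical tape changer
  }
  define changer vault-vtapes {
      tpchanger "chg-disk:/space/vtapes"   # vtapes on disk, not the holding disk
  }

  define storage daily {
      tpchanger "daily-changer"
      tapepool  "daily"
  }
  define storage vault {
      tpchanger "vault-vtapes"
      tapepool  "archive"
      dump-selection ALL FULL              # only full dumps go to the vault
  }

  storage "daily"                          # amdump writes here ...
  vault-storage "vault"                    # ... and vaulting copies to here

With only one physical drive, this sidesteps the tape-to-tape copy problem:
the dumps land on the real tape first, and a subsequent amvault run
duplicates them onto the disk-based vtapes. See the amvault man page for the
destination options of your version.)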
Re: Dumping and taping in parallel / yearly archive runs, WORM
On 10.01.19 at 20:41, Debra S Baddorf wrote:
>
>> On Jan 10, 2019, at 10:32 AM, Stefan G. Weichinger wrote:
>>
>> On 30.11.18 at 19:40, Debra S Baddorf wrote:
>>
>>> IFF you have the holding disk space, you might want these params:
>>>
>>>   flush-threshold-dumped 300     # (or perhaps only 150)  ## I have NOT tested this one
>>>   flush-threshold-scheduled 100  ### However, all dumps will be flushed;
>>>                                  ### none will be left on the holding disk.
>>>   taperflush 0
>>>   autoflush yes
>>
>> I am still trying to come up with a set of parameters to keep the latest
>> lev0 backups in the holding disk *in addition* to having them on tape.
>>
>> Trying to accumulate a lev0 of every DLE on disk, to then switch over to
>> the archive tapes and flush to them.
>>
>> As far as I understand, that would mean increasing taperflush and setting
>> autoflush to NO, right?
>>
>> (only flush intentionally ...)
>
> As I understand it, "autoflush" only applies to files left over from
> previous runs.
> Taperflush might do what you want, if you set it to 1000??? But I'm
> doubtful.
> But either one means writing to tape AND removing from disk.
>
> Other readers: is there a newer means to leave dumps on disk AND tape them?
> I don't think the above parameters are able to do that at all.
> But I've heard snippets that make me think other params in the latest
> version can do it.
>
> Stefan - we need input from others who have done this (leaving dumps on
> disk AND on tape). If nobody answers, maybe start a new "thread" with this
> actual question in the subject line.

Sure.

I think that "vaulting" with amvault might help here ... although I still
struggle with getting the picture.

Especially as I only have one tape drive, I can't copy from one tape to
another.

Back then I tried to define two storages, one with the normal daily tapes
(and separate labels), a second with archive tapes and the additional line:

  dump-selection ALL FULL

I should rethink that approach.
Re: Dumping and taping in parallel / yearly archive runs, WORM
> On Jan 10, 2019, at 10:32 AM, Stefan G. Weichinger wrote:
>
> On 30.11.18 at 19:40, Debra S Baddorf wrote:
>
>> IFF you have the holding disk space, you might want these params:
>>
>>   flush-threshold-dumped 300     # (or perhaps only 150)  ## I have NOT tested this one
>>   flush-threshold-scheduled 100  ### However, all dumps will be flushed;
>>                                  ### none will be left on the holding disk.
>>   taperflush 0
>>   autoflush yes
>
> I am still trying to come up with a set of parameters to keep the latest
> lev0 backups in the holding disk *in addition* to having them on tape.
>
> Trying to accumulate a lev0 of every DLE on disk, to then switch over to
> the archive tapes and flush to them.
>
> As far as I understand, that would mean increasing taperflush and setting
> autoflush to NO, right?
>
> (only flush intentionally ...)

As I understand it, "autoflush" only applies to files left over from
previous runs.
Taperflush might do what you want, if you set it to 1000??? But I'm doubtful.
But either one means writing to tape AND removing from disk.

Other readers: is there a newer means to leave dumps on disk AND tape them?
I don't think the above parameters are able to do that at all.
But I've heard snippets that make me think other params in the latest
version can do it.

Stefan - we need input from others who have done this (leaving dumps on disk
AND on tape). If nobody answers, maybe start a new "thread" with this actual
question in the subject line.

Deb Baddorf
Fermilab
Re: Dumping and taping in parallel / yearly archive runs, WORM
On 30.11.18 at 19:40, Debra S Baddorf wrote:

> IFF you have the holding disk space, you might want these params:
>
>   flush-threshold-dumped 300     # (or perhaps only 150)  ## I have NOT tested this one
>   flush-threshold-scheduled 100  ### However, all dumps will be flushed;
>                                  ### none will be left on the holding disk.
>   taperflush 0
>   autoflush yes

I am still trying to come up with a set of parameters to keep the latest
lev0 backups in the holding disk *in addition* to having them on tape.

Trying to accumulate a lev0 of every DLE on disk, to then switch over to the
archive tapes and flush to them.

As far as I understand, that would mean increasing taperflush and setting
autoflush to NO, right?

(only flush intentionally ...)
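(A sketch of what "only flush intentionally" might look like in the daily
amanda.conf. The values are guesses, and the interplay of taperflush and
autoflush is exactly what is in question in this thread, so treat it as
untested:

  autoflush no                   # don't sweep held dumps onto daily tapes
  flush-threshold-dumped 500     # taper won't start until dumps on the
  flush-threshold-scheduled 500  # holding disk reach 5 tape-lengths,
                                 # which a single run never does

Dumps would then pile up on the holding disk across runs, and writing them
out becomes an explicit, manual amflush once every DLE has a fresh lev0
there.)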
Re: Dumping and taping in parallel / yearly archive runs, WORM
On 17.12.18 at 10:18, Stefan G. Weichinger wrote:

> I switch over configs (and in consequence tapes) this week: instead of
> running the normal "daily" config, I now run "archive" to a separate set
> of tapes ... same disklist, other amanda.conf
>
> The plan is to let that config "balance" somehow, collect lev0 backups on
> the holding disk and then flush them to the WORM tapes.

Back to the normal "daily" config: I now want to collect the lev0s there.
Will adjust these thresholds somehow, etc.

What about WORM tapes - are they non-reusable after labelling them? ;-)
Re: Dumping and taping in parallel / yearly archive runs, WORM
On 30.11.18 at 19:40, Debra S Baddorf wrote:

> Well, from comp sci courses 20 years ago, the best algorithm to fill the
> tapes (or the "Bag" in CS class) is the Greedy Method. Which is also the
> most obvious one.
>
>   taperalgo largestfit
>
> I.e. pick the biggest thing that will still fit in the bag. Repeat.
> (Physical example: put the large stones in the jar first. Then medium,
> then small pebbles. Then sand, then water. Used in life-strategy classes:
> determine what matters most to you, do it first.)
>
> To do this, you need to have a large choice of DLEs dumped into your
> holding disk before amanda starts putting them to tape. So it can find the
> most optimal "biggest, then next biggest, then ....."
>
> So you want to have lots of DLEs on disk before taping starts.
> IFF you have the holding disk space, you might want these params:
>
>   flush-threshold-dumped 300     # (or perhaps only 150)  ## I have NOT tested this one
>   flush-threshold-scheduled 100  ### However, all dumps will be flushed;
>                                  ### none will be left on the holding disk.
>   taperflush 0
>   autoflush yes
>
>   taperalgo largestfit
>
>   dumporder "SsSsSs"   # so that it accumulates a bunch of small ones, as
>                        # well as big ones, for the Greedy Algorithm to
>                        # choose from
>
> If the archives are going offsite, and are not available for daily file
> recoveries, you may want to set "record no" so that normal dailies don't
> base their level 1's off THIS level 0. (I do this for my archive runs.)
>
> And of course, you prefix the run with
>
>   amadmin config force *
>
> (I seem to need "amadmin config force *.restof.siteAddress". Try and see.)
>
> I do these once a month, but have never worried about the tape filling
> very much, beyond always using "largestfit" and having some DLEs on disk
> before taping starts. If you are using a lot of tapes (you are), it might
> be worth fiddling with.

I switch over configs (and in consequence tapes) this week: instead of
running the normal "daily" config, I now run "archive" to a separate set of
tapes ... same disklist, other amanda.conf

The plan is to let that config "balance" somehow, collect lev0 backups on
the holding disk and then flush them to the WORM tapes.
Re: Dumping and taping in parallel / yearly archive runs, WORM
On 30.11.18 at 19:40, Debra S Baddorf wrote:

> Well, from comp sci courses 20 years ago, the best algorithm to fill the
> tapes (or the "Bag" in CS class) is the Greedy Method. [..]

A quick thanks for now ... quite busy here, I will report back when I find
the time to try that.

Stefan
Re: Dumping and taping in parallel / yearly archive runs, WORM
> On Nov 30, 2018, at 3:45 AM, Stefan G. Weichinger wrote:
>
> On 28.11.18 at 20:24, Debra S Baddorf wrote:
>
>> Not sure if these paragraphs are still in the example config files or
>> not; I've hung onto them because they were so useful. In case they help
>> you:
>
> [..]
>
> thanks for sharing that, this somehow corresponds to a task I am starting
> over the weekend:
>
> at a customer we back up around 16 TB to LTO6 tapes in an 8-slot changer.
> A bit of balancing is needed in the normal daily runs ... I let amanda
> skip some bigger chunks on days 1-4 and do these on weekends, for example.
>
> The quality management there requests that we/I create (a set of) archive
> tapes at the end of the year, and these should be WORM tapes.
>
> So the goal is to get lev0 backups of all DLEs into one single run.
>
> I fiddled with amvault last year or so, then edited the mentioned
> threshold parameters to collect lev0 backups in the holding disk to
> prepare that one big amflush to WORM tapes.
>
> Not to mention that the problem is to not waste WORM tapes ...
>
> Today I reattack all this: I add 10 RW tapes to my config "archive", share
> the disklist and let amanda do backups with that for a while to prepare
> things.
>
> Suggestions and ideas welcome.

Well, from comp sci courses 20 years ago, the best algorithm to fill the
tapes (or the "Bag" in CS class) is the Greedy Method. Which is also the
most obvious one.

  taperalgo largestfit

I.e. pick the biggest thing that will still fit in the bag. Repeat.
(Physical example: put the large stones in the jar first. Then medium, then
small pebbles. Then sand, then water. Used in life-strategy classes:
determine what matters most to you, do it first.)

To do this, you need to have a large choice of DLEs dumped into your holding
disk before amanda starts putting them to tape. So it can find the most
optimal "biggest, then next biggest, then ....."

So you want to have lots of DLEs on disk before taping starts.
IFF you have the holding disk space, you might want these params:

  flush-threshold-dumped 300     # (or perhaps only 150)  ## I have NOT tested this one
  flush-threshold-scheduled 100  ### However, all dumps will be flushed;
                                 ### none will be left on the holding disk.
  taperflush 0
  autoflush yes

  taperalgo largestfit

  dumporder "SsSsSs"   # so that it accumulates a bunch of small ones, as
                       # well as big ones, for the Greedy Algorithm to
                       # choose from

If the archives are going offsite, and are not available for daily file
recoveries, you may want to set "record no" so that normal dailies don't
base their level 1's off THIS level 0. (I do this for my archive runs.)

And of course, you prefix the run with

  amadmin config force *

(I seem to need "amadmin config force *.restof.siteAddress". Try and see.)

I do these once a month, but have never worried about the tape filling very
much, beyond always using "largestfit" and having some DLEs on disk before
taping starts. If you are using a lot of tapes (you are), it might be worth
fiddling with.

Deb Baddorf
Fermilab
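(Put together, the archive run Deb describes might look like this from the
shell; a sketch only, with "archive" standing in for the config name and the
force pattern subject to the trial-and-error she mentions:

  # force a level 0 of every matching DLE on the next run
  amadmin archive force '*'    # quote the * so the shell doesn't expand it

  # run the dumps; with Deb's thresholds the taper writes them out in
  # largestfit order during the run
  amdump archive

  # with an accumulate-first variant (high thresholds), a final amflush
  # writes whatever is still held on the holding disk to tape
  amflush archive              # interactive; asks which datestamps to flush

The "record no" option is a dumptype setting and belongs in the archive
config only, so the daily config's incrementals stay based on its own level
0s.)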
Re: Dumping and taping in parallel / yearly archive runs, WORM
On 28.11.18 at 20:24, Debra S Baddorf wrote:

> Not sure if these paragraphs are still in the example config files or not;
> I've hung onto them because they were so useful. In case they help you:

[..]

thanks for sharing that, this somehow corresponds to a task I am starting
over the weekend:

at a customer we back up around 16 TB to LTO6 tapes in an 8-slot changer. A
bit of balancing is needed in the normal daily runs ... I let amanda skip
some bigger chunks on days 1-4 and do these on weekends, for example.

The quality management there requests that we/I create (a set of) archive
tapes at the end of the year, and these should be WORM tapes.

So the goal is to get lev0 backups of all DLEs into one single run.

I fiddled with amvault last year or so, then edited the mentioned threshold
parameters to collect lev0 backups in the holding disk to prepare that one
big amflush to WORM tapes.

Not to mention that the problem is to not waste WORM tapes ...

Today I reattack all this: I add 10 RW tapes to my config "archive", share
the disklist and let amanda do backups with that for a while to prepare
things.

Suggestions and ideas welcome.
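(For anyone replicating the two-config setup: a minimal sketch of how a
separate "archive" config can share the daily disklist. The paths, label
pattern, and dumptype are invented for illustration and untested:

  # /etc/amanda/archive/amanda.conf -- hypothetical excerpt
  org       "archive"
  diskfile  "/etc/amanda/daily/disklist"   # same DLEs as the daily config
  labelstr  "^ARCHIVE-[0-9]+$"             # its own tape label namespace
  dumpcycle 0                              # every run does level 0 only

  define dumptype archive-full {
      program "GNUTAR"
      compress client fast
      record no    # don't update the incremental-state databases, so the
                   # daily config's level 1s stay based on its own fulls
  }

Note that a shared disklist must reference dumptype names that exist in both
configs. Running the archive config against RW tapes first, as Stefan
describes, lets the schedule settle before committing the real WORM media.)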
