On Thursday, September 23, 2010 08:49:40 am Chris Hoogendyk did opine:
> On 9/22/10 7:03 PM, Dustin J. Mitchell wrote:
> > > On Wed, Sep 22, 2010 at 10:07 AM, Jon LaBadie <[email protected]> wrote:
> > >> Any thoughts on how desirable you feel a separate copy of amanda
> > >> metadata would be, and on other approaches?
> >
> > This comes up often, and I've never found a solution I'm happy enough
> > with to make the "official" solution.
> >
> > Gene's approach is the obvious one, but has a few limitations:
> > - What do you do if you run out of space on that tape?
Largely precluded by setting the tapetype size enough smaller than the
tape's capacity that the tape can hold the extra data corpus; in my case,
IF I was using tapes, about 200 megs smaller. The corpus of data is about
185 megs, and would be reduced by lowering tapecycle vs dumpcycle (here 30
vs 5), so I can go back quite a few dumpcycle generations if I have to.
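The headroom arithmetic above can be sketched in a few lines of shell.
The capacity figure is a stand-in (the post never names the exact tape
size); the 200 meg reserve and ~185 meg corpus are from the post:

```shell
# Sketch: pick a tapetype "length" that leaves headroom for the metadata
# corpus, so a full run plus the metadata still fits on one tape.
TAPE_CAPACITY_MB=4096      # nominal tape capacity -- an assumed figure
METADATA_RESERVE_MB=200    # headroom for the ~185 meg metadata corpus

# the value you would put in the tapetype stanza's "length"
TAPETYPE_LENGTH_MB=$((TAPE_CAPACITY_MB - METADATA_RESERVE_MB))
echo "length ${TAPETYPE_LENGTH_MB} mbytes"
```

Amanda then plans each run against the reduced length, leaving the
reserve free for the appended metadata.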
For vtapes it's a non-issue as long as the drive itself has the needed
space:
[r...@coyote amanda]# df -h /amandatapes
Filesystem Size Used Avail Use% Mounted on
/dev/sdc1 917G 599G 272G 69% /amandatapes
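A cheap guard for the vtape case is to check that filesystem before a
run, along the lines of the df output above. A minimal sketch, where the
mount point and threshold are stand-ins (the setup above uses
/amandatapes):

```shell
# Sketch: refuse to start a vtape run if the holding filesystem is
# nearly full.  Mount point and threshold are assumptions.
VTAPE_MOUNT=/tmp          # stand-in for /amandatapes
MIN_FREE_KB=1024          # require at least 1 MB free (assumption)

# -P forces the portable one-line-per-filesystem format
free_kb=$(df -Pk "$VTAPE_MOUNT" | awk 'NR==2 {print $4}')
if [ "${free_kb:-0}" -lt "$MIN_FREE_KB" ]; then
    echo "low space on $VTAPE_MOUNT: ${free_kb}KB free" >&2
    exit 1
fi
echo "ok: ${free_kb}KB free on $VTAPE_MOUNT"
```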
Back when I started developing my scripts I was using a DDS2 changer, but
had less data, and I only recall overrunning a tape once in about 3 years
of using that small tape. The vtape solution is, compared to the DDS2,
about 1000% more dependable: those DDS2 tapes claim 10,000 cycles of
re-use, but I was lucky to average 10 before they became unreadable. The
vtape I used last night has now been recycled 28 times, so that drive is
now in its third year. I've had one drive failure, and smartd gave me
enough warning that I didn't lose a byte.
> > Start a new tape? How do you reflect the use of that new tape in the
> > catalog?
> >
I keep a 'tape' use count in a data file my script uses.
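One way to keep such a use count is a small label-and-count file the
script bumps each run. This is only a sketch; the file name, label, and
one-label-per-line format are assumptions, not the real script's layout:

```shell
# Sketch: track how many times each (v)tape has been used in a flat
# data file of "LABEL COUNT" lines.
COUNT_FILE=$(mktemp)          # stand-in for the script's data file
TAPE_LABEL=Dailys-12          # hypothetical vtape label

# read the current count for this label (empty if never seen)
old=$(awk -v t="$TAPE_LABEL" '$1 == t {print $2}' "$COUNT_FILE")
new=$(( ${old:-0} + 1 ))

# rewrite the file with the bumped count for this label
{ awk -v t="$TAPE_LABEL" '$1 != t' "$COUNT_FILE"
  echo "$TAPE_LABEL $new"; } > "$COUNT_FILE.new"
mv "$COUNT_FILE.new" "$COUNT_FILE"
echo "$TAPE_LABEL use count is now $new"
```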
> > - How does recovery from that metadata backup work? There's a
> > chicken-and-egg problem here, too - you'll need an Amanda config to
> > run any Amanda tools other than amrestore.
dd, tar, and gzip are part of a bare-metal install; the rest is using them
to re-establish a working amanda on the new install.
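The restore side of that needs nothing beyond tar and gzip. A
self-contained sketch, using a stand-in tarball and scratch root so it is
runnable anywhere; on a real recovery the tarball would hold /etc/amanda
and the catalog directories and be unpacked into /:

```shell
# Sketch: rebuild the amanda config/catalog on a fresh system using only
# tar and gzip.  Paths and the sample config line are stand-ins.
work=$(mktemp -d)

# build a stand-in "metadata" tarball, as the backup side would have
mkdir -p "$work/src/etc/amanda/Daily"
echo 'org "Daily"' > "$work/src/etc/amanda/Daily/amanda.conf"
tar -czf "$work/amanda-meta.tar.gz" -C "$work/src" etc

# on the rebuilt machine: unpack into the (stand-in) root
mkdir -p "$work/newroot"
tar -xzf "$work/amanda-meta.tar.gz" -C "$work/newroot"
ls "$work/newroot/etc/amanda/Daily/amanda.conf"
```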
> > Let's break down "metadata" into its component parts, too:
> > 1. configuration
> > 2. catalog (logdir, tapelist)
> > 3. indexes
> > 4. performance data (curinfo)
> > 5. debug logs
> >
> > Configuration (1) can be backed up like a normal DLE. The catalog (2)
> > should technically be recoverable from a scan of the tapes themselves,
> > although the tool to do this is still awaiting a happy hacker to bring
> > it to life. Indexes (3) are only required for amrecover, and if your
> > Amanda server is down, you likely want whole DLEs restored, so you
> > only need amfetchdump. Performance data (4) will automatically
> > regenerate itself over subsequent runs, so there's no need to back it
> > up. Similarly, debug logs (5) can get quite large, and generally need
> > not be backed up.
> >
> > So, to my mind, the only component that needs special handling is the
> > catalog, and we have a menu of ways to handle that:
> > - append a copy of the entire catalog to the last tape in each run
> > (hey, what is the "last tape" now that we have multiple simultaneous
> > tapers?)
> >
> > - append a copy of only the catalog for that run to the last tape in
> > each run
> >
> > - finally get around to writing 'amrecatalog'
> >
> > - rsync the entire catalog to another machine nightly
> >
> > I just stuck that last one in because it was my technique back when I
> > managed a fleet of Amanda servers. Each would simply rsync its config
> > and catalog to the other servers. Since they were all backing up to
> > shared storage (a SAN), I could do a restore / amfetchdump / recovery
> > of any dump on any server without trouble. It's a very
> > non-Amanda-centric solution, but it's *very* effective.
>
> The last one is something like what I do. I don't use rsync, because I
> want multiple backup copies going back over the last week. I have a
> cron job that launches in the morning and hangs waiting for the Amanda
> backups from the previous night to complete. When they seem to be
> complete, it proceeds with a backup to a local archive partition on
> another spindle that is normally mounted read-only. Then it tars that
> all up and scp's it to another server. So, if my drive fails and I need
> to recover, I have the Amanda stuff on another drive on the same
> computer. If the whole thing dies, I have it on another computer. It
> also gets backed up to tape from the other computer. So, several
> departments with Amanda backup servers backing up one another's Amanda
> configurations and catalogs.
>
> Just in case anyone is interested, I put the script at the end. It's not
> particularly parameterized for general use, but is pretty simple and
> easy to modify. Watch out for email line wrap.
>
>
Yours should be just as effective as mine, and it's about 1/4 the size of mine. ;-)
> # for some reason, Solaris 9 doesn't allow `mount -o remount,ro /mountpoint`
> # set it back to read only with umount and then mount with the -r option.
> # unfortunately, the umount could fail if someone has a process in that
> # directory. that won't hurt the backups, but could leave the partition
> # in rw mode.
> cd /;
> umount ${ARCH};
> mount -r ${ARCH};
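A stripped-down, runnable sketch of the wait-then-archive approach the
script above implements. All paths are local stand-ins (the real script
copies to a read-only archive partition and then scp's the tarball to
another server), and the bounded wait loop is an assumption:

```shell
# Sketch: wait for amdump to finish, snapshot the amanda metadata to an
# archive area, then tar it up for the copy to another server.
CONF=$(mktemp -d)                 # stand-in for /etc/amanda
echo "dumpcycle 5" > "$CONF/amanda.conf"
ARCH=$(mktemp -d)                 # stand-in for the archive partition

# 1. wait (bounded here, for the sketch) until no amdump is running
n=0
while pgrep -x amdump >/dev/null 2>&1 && [ "$n" -lt 360 ]; do
    sleep 60; n=$((n + 1))
done

# 2. copy the metadata onto the other spindle
snap="$ARCH/amanda-$(date +%Y%m%d)"
mkdir -p "$snap"
cp -a "$CONF/." "$snap/"

# 3. tar it up; the real script then scp's this to another server
tar -czf "$ARCH/amanda-meta.tar.gz" -C "$ARCH" "$(basename "$snap")"
ls -l "$ARCH/amanda-meta.tar.gz"
```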
I haven't tried the remounts to ro, mainly because the e2fsck mount-count
check would run out, requiring manual intervention several times a year.
This discussion has been a positive one, showing that there are indeed
several ways to approach this problem, all of which, when TSHTF, contain
the seeds of an expedited full recovery.
--
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
It's been a business doing pleasure with you.