> AFS'ers,
>
> I have a 'small' problem :-( with the AFS backup on AFS 3.3a.
>
> Last night I ran a full backup on all three servers located here.
> Two of them finished successfully, the third failed with 200+ volumes still to
> go.
> How can I make sure that those volumes get dumped? In other words, how can I
> find out which volumes have and haven't been dumped, so that I can create a
> dump schedule for the ones that haven't?
>
> Or can I just restart the full backup (creating a new dump ID) so that the
> still-to-be-dumped volumes end up on tape?
>
> Or do I need to start an incremental dump ??
>
> The book is not really clear about this.
> Restarting the full now takes about 10 hours.
>
> TIA
>
> Fred
The procedure I use is pretty grisly. Thinking about it, you _may_ be able
to get results as good (in the data sense) by just adding an incremental.
I think, though, that this would eat a lot of tape, since backup is pretty
dumb about what actually needs to be dumped (it still leaves the 2 MB
inter-volume gap even when a volume has no data to dump).
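(For what it's worth, the incremental route is a one-liner; the volume set
and dump level names below are only placeholders for whatever your dump
hierarchy actually defines:

    backup dump -volumeset allvols -dump /full/incr1

where /full/incr1 would have to be an incremental level defined under the
full level you used last night.)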
In any case, I take the list of volumes that were slated to be dumped
(backup spits this out before it does anything), see which volume was the
last one actually dumped, and chop the list at that point (adding any
volumes that dumped with errors to the "remainder" list). I then massage
this into a form that "backup -f" will like (one line of the form
"addvole -n apr_part2 -s ".*" -p ".*" -v volume" per leftover volume). I
make a new volume set with "backup addvols" (here, called "apr_part2") and
then run "backup -f" with the massaged remainder file to make the dump set.
Then just dump.
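Roughly, the massage-and-dump part looks like the sketch below; the file
names, the "apr_part2" volume set name, and the "/full" dump level are just
placeholders, and chopping the slated list down to remainder.list is still
done by hand as described above:

    # remainder.list: one volume name per line -- the volumes that never
    # made it onto last night's tape (plus any that dumped with errors)

    # turn each name into an addvole (addvolentry) line for "backup -f"
    awk '{ printf "addvole -n apr_part2 -s \".*\" -p \".*\" -v %s\n", $1 }' \
        remainder.list > apr_part2.cmds

    # create the one-off volume set, load the entries, then dump it
    backup addvolset -name apr_part2
    backup -f apr_part2.cmds
    backup dump -volumeset apr_part2 -dump /full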
This whole procedure takes quite a bit of time, so it's not clear to me
whether it'd be worth it for a backup that only takes 10 hours to rerun.
The "addvole" stuff, in particular, takes forever.
Good luck!
Pat Wilson
[EMAIL PROTECTED]