We have addressed the problem of the failed AFS backup my co-worker Fred
reported yesterday in the following way:
Scenario for restarting a full backup after it crashed:
Volume ids are sorted in descending order before being dumped to tape,
so in order to pick up where the AFS backup left off:
The place where the backup failed can be found in /usr/afs/backup/TL...
In this file, find the last volume that went OK and the first one that
the backup didn't like.
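A quick way to find that point (a sketch: the TL log file name is
truncated above and its exact contents vary per AFS version, so treat
this as an assumption about the layout of /usr/afs/backup):
ls -t /usr/afs/backup/TL* | head -1 # newest backup log file
tail -50 `ls -t /usr/afs/backup/TL* | head -1` # last lines name the last
                                               # volume dumped and the error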
vos listvol afs03 | grep backup > bck_vols # get a list of backup vols on the server
cp bck_vols bck_vols.1 # save the data
cat bck_vols | cut -c1-44 > bck_vols.2 # strip off unneeded stuff, leaving
                                       # volume name and id
sort -b -r +1n bck_vols.2 > bck_vols.3 # sort numerically on the id, descending
wc -l bck_vols.3 # check number of vols in list
cp bck_vols.3 bck_vols.4 # save work
vi bck_vols.4 # and delete everything above the volume id that failed
cat bck_vols.4 | cut -f1 -d" " > bck_vols.5 # strip off the volume id, keep the name
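For reference, the whole list can also be built in one pipeline; here
FAILED_ID is a hypothetical placeholder for the volume id found in the
TL log, and the awk filter replaces the manual vi edit (a sketch, so
check the result against the intermediate bck_vols.* files above):
FAILED_ID=536871092                 # hypothetical example, use the real id
vos listvol afs03 | grep backup |   # backup volumes on the server
  cut -c1-44 |                      # keep volume name and id
  sort -b -r +1n |                  # sort on id, descending
  awk -v id=$FAILED_ID '$2 <= id' | # keep the failed volume and all below it
  cut -f1 -d" " > bck_vols.5        # keep the volume name only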
Create a volume set to contain the volumes to be dumped:
backup addvolset afs03_emergency
Now prepend
backup addvolentry -name afs03_emergency afs03 -partition ".*" -volume
to the beginning of each line in bck_vols.5 and save the output as
add_volentry.
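One way to do that prepend is with sed (a sketch; an editor macro works
just as well):
sed 's|^|backup addvolentry -name afs03_emergency afs03 -partition ".*" -volume |' \
    bck_vols.5 > add_volentry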
Execute add_volentry; this takes about 60 minutes for 200 entries.
DON'T RUN ANY backup listvols WHILE THE VOLENTRIES ARE BEING ADDED!
THIS CAUSES LOCKING AND SKIPPED ENTRIES.
Check the output of backup listvols against the bck_vols.* files:
do a wc -l on the files, compare the counts with /usr/afs/backup/TL...,
and take appropriate action on any differences.
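A sketch of that cross-check (backup listvolsets is the full name of
the listvols command; the grep pattern assumes each volume entry prints
on a line containing "Entry"):
backup listvolsets afs03_emergency > volset.out
grep -c Entry volset.out # number of volume entries in the set
wc -l bck_vols.5         # number of volumes that should be in it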
The dump can now be done with
backup> dump afs03_emergency /wednesday -port 5
done (after some sweating)
(Please send your reply to my official e-mail address; just hitting
"reply" might not arrive (yet).)
Met vriendelijke groeten/ Best regards
Joop Verdoes RA/14
Koninklijke/Shell Exploratie en
Productie Laboratorium
P.O. Box 60
2280 AB Rijswijk
The Netherlands
[EMAIL PROTECTED]
ext +31 (0)70 - 3112854
fax +31 (0)70 - 3113110