> > > > 3) I had figured that when restoring, amrestore has to read in a
> > > > complete dump/tar file before it can extract even a single file. So
> > > > if I have a single DLE that's ~2TB that fits (with multiple parts)
> > > > on a single tape, then to restore a single file, amrestore has to
> > > > read the whole tape. HOWEVER, I'm now testing restoring a single
> > > > file from a large 2.1TB DLE, and the file has been restored, but
> > > > the amrecover operation is still running, for quite some time after
> > > > restoring the file. Why might this be happening?
>
> Most (all?) current tape formats and drives can fast-forward looking
> for end-of-file marks. Amanda knows the position of the file on the
> tape and will have the drive go at high speed to that tape file.
>
> For formats like LTO, which have many tracks on the tape, I think it
> is even faster. I "think" a TOC records where (i.e. which track) each
> file starts. So it doesn't have to fast-forward and back 50 times to
> get to the "tenth" file, which is on the 51st track.
Jon, Olivier and Debra - thanks for reading my long post and replying.

OK, this makes sense about searching for EOF marks, from what I've read.
Seems like it's a good reason to use smaller DLEs.

> > > 3a) Where is the recovered dump file written to by amrecover? I
> > > can't see space being used for it on either server or client. Is it
> > > streaming and untar'ing in memory, only writing the desired files
> > > to disk?
>
> The tar file is not written to disk by amrecover. The desired files
> are extracted as the tar archive streams.

Thanks, that makes sense too from what I've seen (or not seen, actually -
i.e. no large temporary files).

> > > So assuming all the above is true, it'd be great if amdump could
> > > automatically break large DLEs into smaller DLEs, to end up with
> > > smaller dump files and faster restores of individual files. Maybe
> > > it would happen only for level 0 dumps, so that incremental dumps
> > > would still use the same sub-DLEs used by the most recent level 0
> > > dump.
>
> Sure, great idea. Then all you would need to configure is one DLE
> starting at "/". Amanda would break things up into sub-DLEs.
>
> Nope, sorry, amanda asks the backup-admin to do that part of the
> config. That's why you get the big bucks ;)

Good point! A bit of job security there. ;)

> > > Any thoughts on how I can approach this? If amanda can't do it, I
> > > thought I might try a script to create DLEs of a desired size based
> > > on disk usage, then run the script every time I wanted to do a new
> > > level 0 dump. That of course would mean telling amanda when I
> > > wanted to do level 0s, rather than amanda controlling it.
>
> Using a scheme like that, when it comes to recovering data, which DLE
> was that object in last summer? Remember that when you are asked to
> recover some data, you will probably be under time pressure, with
> clients and bosses looking over your shoulder.
> That's not the time you want to fumble around trying to determine
> which DLE the data is in.

Yes, I can see the complications. That makes me think of some things:

1) What do people do when they need to split a DLE? Just rely on
notes/memory of the old DLEs when restoring from older dumps, if needed?
Or just search, using something like in question 3) below?

2) What happens if you split or otherwise modify a DLE during a cycle
when the DLE would normally be getting an incremental dump? Will amanda
do a new level 0 dump for it?

3) Is there a tool for searching for a path or filename across all dump
indices? Or do I just grep through all the index files in
/etc/amanda/config-name/<index>/ ?

Thanks
-M
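The streaming extraction described in the answer to 3a can be illustrated with Python's tarfile module. This is only a sketch of the general technique, not amrecover's actual implementation; the `wanted` set and `stream` argument are illustrative names:

```python
import tarfile

def extract_selected(stream, wanted, dest="."):
    """Read a tar stream once, writing only the wanted members to dest."""
    # Mode "r|*" forces purely sequential (streaming) reads: no seeking,
    # so the source can be a pipe from the tape or holding disk.
    with tarfile.open(fileobj=stream, mode="r|*") as tar:
        for member in tar:              # headers are parsed as they arrive
            if member.name in wanted:
                tar.extract(member, path=dest)
            # Unwanted members are skipped by reading past their data;
            # the archive itself is never written to disk.
```

The key point is that only the selected files ever touch the filesystem, which matches the "no large temporary files" observation above.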
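The DLE-splitting script idea could start from something like this sketch: bin the immediate subdirectories of a mount point into groups whose total size stays under a target, first-fit style. The function names are mine, it measures apparent file size rather than `du`-style block usage, and its output would still have to be turned into disklist entries by hand - so treat it as a starting point, not Amanda syntax:

```python
import os

def dir_size(path):
    """Total apparent size of a directory tree, in bytes."""
    total = 0
    for root, dirs, files in os.walk(path, onerror=lambda e: None):
        for f in files:
            try:
                total += os.lstat(os.path.join(root, f)).st_size
            except OSError:
                pass  # skip files that vanish or can't be stat'ed
    return total

def plan_dles(mount, target_bytes):
    """Greedy first-fit: group subdirs of `mount` into DLE-sized bins."""
    sizes = sorted(
        ((dir_size(os.path.join(mount, d)), d)
         for d in os.listdir(mount)
         if os.path.isdir(os.path.join(mount, d))),
        reverse=True)
    bins = []  # each bin is [used_bytes, [subdir names]]
    for size, name in sizes:
        for b in bins:
            if b[0] + size <= target_bytes:
                b[0] += size
                b[1].append(name)
                break
        else:
            bins.append([size, [name]])
    return [names for _, names in bins]
```

Note the downside Jon raised still applies: the grouping changes whenever the sizes change, so you would need to record each run's output to know which DLE held a given directory last summer.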
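On question 3: `amadmin <config> find` locates dumps, not files inside them, so grepping the index tree is the usual approach. Amanda index files are gzip-compressed text with one pathname per line; here is a sketch that assumes the common indexdir/<host>/<disk>/<timestamp>_<level>.gz layout - check the indexdir setting in your amanda.conf, since the location varies by install:

```python
import gzip
import os

def search_indexes(indexdir, needle):
    """Yield (index_file, matching_line) for every index entry
    containing `needle` anywhere under indexdir."""
    for root, dirs, files in os.walk(indexdir):
        for name in files:
            if not name.endswith(".gz"):
                continue
            path = os.path.join(root, name)
            try:
                with gzip.open(path, "rt", errors="replace") as fh:
                    for line in fh:
                        if needle in line:
                            yield path, line.rstrip("\n")
            except OSError:
                pass  # unreadable or truncated index file
```

This is roughly equivalent to `find <indexdir> -name '*.gz' | xargs zgrep -H <pattern>`; the index file's path tells you the host, disk (DLE) and dump date of each hit.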
