>This is because the amdump file you are looking at was the first
>one after moving from localhost to arthur ...
Ahh, good. I'm glad there's a simple explanation.
>The result was:
># timex ufsdump 0f - /export/dbresearch | cat > /dev/null
>...
>real 14:01.43
>user 16.45
>sys 50.94
>
>#
>> timex gtar cf - /export/dbresearch | cat > /dev/null
>
>We don't appear to have gtar so I used tar instead.
>
>The times for this was (What do these times mean/represent):
>
>real 7:16.14
>user 6.67
>sys 31.95
I used "gtar" as sort of a general name for GNU tar (people install
it with different names depending on what they want to do). Try "tar
--version". If it's GNU tar it will report what version it is. If it's
the standard Solaris one, you'll get an error.
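In script form, the check looks like this (just a sketch; on some
systems GNU tar is installed as "gtar" instead, so you may want to
check that name too):

```shell
# GNU tar understands --version; the stock Solaris tar rejects it.
if tar --version 2>/dev/null | grep -q 'GNU tar'; then
    echo "this is GNU tar"
else
    echo "probably the vendor tar"
fi
```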
Notice that tar ran twice as fast (7 minutes real time vs. 14 for
ufsdump). However, for a closer test of how Amanda would run it,
I should have had you do this:
# cp /dev/null /tmp/xxx
# timex gtar --create --file - --directory /export/dbresearch \
--one-file-system --listed-incremental /tmp/xxx \
--sparse --ignore-failed-read --totals . | cat > /dev/null
I don't think it will make a lot of difference to the timing, but it's
better to be accurate. The ufsdump test you ran is close enough to what
Amanda does.
>But this method also threw up lots of errors like the following ...
GNU tar might handle those better.
>> However, I don't think you'll be able to do that with ufsdump and the
>> subdirectories. That's where ufsdump will draw the line. You'll have
>> to switch to GNU tar.
>
>What do you mean?
Using ufsdump for a subdirectory instead of a whole file system uses a
completely different backup mechanism. When dumping a full file system,
ufsdump records the dump time in /etc/dumpdates, and that's what it
uses to determine how to do incrementals. When dumping a subdirectory,
ufsdump does not write anything into /etc/dumpdates and therefore does
not support incremental backups.
See this in the ufsdump man page:
OPERANDS
The following operand is supported:
files_to_dump Specifies the files to dump. ...
Incremental dumps (levels 1 to 9) of files changed
after a certain date only apply to a whole
file system. ... All files or directories are
dumped, which is equivalent to a level 0
dump ...
Bottom line -- using ufsdump for a subdirectory of a file system will
only give you full dumps, not incrementals.
>> Note that you can mix and match. ...
>
>This makes me worry about the restore method. At the moment we
>use amrestore and pipe it through to ufsrestore. ...
I agree that would be an issue. You would have to know which file
system restore program to use.
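The pipelines differ only in the program on the right-hand side.
Roughly (tape device, host and disk names here are made up, and the
exact flags depend on your setup):

```shell
# ufsdump image -> ufsrestore, interactive selection:
#   amrestore -p /dev/rmt/0bn arthur /export/dbresearch | ufsrestore -ivf -
# GNU tar image -> gtar; no interactive browser, but you can name
# the files you want on the command line:
#   amrestore -p /dev/rmt/0bn arthur /export/dbresearch | gtar -xpvf - ./some/file
```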
>But there is not
>the same sort of interactive picking which files to restore when
>using tar. Unless I use amrecover which I don't know how to use
>and have seen a few messages on this list with people having
>problems with amrecover. ...
All of the problems with amrecover are getting it set up in the first
place. Once things are working, I don't think anyone is having trouble
with the selection process. And the setup problems are usually minor,
although sometimes it takes a while to figure out how the configuration
has been goofed.
Also, realize that almost everything you see on the mailing list is a
problem :-). You don't see the hundreds (if not thousands) of users
happily churning away with Amanda.
>How do I get in to using it is there instructions some place?
"The book chapter" covers it (bottom of the www.amanda.org web page).
I also tried to make the man page explain it. If you need to know more
than those two, just ask.
>David Flood
John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]