> >> The first thing I'd try is cranking up maxdumps to 2 and see if that helps.
> >
> >Tried it for tonight's backups (10/04/2002). Wasn't really sure where
> >to put it so I put it in the global dumptype ...
>
> That should be OK. You can check it with this:
>
> amadmin <config> disklist arthur | egrep ' disk |maxdumps'
All the disks came up with maxdumps 2.
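For reference, this is roughly what the change looked like in amanda.conf (the dumptype name and comment here are illustrative; only the maxdumps line is the actual change):

```
# Hypothetical amanda.conf fragment: maxdumps set in the global dumptype,
# so every disk inherits it unless overridden in a more specific dumptype.
define dumptype global {
    comment "base dumptype inherited by all others"
    maxdumps 2
}
```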
>
> >[The amdump.1 file] Should be sitting waiting for you.
>
> Got it. As I thought, things are very single threaded because of the
> maxdumps issue. It also took a long time to do the estimates (three
> hours), which maxdumps should also help.
>
> I noticed it said all the disks are "new". Any idea why it would
> think that?
This is because the amdump file you are looking at was the first
one after moving from localhost to arthur - I finally took your advice.
>
> The tape write times are pretty respectable, around 5 MBytes/s. But the
> dump times are pretty low, a few hundred KBytes/s. Part of that is
> the compression. But part may also be using dump instead of GNU tar.
>
> Try this (as root):
>
> timex ufsdump 0f - /export/dbresearch | cat > /dev/null
The result was:
# timex ufsdump 0f - /export/dbresearch | cat > /dev/null
DUMP: Writing 32 Kilobyte records
DUMP: Date of this level 0 dump: Thu Apr 11 10:49:47 2002
DUMP: Date of last level 0 dump: the epoch
DUMP: Dumping /dev/md/rdsk/d5 (arthur:/export) to standard output.
DUMP: Mapping (Pass I) [regular files]
DUMP: Mapping (Pass II) [directories]
DUMP: Estimated 1823628 blocks (890.44MB).
DUMP: Dumping (Pass III) [directories]
DUMP: Dumping (Pass IV) [regular files]
DUMP: 90.03% done, finished in 0:01
DUMP: 1821630 blocks (889.47MB) on 1 volume at 1326 KB/sec
DUMP: DUMP IS DONE
real 14:01.43
user 16.45
sys 50.94
#
> timex gtar cf - /export/dbresearch | cat > /dev/null
We don't appear to have gtar so I used tar instead.
The times for this were (what do these times mean/represent?):
real 7:16.14
user 6.67
sys 31.95
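As I understand it, "real" is wall-clock time, while "user" and "sys" are CPU time spent in user code and in the kernel respectively; for backup throughput, "real" is the one that matters. A back-of-the-envelope comparison of the two runs above (889.47 MB each; the arithmetic is mine, not timex output):

```shell
# Rough throughput from the two timex runs: ufsdump took 14:01.43,
# tar took 7:16.14, both over the same ~889.47 MB of data.
awk 'BEGIN {
  ufsdump = 889.47 / (14*60 + 1.43)    # MB per second of wall-clock time
  tar     = 889.47 / (7*60 + 16.14)
  printf "ufsdump: %.2f MB/s, tar: %.2f MB/s\n", ufsdump, tar
}'
```

So tar moved the same data at roughly twice the rate of ufsdump here, which matches the suggestion that dump is part of the slowdown.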
But this method also threw up lots of errors like the following (I've
attached the complete readout as a text file):
tar: /export/dbresearch/irgroup/WebCluster/CLUSTERING_FRAMEWORK/WORKING_DIRECTORY_G/CF_ECLAIR_INTERFACE/Templates.DB/__0oLShallowList7l_dtv.state: symbolic link too long
tar: __0oLShallowList7P6dCfDocumentCollectionInterface_ctRC6LShallowList7P6dCfDocumentCollectionInterface_.o: filename is greater than 100
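Those errors come from the classic (pre-POSIX) tar header format that stock Solaris tar writes, which caps member names and symlink targets at 100 characters; GNU tar avoids the limit with its own extensions. A minimal sketch of spotting over-length paths (the two sample names are made up):

```shell
# Flag any path longer than the 100-character limit of classic tar headers.
# The sample paths below are illustrative, not real files.
printf '%s\n' \
  "short/name.o" \
  "$(printf 'a%.0s' $(seq 1 120))/file.o" |
awk 'length($0) > 100 { print "too long (" length($0) " chars): " $0 }'
```

Piping `find /export/dbresearch -print` through the same awk filter should list the real files that stock tar refuses.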
> Also, I've forgotten some of the history here. Is there a reason you're
> not using incrementals? That would significantly reduce the amount of
> data moved, which should lower the total time a lot.
We do use incrementals but everything in the amdump file you
have is a level 0 because I'd changed from localhost to arthur the
night before.
> However, I don't think you'll be able to do that with ufsdump and the
> subdirectories. That's where ufsdump will draw the line. You'll have
> to switch to GNU tar.
What do you mean?
>
> Note that you can mix and match. Your "real" file systems ("/", "/var",
> "/opt" and "/usr") can still use ufsdump and the /export subdirectories
> can use GNU tar.
This makes me worry about the restore method. At the moment we
use amrestore and pipe it through to ufsrestore. But there isn't
the same sort of interactive picking of which files to restore
when using tar. Unless I use amrecover, which I don't know how
to use, and I've seen a few messages on this list from people
having problems with amrecover. How do I get started with it?
Are there instructions somewhere?
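For what it's worth, amrecover does give interactive file picking regardless of whether the dump was made with ufsdump or GNU tar. A session looks roughly like this (the commands are from the amrecover documentation; the host, disk, and file names below are illustrative):

```
# Run as root on the client; <config> is your Amanda configuration name.
# amrecover <config>
amrecover> sethost arthur
amrecover> setdisk /export/dbresearch
amrecover> cd irgroup
amrecover> add somefile.txt
amrecover> extract
```

The amrecover(8) man page and the index server setup notes in the Amanda docs cover the server-side prerequisites (amindexd and amidxtaped entries).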
-----------------------------------------
David Flood
Systems Administrator
[EMAIL PROTECTED]
Tel: +44 (0)1224 262721
The Robert Gordon University
School of Computing
St. Andrews Street
Aberdeen
-----------------------------------------
---- File information -----------
File: tar.txt
Date: 11 Apr 2002, 11:24
Size: 11861 bytes.
Type: Text
