On Wed, 12 Mar 2003 at 3:54pm, wab wrote

> One filesystem I'm trying to back up with AMANDA is really huge and I'm
> encountering errors:
>
> This filesystem is so huge, a level 0 is taking longer than 24 hours.
> Any ideas on what could be going wrong? My best guesses:
>
> 1. The filesystem is just too big for TAR.
> 2. The filesystem is so big, its contents are changing during the tar
>    process and confusing it or amanda.
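First, a note on that "returned 2" further down: GNU tar exits 1 when it merely notices files changing underneath it, and 2 for a fatal error, so the exit status alone says this is more than an active-filesystem problem. A quick way to see the convention (a sketch, assuming GNU tar; the path is just a deliberately nonexistent placeholder):

```shell
# GNU tar exit codes: 0 = success, 1 = "some files differ"
# (e.g. "file changed as we read it"), 2 = fatal error.
# An unreadable/missing file is fatal, so tar exits 2 --
# the same status amanda logged for the failed level 0.
tar -cf /dev/null /no/such/path 2>/dev/null
echo "tar exit status: $?"   # with GNU tar: tar exit status: 2
```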
I back up several DLEs with tar that are rather large -- I think the
biggest one is nearly 80GB. That one takes about 3 hours (no
compression). Of course, that Linux server is rather fast.

> /-- <server> /usr lev 0 FAILED [/usr/local/bin/tar returned 2]
> sendbackup: start [<server>:/usr level 0]
> sendbackup: info BACKUP=/usr/local/bin/tar
> sendbackup: info RECOVER_CMD=/usr/local/bin/gzip -dc |/usr/local/bin/tar
> -f... -
> sendbackup: info COMPRESS_SUFFIX=.gz
> sendbackup: info end
> ? gtar: Read error at byte 53808128, reading 10240 bytes, in file
> ./archive/www/access.0203.gz: I/O error

An I/O error is bad. Look in your system logs for more info on that.

> ./opt/freeware/apache/share/htdocs/Library/easmenu.lbi.LCK: No such file
> or directory

The rest of the errors, yes, have to do with tarring an active filesystem.

> Any ideas as to what might be causing this?

Look into that I/O error. Also, is this Solaris? For whatever reason,
tar seems rather slow on Solaris (at least a lot of questions on this
list seem to point that way). If /usr is its own filesystem, could you
try dump?

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
