On Tue, 14 Aug 2001 at 9:05am, Katrinka Dall wrote

> /-- xxxxxx.p /dev/sdb1 lev 0 FAILED ["data write: File too large"]
> sendbackup: start [xxxxx.xxxxx.xxx.xxxxx.com:/dev/sdb1 level 0]
> sendbackup: info BACKUP=/bin/tar
> sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
> sendbackup: info COMPRESS_SUFFIX=.gz
> sendbackup: info end
> \--------
>
>       Now, I know that this isn't an issue of not having enough space on the
> tape or holding disk; both are in excess of 35G.  Some of the things I
> have tried are upgrading tar on the server that is failing, upgrading
> the backup server from RedHat 6.2 to 7.1, and using every available
> version of Amanda.  Currently I am using Amanda-2.4.2p2.  The client
> that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
> machine.

It looks like you may be hitting Linux's 2GB filesize limitation.  What is
your chunksize set to?  Try setting it to something smaller than 2GB minus
32KB (to leave room for the chunk header).  There's no performance penalty,
so why not try 1GB?
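In case it helps, here's a rough sketch of what the holdingdisk block in
amanda.conf could look like with a 1GB chunksize -- the disk name, directory,
and "use" size below are just placeholders for your own setup:

    holdingdisk hd1 {
        directory "/dumps/amanda"   # path to your holding disk
        use 30 Gb                   # how much space Amanda may use there
        chunksize 1 Gb              # keep each chunk file well under 2GB
    }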

That may not be the issue, though, as amanda 2.4.2p2 on RedHat 7.1
"should" have large file support.  It seems to compile with the right
flags -- has anybody confirmed whether it works or not?

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
