Hello Katrina,

Have you looked at the chunksize setting in your holding disk config?  I
believe that Linux (on older kernels, without large-file support) has a
2GB limit on file sizes, which may be causing your problem.  Try setting
chunksize to just below 2GB (e.g. 1999 MB) and see if that helps.
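For example, the holdingdisk block in amanda.conf might look something like
this (the name, directory path, and "use" size here are just placeholders,
not taken from your setup):

holdingdisk hd1 {
    directory "/var/amanda/holding"   # adjust to your actual holding disk
    use 35 Gb
    chunksize 1999 Mb                 # keep each chunk file under the 2GB limit
}

With chunksize set, Amanda splits each dump image on the holding disk into
files no larger than that size, so no single file ever hits the 2GB cap.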

Hope this was helpful!

Anthony Valentine


-----Original Message-----
From: Katrinka Dall [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 14, 2001 6:05 AM
To: [EMAIL PROTECTED]
Subject: "data write: File too large" ???


Hello,


        I must say that I'm completely stumped.  I've tried everything I
can possibly think of, so I've decided to post this here in hopes that one
of you can help me out.  Recently I had to migrate our backup server from
a Solaris 2.5.1 machine to a RedHat Linux 6.2 machine.  In the process of
doing this, I found that I was unable to get one of the Linux machines we
had been backing up to work properly.  I keep getting this error whenever
I try to do a dump on this machine/filesystem:


 FAILED AND STRANGE DUMP DETAILS:
 
/-- xxxxxx.p /dev/sdb1 lev 0 FAILED ["data write: File too large"]
sendbackup: start [xxxxx.xxxxx.xxx.xxxxx.com:/dev/sdb1 level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
\--------     

        Now, I know that this isn't an issue of not having enough space on
the tape or the holding disk; both are in excess of 35G.  Some of the
things I have tried are upgrading tar on the client that is failing,
upgrading the backup server from RedHat 6.2 to 7.1, and using every
available version of Amanda.  Currently I am using Amanda-2.4.2p2.  The
client that I'm having these problems on is a RedHat 5.1 (kernel 2.0.36)
machine.

If you have any suggestions, I'd greatly appreciate them.  Oh, and one
more thing: when we were running these backups from the Solaris machine,
they did not produce this error, and I had about a quarter of the space
on tape and holding disk that I have now.

Thanks in advance,

Katrinka
