Dana Bourgeois
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Byarlay, Wayne A.
Sent: Monday, November 17, 2003 7:54 AM
To: [EMAIL PROTECTED]
Subject: data write: File too large
Hi all,
I checked the archives on this problem
Thanks for the responses...
Upgrading at this time would be GREAT. Unfortunately this is not going
to happen anytime soon. I have another hard drive with RH9 and 2.4.4 on
it, but it's only half-configured, and it took a while to get to that
point.
As per the current, iridium-dust, ancient setup,
Never Mind,
I was under the impression that, for some reason, the holdingdisk {}
section of amanda.conf could not exist in my version. But I put it
there, with an appropriate chunksize, and... amcheck did not complain
at all. In fact, it said 4096 size requested, that's plenty.
If you don't hear from me again, it worked. I wasn't sure this one was
going to AMANDA... but if so, how do I adjust my chunksize?
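For anyone else stuck on an older config: a holdingdisk block of the sort the newer example.conf ships with looks roughly like the sketch below. The directory path and comment are placeholders, not from this thread; the only real constraint is that chunksize stays well under 2 GB (1536 mbytes is the value quoted later in this thread).

```
holdingdisk hd1 {
    comment "main holding disk"        # placeholder
    directory "/dumps/amanda"          # placeholder path
    use 4096 Mb                        # the space amcheck reported as requested
    chunksize 1536 mbytes              # keeps each chunk file well under 2 GB
}
```

With that in place, amcheck should confirm the space, and dump images on the holding disk get split into chunk files of at most that size.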
Here's from my error log:
/-- xxx/services lev 0 FAILED [data write: File too large]
sendbackup: start [xxx:/services level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
\
The example.conf for the latest version has a section like:
holdingdisk
Byarlay, Wayne A. wrote:
Hi all,
I checked the archives on this problem... but they all suggested to
adjust the chunksize of my holdingdisk section in my amanda.conf.
However, I have ver. 2.4.1, and there's no holdingdisk section IN my
amanda.conf! Is the chunksize the problem? I've got
Pedro Aguayo wrote:
Ok, I didn't understand it before, but I think I do now.
Basically, when Amanda writes to the holding disk, it writes the dump to
a flat file on the file system, and if that flat file is larger than 2 GB
you might encounter a problem if your filesystem can only support files
up to 2 GB.
This makes sense, because when I ran my initial Amanda dump on that host
I had no holding disk defined, and it did back up the filesystem at level 0,
and that filesystem has over 24 GB of data on it, albeit all small
.c files and such. I am left wondering, then, how chunksize fits into
On Mon, Jan 14, 2002 at 10:53:29AM -0500, Pedro Aguayo wrote:
Could be that your holding disk space is too small, or you're trying to
back up a file that is larger than 2 gigs?
Perhaps I misunderstand something here, but...
The holding disk AFAIK holds the entire dump of the filesystem you try
and
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Adrian Reyer
Sent: Tuesday, January 15, 2002 4:03 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: [data write: File too large]
On Mon, Jan 14, 2002 at 10:53:29AM -0500, Pedro Aguayo wrote:
Could be that your holding disk
On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
But, I think Edwin doesn't have this problem, meaning he says he doesn't
have a file larger than 2gb.
I had none, either, but the filesystem was dumped into a file as a
whole, leading to a huge file, same with tar. The problem only
Ahh! I see said the blind man.
Pedro
-Original Message-
From: Adrian Reyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: [data write: File too large]
On 15-Jan-2002 Adrian Reyer wrote:
No holding-disk - no big file - no problem. (well, tape might have
to stop more often because of interruption in data-flow)
Why not define a chunksize of 500 Mb on your holdingdisk? That's what
I did. Backups go faster and there's less wear and tear on
To: Pedro Aguayo
Subject: Re: [data write: File too large]
I don't understand the problem. When amanda encounters a filesystem larger
than the holding disk, she AUTOMATICALLY resorts to direct tape write.
Quoting from the amanda.conf file:
# If no holding disks are specified then all dumps will be written
On Tuesday 15 January 2002 09:56 am, Rivera, Edwin wrote:
is there a way, in the amanda.conf file, to specify *NOT* to use
the holding-disk for a particular filesystem?
for example, if i use amanda to backup 8 filesystems on one box
and i want 7 to use the holding-disk, but one not to.. is
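I believe the dumptype parameter holdingdisk covers this: it's a yes/no boolean in the 2.4 series (later releases spell it auto/never/required). A sketch, where the dumptype name, the "comp-root" parent, and the disklist entry are my own examples rather than anything from this thread:

```
# amanda.conf sketch: a dumptype that bypasses the holding disk.
define dumptype direct-to-tape {
    comp-root                       # inherit from an existing dumptype
    comment "dump straight to tape, no holding disk"
    holdingdisk no                  # 2.4.x boolean; newer releases use "never"
}
```

Then in the disklist, point the one filesystem at direct-to-tape and leave the other seven on their usual dumptype.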
hello again,
my aix 4.2.1 box running Amanda v2.4.2p2 is not backing up one of my
filesystems at level 0 anymore, it did it only once (the first amdump run).
here is the error:
aix421.us.lhsgroup.com /home4/bscs_fam lev 0 FAILED [data write: File too
large]
here is the entry in my amanda.conf file:
chunksize 1536 mbytes
can you suggest anything? thanks in advance.
2 GB limit on files, perhaps? I'm not sure about AIX 4.2.1's support for
files bigger than 2 GB; I quit AIX at version 3.2.5. It might be the
holding-disk file.
Regards,
Adrian
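One way to check Adrian's suspicion without writing 2 GB of real data: seek just past the 2^31-byte boundary and write a single byte. On a filesystem (or OS) without large file support, the write fails with "File too large" (EFBIG). This is a sketch; the HOLDING path is a placeholder you should point at the actual holding disk.

```shell
# Check whether a filesystem accepts files past the 2 GB boundary.
# HOLDING is a placeholder -- set it to the holding-disk directory to
# test the filesystem that Amanda actually writes to.
HOLDING="${HOLDING:-$(mktemp -d)}"
TESTFILE="$HOLDING/lfs-test"
if dd if=/dev/zero of="$TESTFILE" bs=1 count=1 seek=2147483648 2>/dev/null; then
    echo "large files OK"
else
    echo "no large file support (2 GB limit)"
fi
rm -f "$TESTFILE"
```

The file is sparse, so the check is nearly instant and uses almost no disk space.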
To: [EMAIL PROTECTED]
Subject: [data write: File too large]
You sure you don't have a huge file in that filesystem?
Pedro
-Original Message-
From: Rivera, Edwin [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 14, 2002 10:55 AM
To: Pedro Aguayo; Rivera, Edwin; [EMAIL PROTECTED]
Subject: RE: [data write: File too large]
my holding disk is 4GB
On Tue, 14 Aug 2001 at 9:05am, Katrinka Dall wrote
/-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]
sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
sendbackup
On Tue, Aug 14, 2001 at 09:05:25AM -0500, Katrinka Dall wrote:
FAILED AND STRANGE DUMP DETAILS:
/-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]
Does it fail after backing up 2 gigabytes?
It sounds like you don't have Large File Support (LFS).
sendbackup: start
-Original Message-
From: Katrinka Dall [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 14, 2001 6:05 AM
To: [EMAIL PROTECTED]
Subject: data write: File too large ???
Hello,
I must say that I'm completely stumped; having tried everything I can
possibly think of, I've decided to post
/-- xx.p /dev/sdb1 lev 0 FAILED [data write: File too large]
sendbackup: start [x.x.xxx.x.com:/dev/sdb1 level 0]
sendbackup: info BACKUP=/bin/tar
sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc |/bin/tar -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
\
Now, I know that this isn't an issue
On Tue, 14 Aug 2001 at 11:30am, Joshua Baker-LePain wrote
On 15 Mar 2001, Alexandre Oliva wrote:
and is:
chunksize 2gb
the right way to limit it?
Nope. Use 2000mb. That's a little bit less than 2Gb, so it won't
bump into the limit.
Confusing, but understood. I will try that, thanks.
Mark
--
http://www.mchang.org/
http://decss.zoy.org/
Nope. Use 2000mb. That's a little bit less than 2Gb, so it won't
bump into the limit.
Confusing, but understood. I will try that, thanks.
I think the amanda man page discusses this a bit, or else it's in the
default amanda.conf: the chunks need to stay a bit under the limit to
allow for header information on the dump file, etc.
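The arithmetic behind Oliva's 2000mb figure can be sketched like this. The 32 KB per-chunk header size is an assumption on my part (it matches Amanda's usual dump-image header block), but the exact overhead doesn't change the conclusion: a chunk of exactly 2 GB plus any header at all overflows a 2 GB file-size limit.

```python
# Why "chunksize 2000 mb" fits under a 2 GB file-size limit but
# "chunksize 2gb" does not. The 32 KB per-chunk header is an assumed
# figure, not something stated in this thread.
LIMIT = 2 ** 31                   # 2 GB: max file size without LFS
HEADER = 32 * 1024                # header Amanda writes at the start of each chunk

chunk_2gb = 2 * 1024 ** 3         # "chunksize 2gb"  -> exactly 2147483648 bytes
chunk_2000mb = 2000 * 1024 ** 2   # "chunksize 2000mb" -> 2097152000 bytes

print(chunk_2gb + HEADER <= LIMIT)      # False: header pushes past the limit
print(chunk_2000mb + HEADER <= LIMIT)   # True: ~48 MB of headroom left
```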
Just a quick sanity check ...
server is linux w/DLT7000 and 25g free on holding disk
brain (client) is solaris 8 with 15g partition (7 gig used) to back up for
the first time.
I get this error in the raw log:
FAIL dumper brain c1t1d0s0 0 ["data write: File too large"]
On Mar 15, 2001, "Mark L. Chang" [EMAIL PROTECTED] wrote:
Is this all because of the 2g file limit on basic kernels on Linux?
Yep.
and is:
chunksize 2gb
the right way to limit it?
Nope. Use 2000mb. That's a little bit less than 2Gb, so it won't
bump into the limit.
--
Alexandre Oliva
We just ran across this error, because we hit the 2GB filesize limit ...
However, the behavior after this error seems a bit odd. Amanda flushed the
other disk files on the holding disk to tape ok, but then also left them on
the holdingdisk. amverify confirms they're all on tape, but ls shows
By sheer (and extremely annoying :-) coincidence, one of my systems did
the same thing last night, but behaved "properly", i.e. all the images
smaller than 2 GBytes were flushed and removed from the holding disk.
So my best guess is this is a problem that's fixed with a more recent
version of Amanda.
We just ran across this error, because we hit the 2GB filesize limit on ext2
filesystems. After reading the man pages, we've cranked down the
holdingdisk chunksize to 2000MB, which should alleviate the problem in the
future.
However, the behavior after this error seems a bit odd. Amanda