>This makes sense: when I ran my initial Amanda dump on that host, I had
>no holding disk defined, and it did back up the filesystem at level 0.
>That filesystem has over 24 GB of data on it, albeit mostly small .c
>files and the like.  I am left wondering, then, how chunksize fits into
>the equation.  It was my understanding that this is what chunksize was for.

If you didn't have a holding disk defined, Amanda went straight to tape.

When you do have a holding disk defined, Amanda will try (given enough
space, etc.) to write the whole dump image there first, and only then
write the image to tape.

Without chunksize, the holding disk image will be one monolithic file.
With chunksize, the image will be broken up into pieces when put into
the holding disk and then recombined when written to tape.  Chunksize was
put in to support systems with holding disks that only handled individual
files smaller than 2 GBytes.  You don't have that problem on AIX 4+.

Another possibility for the "File too large" error is your Amanda user
running into ulimit.  If you do this:

  su - <user> -c "ulimit -a"

does it have a "file(blocks)" limit?  If so, you can use smitty to
change that.
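As a quick sanity check, any shell can print its own file-size cap
directly (run it under the Amanda user via su, as above, to see that
account's limit):

```shell
# Print the file-size limit for the current shell.  "unlimited" means
# no cap; on AIX a numeric value is counted in 512-byte blocks.
ulimit -f
```

If the number, multiplied by 512, comes out near 2 GB, that would line
up nicely with the "File too large" error you saw.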

>-edwin 

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
