* John E Hein <[EMAIL PROTECTED]> (Tue, Sep 18, 2001 at 09:32:57AM -0600)
>> Thank you very much! We implemented the sendsize file you sent us and it has
>> cut our Estimate Time in half.
> Hmmm... it also seems to not take compression into account (we use gnu
> tar with client side compression). The estimated size is much larger
> than the actual size. amstatus gives:
No, it doesn't take compression into account, and neither does ufsdump or
gnutar .. at least the way I have it set up.
Amanda keeps track of the compression ratios and uses those to get a
correct(er) estimate from the estimate returned by the estimate program.
e.g.:
hyperion:/volume/amanda/reports/mammoth1/curinfo/hyperion/sdb2 # less info
version: 0
command: 0
full-rate: 648.000000 53.000000
full-comp: 0.496944 0.004881
incr-rate: 992.000000 992.000000 992.000000
incr-comp: 0.987065 0.987065 0.987065
stats: 0 984320 489152 754 1000836653 11 MAM0125
last_level: 0 1
//
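To illustrate the idea (this is a sketch, not Amanda's actual code): the estimate program reports raw, uncompressed sizes, and Amanda scales them by the tracked compression ratio. The function name below is hypothetical; the numbers are taken from the curinfo file above, where full-comp is about 0.497 and the stats line records a raw size of 984320 KB against roughly 489152 KB on tape.

```python
# Sketch of compression-corrected sizing, assuming comp_ratio is the
# historical (compressed size / raw size) ratio Amanda tracks per DLE.

def adjusted_estimate(raw_kbytes: float, comp_ratio: float) -> float:
    """Scale the estimate program's raw size by the observed ratio."""
    return raw_kbytes * comp_ratio

# Using the full-comp value and raw size from the curinfo file above:
print(round(adjusted_estimate(984320, 0.496944)))  # ~489152 KB
```

Note how the scaled figure matches the compressed size recorded on the stats line, which is what makes the corrected estimate "correct(er)".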
Are you by any chance running some form of tar where the tar command itself
does the compression ?
(like tar -z or tar -j )
> So for our 12 GB DDS-3 drive, we're getting a lot of 'full dump delayed'
> messages because of this issue.
If you know this, you can specify the tapesize to be 24G (instead of 12G) and
you should get the dumps you think you were getting.
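Assuming roughly 2:1 software compression, that would look something like the following amanda.conf fragment (the tapetype name and the filemark/speed values here are examples, not from this thread; only the doubled length is the point):

```
# Hypothetical tapetype for a 12 GB DDS-3 drive with ~2:1
# client-side compression: tell Amanda the effective capacity.
define tapetype DDS3-SOFTCOMP {
    comment "DDS-3, length doubled for ~2:1 software compression"
    length 24576 mbytes      # 2 x 12 GB native capacity
    filemark 1000 kbytes
    speed 1500 kbytes
}
```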
>
> Is there any way to get generic calcsize to take compression into account?
> Or is it that something else is going wrong?
>
Currently listening to: 06 - paranoid eyes
Gerhard, <@jasongeo.com> == The Acoustic Motorbiker ==
--
__O If your watch is wound, wound to run, it will
=`\<, If your time is due, due to come, it will
(=)/(=) Living this life, is like trying to learn latin
                 in a chinese firedrill