Alexandre Oliva <[EMAIL PROTECTED]> writes:
> On Nov 16, 2000, John Goerzen <[EMAIL PROTECTED]> wrote:
>
> > The compressed representation of each block is delimited by a
> > 48-bit pattern, which makes it possible to find the block
> > boundaries with reasonable certainty. Each block also carries
> > its own 32-bit CRC, so damaged blocks can be distinguished from
> > undamaged ones.
>
> Ok, so you lose a block, bzip2 skips it and proceeds to decompress the
> succeeding block. I can believe GNU tar would be able to recover from
> the loss of the intermediate block, if you're lucky enough, but I'm
> not sure DUMP would. So beware.
Yes, I'm using GNU tar anyway, and I know from experience that it can
recover from this sort of problem! So there's a win vs. gzip.
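The 48-bit delimiter mentioned above is bzip2's block-header magic, the
constant 0x314159265359. As a rough illustration (not anything amanda or
bzip2 itself ships), here is a minimal Python sketch that scans a .bz2
stream for that pattern; note the magic is bit-aligned, not byte-aligned,
after the first block, so the scan has to work at the bit level:

```python
import bz2

BLOCK_MAGIC = 0x314159265359  # 48-bit bzip2 block-header magic

def find_block_offsets(data: bytes):
    """Return the bit offsets of every bzip2 block header in `data`.

    Subsequent blocks are bit-packed, so we scan bit by bit rather
    than byte by byte. A spurious match is possible in principle
    (probability ~2^-48 per position) but negligible in practice.
    """
    bits = int.from_bytes(data, "big")
    total_bits = len(data) * 8
    mask = (1 << 48) - 1
    offsets = []
    for pos in range(max(total_bits - 48 + 1, 0)):
        shift = total_bits - 48 - pos
        if (bits >> shift) & mask == BLOCK_MAGIC:
            offsets.append(pos)
    return offsets

blob = bz2.compress(b"hello amanda " * 100)
# The first block magic sits right after the 4-byte "BZh9" stream
# header, i.e. at bit offset 32.
print(find_block_offsets(blob))
```

A recovery tool would resynchronize at the next such offset after a bad
block and check the per-block CRC before trusting the data.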
>
> > Are there any plans to support this? I suppose I could just go into
> > the code with sed and s/gzip/bzip2/ but I'd prefer to do it more
> > elegantly if possible :-)
>
> Some people have tried bzip2 before. Invariably, they'd come back and
> agree it was indeed way too slow to be worth the extra compression of
> backups.
Well, I'm not using it for the space savings in this case. Before
amanda, I used my tape drive's hardware compression, but that doesn't
seem to play nicely with amanda.
I look at it from a data integrity point of view. The only way I'd
ever use compression for backups is if read errors will not corrupt
the remainder of the backup. This is the case with bzip2 but not with
gzip, alas. This is why some people use afio, which I believe
compresses each file individually, another approach to the same
problem.
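The per-file approach can be sketched in a few lines of Python. This is
only an illustration of the idea, not afio's actual format or interface;
the function names and the dict-based "archive" are hypothetical:

```python
import gzip
import zlib

def compress_files(files: dict) -> dict:
    """Compress each file independently (the afio idea): damage to
    one member cannot corrupt any other member."""
    return {name: gzip.compress(data) for name, data in files.items()}

def restore_files(archive: dict) -> dict:
    """Decompress every member we can, skipping damaged ones."""
    restored = {}
    for name, blob in archive.items():
        try:
            restored[name] = gzip.decompress(blob)
        except (OSError, EOFError, zlib.error):
            pass  # this member is damaged; the rest are unaffected
    return restored

archive = compress_files({"a.txt": b"alpha " * 50, "b.txt": b"beta " * 50})
archive["a.txt"] = archive["a.txt"][:5]  # simulate a read error
print(sorted(restore_files(archive)))  # only b.txt survives
```

With stream-wide gzip, by contrast, everything after the first bad byte
is lost, which is exactly the failure mode being avoided here.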
> Which is not to say I despise bzip2. On the contrary: I use it often,
> but for compressing files, not whole backups that I want to get
> finished before next day's backup starts :-)
Heh, you know -- my tape server is a P166! It is plenty fast to back
up the other machines (an alpha, a 600MHz PIII, etc.) but backing up
itself is very slow!
I did notice a decrease in speed -- just to be safe, I forced level 0
dumps on all machines after the switch. It seemed to gain about 10%
space over gzip, and it was noticeably slower. But that's OK -- that's
what the holding disk is for!
>
> --
> Alexandre Oliva Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
> Red Hat GCC Developer aoliva@{cygnus.com, redhat.com}
> CS PhD student at IC-Unicamp oliva@{lsd.ic.unicamp.br, gnu.org}
> Free Software Evangelist *Please* write to mailing lists, not to me
>
--
John Goerzen <[EMAIL PROTECTED]> www.complete.org
Sr. Software Developer, Progeny Linux Systems, Inc. www.progenylinux.com
#include <std_disclaimer.h> <[EMAIL PROTECTED]>