On Tue, Dec 8, 2009 at 12:42 AM, Edward Ned Harvey <[email protected]> wrote:
> If you’re backing up some large data set, you’re probably not going across a
> slow network link. You’re probably going locally disk to disk (or disk to
> tape), and you probably want the rate of compression to keep pace with the
> hardware I/O.
>
> ... lots about various software methods to compress backup streams ....
One concern that I've always had with compressing backup streams is what happens if there is an unrecoverable block of data somewhere in the middle of the stream. Do you lose every bit of data after that bad block, or is there some ability to resynchronize the stream so you don't lose everything?

I remember looking into this at one time with various flavors of tape drive that did on-board compression, and I seem to recall that at least one particular standard handled this 'correctly'. Does your software/algorithm do anything to deal with this, or do you just get junk after the bad block?

This is probably more relevant for archive tapes than day-to-day backups, but I still think it is worth discussing when people suggest compressing backups.

Bill Bogstad

_______________________________________________
bblisa mailing list
[email protected]
http://www.bblisa.org/mailman/listinfo/bblisa
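[A sketch of the "resynchronize" idea raised above: compress the stream in independently-decompressible chunks, each with a length prefix, so a corrupt chunk costs you only that chunk rather than everything after it. The framing here (4-byte length prefix, zlib per chunk) is invented purely for illustration; it is not what any particular tape drive or backup format actually does, and note that corruption landing in a length prefix itself would still break resynchronization — real formats use sync markers or fixed-size blocks for that reason.]

```python
import zlib

def compress_chunks(data: bytes, chunk_size: int = 64 * 1024) -> bytes:
    """Compress each chunk independently so one bad block
    doesn't take out the rest of the stream."""
    frames = []
    for i in range(0, len(data), chunk_size):
        comp = zlib.compress(data[i:i + chunk_size])
        # Length prefix lets the reader step from frame to frame.
        frames.append(len(comp).to_bytes(4, "big") + comp)
    return b"".join(frames)

def decompress_chunks(blob: bytes) -> list:
    """Recover every chunk that still decompresses cleanly;
    a corrupt chunk becomes None but the stream continues."""
    recovered, pos = [], 0
    while pos + 4 <= len(blob):
        n = int.from_bytes(blob[pos:pos + 4], "big")
        frame = blob[pos + 4:pos + 4 + n]
        pos += 4 + n
        try:
            # zlib's Adler-32 check makes silent corruption raise an error.
            recovered.append(zlib.decompress(frame))
        except zlib.error:
            recovered.append(None)  # this chunk is lost, later ones are not
    return recovered
```

With a single monolithic `zlib.compress()` of the whole stream, the same one-byte error would make everything from that point on unrecoverable; here only the damaged chunk is sacrificed.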
