>>> I've seen that some commercial backup systems with built-in
>>> deduplication (which often run under Linux :-) split files into
>>> 128k or 256k chunks prior to deduplication.
>>>
>>> That would nicely improve the deduplication ratio for big log
>>> files, mbox files, binary databases that are rarely updated, etc.
>>> Only the last chunk of a growing log file would create a new
>>> entry in the BackupPC pool.
>>>
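
For reference, the chunking scheme described above boils down to
something like the following (a rough Python sketch, assuming fixed
128k chunks and a SHA-1-keyed pool directory; store_chunks and the
flat pool layout are illustrative, not BackupPC's actual
implementation):

  import hashlib
  import os

  CHUNK_SIZE = 128 * 1024  # 128k, as in the systems described above

  def store_chunks(path, pool_dir):
      """Split one file into fixed-size chunks and store each chunk
      in pool_dir keyed by its SHA-1 digest. A chunk whose digest is
      already pooled is skipped, so appending to a log file adds only
      the new tail chunk(s) rather than a whole new copy."""
      os.makedirs(pool_dir, exist_ok=True)
      manifest = []  # ordered digests; enough to reassemble the file
      with open(path, "rb") as f:
          while True:
              chunk = f.read(CHUNK_SIZE)
              if not chunk:
                  break
              digest = hashlib.sha1(chunk).hexdigest()
              pool_path = os.path.join(pool_dir, digest)
              if not os.path.exists(pool_path):  # only new data hits disk
                  with open(pool_path, "wb") as out:
                      out.write(chunk)
              manifest.append(digest)
      return manifest

Note that even this naive version has to read and hash every byte of
every file on every backup, whether or not anything changed, which is
where the extra backup-time cost comes from.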

How long are you willing to have your backups and restores take? The
more processing you do on the backed-up files, the bigger the hit
you'll take. In the long run, I'm not sure it's worth the additional
cost of impacting recovery time objectives and putting extra strain
on the environment during backups.

--
Michael Barrow
michael at michaelbarrow dot name




