On Wed, Apr 09, 2008 at 04:32:13PM +0200, Ludovic Drolez wrote:
> I've seen in some commercial backup systems with included
> deduplication (which often run under Linux :-) that files are split into
> 128k or 256k chunks prior to deduplication.
>
> It's nice for improving the deduplication ratio of big log files, mbox
> files, binary DBs not often updated, etc. Only the last chunk of a log
> file would create a new entry in the BackupPC pool.
>
> Does anybody have ideas about how this feature could be added without
> major changes to BackupPC? (Replacing hard links with text files
> containing the list of chunks?)
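Just to make the idea concrete, a rough sketch of that kind of fixed-size
chunking into a content-addressed pool could look like the following. This
is plain Python for illustration only; the 128k chunk size, SHA-1 digests,
pool directory layout and manifest format are assumptions of the sketch,
not anything BackupPC actually does today.

    #!/usr/bin/env python3
    # Illustration only: fixed-size chunking into a content-addressed pool.
    # Chunk size, digest algorithm and pool layout are assumptions.
    import hashlib
    import os
    import sys

    CHUNK_SIZE = 128 * 1024        # assumed 128k chunks, as in the proposal
    POOL_DIR = "pool-chunks"       # hypothetical chunk pool directory

    def store_chunk(data: bytes) -> str:
        """Store one chunk in the pool (if not already there); return its digest."""
        digest = hashlib.sha1(data).hexdigest()
        # Fan out into subdirectories so no single directory gets huge.
        subdir = os.path.join(POOL_DIR, digest[:2])
        os.makedirs(subdir, exist_ok=True)
        path = os.path.join(subdir, digest)
        if not os.path.exists(path):           # already pooled -> deduplicated
            with open(path, "wb") as f:
                f.write(data)
        return digest

    def backup_file(src: str, manifest: str) -> None:
        """Split src into chunks, pool them, and write a manifest listing the chunks."""
        digests = []
        with open(src, "rb") as f:
            while True:
                data = f.read(CHUNK_SIZE)
                if not data:
                    break
                digests.append(store_chunk(data))
        # The "text file containing the list of chunks" from the proposal:
        with open(manifest, "w") as f:
            f.write("\n".join(digests) + "\n")

    if __name__ == "__main__":
        backup_file(sys.argv[1], sys.argv[2])

With something like that, appending to a big log file only adds the final,
changed chunk to the pool; the earlier chunks hash to the same digests and
are simply reused.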
Oh no, not even more small files in the file system! :-<

But the idea's not too bad. And it would need to be enabled per host
(possibly per file or per backed-up directory) anyway.

Tino.

-- 
"There is no way to peace. Peace is the way." (Mahatma Gandhi)

www.craniosacralzentrum.de
www.forteego.de
