On Fri, Nov 1, 2013 at 1:46 PM,  <backu...@kosowsky.org> wrote:
>
> This is probably not his *primary* issue since the pool is (only)
> ~3T. But when he started talking about file read errors, I was
> concerned that if the pool file reads were being truncated, then there
> would likely be pool duplicates, since the byte-by-byte comparisons
> would fail for a given partial-file md5sum, leading to extra chain creation...
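
Roughly the mechanism you're describing, sketched in Python just for
illustration (BackupPC itself does this in Perl, and the function name
here is made up):

    # Sketch only: candidates with the same partial-file md5sum are compared
    # byte by byte, and a truncated read of the pool copy makes identical
    # files look different, so an extra chain entry gets created for a file
    # that is already pooled.
    def same_contents(path_a, path_b, bufsize=1 << 20):
        with open(path_a, "rb") as a, open(path_b, "rb") as b:
            while True:
                chunk_a, chunk_b = a.read(bufsize), b.read(bufsize)
                if chunk_a != chunk_b:   # a short/failed read lands here
                    return False         # caller then adds a duplicate to the chain
                if not chunk_a:          # both at EOF together: files really match
                    return True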

The read errors were in the RStmp file, which is supposed to be the
uncompressed copy of a large compressed file so rsync can seek around
looking for a match.  I wonder if there could be a file (huge
database, mailbox, etc.) that compresses so well that, even with the
safety factor of backups not starting when the disk is over 95% full,
the uncompressed copy won't fit.  Or maybe it's a sparse dbm-type file
where the original doesn't actually allocate the space its length
would indicate.
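
For what it's worth, here's a rough sketch in Python (again, not
BackupPC's code; the paths are made up) of the two things I'd check:
whether the file's apparent length is bigger than what it actually has
allocated, and whether an uncompressed copy of that length would even
fit on the filesystem where the RStmp file gets written:

    # Rough sketch, not BackupPC code.  st_blocks is in 512-byte units on
    # Linux, so apparent > allocated means the file is sparse and would
    # balloon when copied out uncompressed.  Paths below are hypothetical.
    import os

    def apparent_and_allocated(path):
        st = os.stat(path)
        return st.st_size, st.st_blocks * 512

    def uncompressed_copy_fits(path, tmp_dir):
        apparent, _ = apparent_and_allocated(path)
        vfs = os.statvfs(tmp_dir)
        return apparent <= vfs.f_bavail * vfs.f_frsize  # free space left to us

    src = "/var/lib/mysql/huge.ibd"            # hypothetical big/sparse file
    tmp = "/var/lib/backuppc/pc/somehost/new"  # hypothetical RStmp location
    apparent, allocated = apparent_and_allocated(src)
    print("apparent:", apparent, "allocated:", allocated,
          "sparse:", apparent > allocated)
    print("uncompressed copy fits:", uncompressed_copy_fits(src, tmp))

A sparse dbm file would show apparent much larger than allocated, which
would explain the space disappearing when the uncompressed copy is made.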

-- 
   Les Mikesell
      lesmikes...@gmail.com
