Holger Parplies wrote at about 17:54:05 +0200 on Thursday, October 6, 2011:
 > Hi,
 >
 > Tim Fletcher wrote on 2011-10-06 10:17:03 +0100 [Re: [BackupPC-users] Bad
 > md5sums due to zero size (uncompressed) cpool files - WEIRD BUG]:
 > > On Wed, 2011-10-05 at 21:35 -0400, Jeffrey J. Kosowsky wrote:
 > > > Finally, remember it's possible that many people are having this
 > > > problem but just don't know it,
 >
 > perfectly possible. I was just saying what possible cause came to my mind (and
 > many people *could* be running with an almost full disk). As you (Jeffrey)
 > said, the fact that the errors appeared only within a small time frame may or
 > may not be significant. I guess I don't need to ask whether you are *sure*
 > that the disk wasn't almost full back then.
Disk was *less* full then...

 > To be honest, I would *hope* that only you had these issues and everyone
 > else's backups are fine, i.e. that your hardware and not the BackupPC
 > software was the trigger (though it would probably need some sort of
 > software bug to come up with the exact symptoms).
 >
 > > > since the only way one would know would be if one actually computed the
 > > > partial file md5sums of all the pool files and/or restored & tested one's
 > > > backups.
 >
 > Almost.
 >
 > > > Since the error affects only 71 out of 1.1 million files it's possible
 > > > that no one has ever noticed...
 >
 > Well, let's think about that for a moment. We *have* had multiple issues that
 > *sounded* like corrupt attrib files. What would happen, if you had an attrib
 > file that decompresses to "" in the reference backup?
 >
 > > > It would be interesting if other people would run a test on their
 > > > pools to see if they have similar such issues (remember I only tested
 > > > my pool in response to the recent thread of the guy who was having
 > > > issues with his pool)...
 > >
 > > Do you have a script or series of commands to do this check with?
 >
 > Actually, what I would propose in response to what you have found would be to
 > test for pool files that decompress to zero length. That should be
 > computationally less expensive than computing hashes - in particular, you can
 > stop decompressing once you have decompressed any content at all.

Actually, this could be made even faster, since there seem to be 2 cases:
1. Files of length 8 bytes with first byte = 0x78 [no rsync checksums]
2. Files of length 57 bytes with first byte = 0xd7 [rsync checksums]

So, all you need to do is stat the size and then test the first byte.
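For anyone who wants to try this on their own pool, here is a rough sketch of
what I mean (Python; the default cpool path below is just an example, point it
at your own $TopDir/cpool, and it only flags the two size/first-byte patterns
I happened to see, so treat hits as candidates to verify, not proof of
corruption):

    #!/usr/bin/env python
    # Rough sketch: walk the compressed pool and print files matching the
    # two suspect patterns described above. The default pool path is an
    # assumption -- adjust it to your own $TopDir/cpool.
    import os
    import sys

    POOL = sys.argv[1] if len(sys.argv) > 1 else "/var/lib/backuppc/cpool"

    # (size in bytes, first byte) signatures of the suspect files:
    #   ( 8, 0x78) -> decompresses to zero length, no cached rsync checksums
    #   (57, 0xd7) -> decompresses to zero length, rsync checksums appended
    SUSPECT = {(8, 0x78), (57, 0xd7)}

    count = 0
    for dirpath, dirnames, filenames in os.walk(POOL):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)    # cheap stat() test first
                if size not in (8, 57):
                    continue
                with open(path, "rb") as f:
                    first = f.read(1)           # then check the first byte
                if first and (size, ord(first)) in SUSPECT:
                    print(path)
                    count += 1
            except OSError:
                pass                            # vanished/unreadable; skip

    sys.stderr.write("suspect files found: %d\n" % count)

Anything it reports should still be checked, e.g. with BackupPC_zcat or the
decompress-to-zero-length test Holger suggests, before concluding that the
pool file really is bad.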