> On Mar 26, 2020, at 9:34 AM, Stephen Frost <sfr...@snowman.net> wrote:
>
> I'm not actually arguing about which hash functions we should support,
> but rather what the default is and if crc32c, specifically, is actually
> a reasonable choice. Just because it's fast and we already had an
> implementation of it doesn't justify its use as the default. Given that
> it doesn't actually provide the check that is generally expected of
> CRC checksums (100% detection of single-bit errors) when the file size
> gets over 512MB makes me wonder if we should have it at all, yes, but it
> definitely makes me think it shouldn't be our default.
I don't understand your focus on the single-bit error issue. If you are
sending your backup across the wire, single-bit errors during transmission
should already be detected as part of the networking protocol. The real issue
has to be detection of the kinds of errors or modifications that are most
likely to happen in practice. Which are those? People manually mucking with
the files? Bugs in backup scripts? Corruption on the storage device?
Truncated files? The more bits in the checksum (assuming a well-designed
checksum algorithm), the more likely we are to detect an accidental modification,
so it is no surprise that a 64-bit CRC does better than a 32-bit CRC. But that
logic can be taken arbitrarily far. I don't see the connection between, on the
one hand, an analysis of single-bit error detection as a function of file size,
and on the other hand, the verification of backups.
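
To make that concrete, here is a rough back-of-the-envelope sketch (plain
Python, nothing from the patch or the PostgreSQL tree). Under the usual
simplifying assumption that an accidental modification leaves behind an
effectively random checksum value, the chance of corruption slipping past an
n-bit checksum is about 2^-n:

    # Hypothetical illustration only: probability that a random corruption
    # still matches the stored checksum, assuming checksum values are
    # uniformly distributed over n bits.
    for name, bits in [("CRC-32C", 32), ("CRC-64", 64), ("SHA-256", 256)]:
        p = 2.0 ** -bits
        print(f"{name}: ~{p:.2e} chance of an undetected modification")

That is roughly where the one-in-a-billion figure below comes from (2^-32 is
closer to one in 4.3 billion). The deterministic guarantees a CRC makes about
particular low-weight error patterns are a separate matter, tied to the choice
of polynomial and the message length.
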
From a support perspective, I think the much more important issue is making
certain that checksums are turned on. A one-in-a-billion chance of missing an
error seems pretty acceptable compared to the, let's say, one-in-two chance
that your customer didn't use checksums. Why are we even allowing this to be
turned off? Is there a compelling use case for that option?
—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company