Brian Hurt wrote:
> Brian Hurt wrote:
>> Paul Schlie wrote:
>>> 
>>> ... if that doesn't fix
>>> the problem, assume a single bit error, and iteratively flip
>>> single bits until the check sum matches ...
>> This can actually be done much faster, if you're doing a CRC checksum
>> (aka modulo over GF(2^n)). Basically, an error flipping bit n will
>> always create the same xor between the computed CRC and the stored
>> CRC. So you can just store a table: for each n, an error in bit n will
>> create an xor of this value. Sort the table in order of xor values,
>> and then you can do a binary search on the table and get exactly which
>> bit was wrong.
>> 
>> This is actually probably fairly safe: for an 8K page, there are only
>> 65536 possible bit positions. Assuming a 32-bit CRC, that means that
>> larger corruptions are much more likely to hit one of the other
>> 4,294,901,760 (2^32 - 2^16) CRC values- 99.998% likely, in fact.
>> 
> 
> Actually, I think I'm going to take this back. Thinking about it, the
> table is going to be large-ish (~512K) and it assumes a fixed 8K page
> size. I think a better solution would be a tight loop, something like:
> r = 1u;
> for (i = 0; i < max_bits_per_page; ++i) {
>     if (r == xor_difference) {
>         return i;
>     } else if ((r & 1u) == 1u) {
>         r = (r >> 1) ^ CRC_POLY;
>     } else {
>         r >>= 1;
>     }
> }

- or use a hash table indexed by the xor difference between the computed
and stored CRC (the stored CRC being kept separately, CRC'd itself, and
possibly stored redundantly), which is what I thought you meant; either
way, correction performance isn't likely to matter much, as its use
should hopefully be rare.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers