Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Paul Schlie sch...@comcast.net wrote:
Sorry if I'm restating the obvious, but I don't understand the
confusion; the standard's definition isn't mysterious: it simply
requires that the resulting state from the concurrent execution
of transactions (and implicitly any subset) designated to occur at the
isolation level ...
Heikki Linnakangas wrote:
Gregory Stark wrote:
However you still have a problem that someone could come along and set the
hint bit between calculating the CRC and actually calling write.
The double-buffering will solve that.
Or simply require that hint bit writes acquire a write lock on the ...
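Either fix — double-buffering the page before checksumming, or briefly locking out hint-bit writers while the copy is taken — can be modeled with a short sketch. This is a hypothetical Python illustration (write_page, the lock discipline, and zlib.crc32 are stand-ins, not PostgreSQL's actual code):

```python
import threading
import zlib

def write_page(shared_page: bytearray, lock: threading.Lock):
    # Take a private snapshot of the shared buffer under a brief lock;
    # hint-bit setters may flip bits in shared_page at any other time.
    with lock:
        snapshot = bytes(shared_page)
    # The CRC is computed over the snapshot, so the checksum and the
    # bytes handed to the actual write() are consistent by construction.
    crc = zlib.crc32(snapshot)
    return snapshot, crc
```

Here the snapshot plays the role of the second buffer: a hint bit set after the copy dirties the shared page again (so it will be rewritten later), but it can no longer invalidate the CRC of the bytes already written.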
Alvaro Herrera wrote:
Hmm, oh I see another problem here -- the bit is not restored when
replaying heap_update's WAL record. I'm now wondering what other bits
are set without much care about correctly restoring them during replay.
Alvaro Herrera wrote:
So this discussion died with no solution arising to the
hint-bit-setting-invalidates-the-CRC problem.
Is there no point at which a page is logically committed to
storage, past which no mutating access may be performed?
Joshua D. Drake wrote:
...
ZFS is not an option, generally speaking.
Then in general, if the corruption occurred within the:
- read path: try again and hope it takes care of itself.
- write path: the best that can be hoped for is a single bit error
  within the data itself, which can be both ...
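That read-path/write-path split suggests a simple recovery policy; a hedged sketch (read_page_with_retry is a hypothetical helper, and zlib.crc32 stands in for the page checksum):

```python
import zlib

def read_page_with_retry(read_fn, expected_crc, retries=3):
    # Re-read a page on checksum mismatch, hoping transient read-path
    # corruption "takes care of itself"; give up after a few attempts.
    for _ in range(retries):
        page = read_fn()
        if zlib.crc32(page) == expected_crc:
            return page
    return None  # persistent mismatch: likely corrupted on the write path
```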
Brian Hurt wrote:
Paul Schlie wrote:
... if that doesn't fix
the problem, assume a single bit error, and iteratively flip
single bits until the checksum matches ...
This can actually be done much faster if you're doing a CRC checksum
(i.e., modulo over GF(2^n)). Basically, an error ...
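Because a CRC is linear over GF(2), the XOR of the stored and recomputed checksums (the syndrome) depends only on the error pattern, so a single flipped bit can be located with one table lookup instead of re-checksumming the page once per candidate bit. A hypothetical sketch using Python's zlib.crc32 (not code from the thread):

```python
import zlib

def build_syndrome_table(n_bytes):
    # Map the CRC syndrome of each possible single-bit error to its bit
    # position.  By linearity, crc(a) ^ crc(b) for equal-length inputs
    # depends only on a ^ b, so an all-zero message suffices to derive it.
    base = zlib.crc32(bytes(n_bytes))
    table = {}
    for i in range(n_bytes * 8):
        e = bytearray(n_bytes)
        e[i // 8] |= 0x80 >> (i % 8)
        table[zlib.crc32(bytes(e)) ^ base] = i
    return table

def correct_single_bit(data, stored_crc, table):
    # Return a repaired copy of data, or None if the damage is not a
    # single flipped bit.
    syndrome = zlib.crc32(bytes(data)) ^ stored_crc
    if syndrome == 0:
        return bytes(data)
    i = table.get(syndrome)
    if i is None:
        return None
    fixed = bytearray(data)
    fixed[i // 8] ^= 0x80 >> (i % 8)
    return bytes(fixed)
```

Building the table costs one CRC per bit position, but only once per page size; each correction is then a single hash lookup, versus one full re-checksum per flipped bit in the naive loop.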
Jonah H. Harris wrote:
Tom Lane wrote:
Harald Armin Massa writes:
WHAT should happen when corrupted data is detected?
Same thing that happens now, i.e., the query fails with an error. This
would just be an extension of the existing validity checks done at page
read time.
Agreed.
- however it ...
Tom Lane wrote:
Paul Schlie writes:
- yes, if you're willing to compute true CRCs as opposed to simpler
checksums, which may be worth the price if in fact many/most data
check failures are truly caused by single bit errors somewhere in the
chain,
FWIW, not one of the corrupted-data ...
Kevin Grittner wrote:
Tom Lane [EMAIL PROTECTED] wrote:
Paul Schlie [EMAIL PROTECTED] writes:
If you are concerned with data integrity (not caused by bugs in the code
itself), you may be interested in utilizing ZFS; however, be aware that I
found and reported a bug in their implementation of the Fletcher checksum
algorithm they use by default to attempt to verify the integrity of the data ...