On 30 Sep 2008, at 10:17 PM, Decibel! <[EMAIL PROTECTED]> wrote:

On Sep 30, 2008, at 1:48 PM, Heikki Linnakangas wrote:
This has been suggested before, and the usual objection is precisely that it only protects from errors in the storage layer, giving a false sense of security.

If you can come up with a mechanism for detecting non-storage errors as well, I'm all ears. :)

In the meantime, you're way, way more likely to experience corruption at the storage layer than anywhere else.

FWIW, this hasn't been my experience. Bad memory is extremely common, and even the storage failures I've seen (excluding outright drive crashes) turned out to be caused by bad memory.

That said, I've always been interested in doing this. The main use case in my mind has actually been data restored from old backups that have been lying around and floating between machines for a while, with many opportunities for bit errors to creep in.
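The kind of per-page checksum being discussed can be sketched roughly as follows. This is only an illustration, not PostgreSQL's actual page format: the page size, the choice of CRC-32, and storing the checksum in the first four bytes of the page are all assumptions made up for the example. The one essential trick is real, though: the checksum field itself must be zeroed before computing the checksum, so the stored value doesn't feed into its own calculation.

```python
import zlib

PAGE_SIZE = 8192       # illustrative page size
CHECKSUM_OFFSET = 0    # illustrative: checksum lives in the first 4 bytes
CHECKSUM_LEN = 4

def compute_checksum(page: bytes) -> int:
    # Checksum the page with the checksum field itself zeroed out,
    # so writing the checksum doesn't invalidate it.
    body = (page[:CHECKSUM_OFFSET]
            + b"\x00" * CHECKSUM_LEN
            + page[CHECKSUM_OFFSET + CHECKSUM_LEN:])
    return zlib.crc32(body) & 0xFFFFFFFF

def write_checksum(page: bytearray) -> None:
    # Stamp the page with its checksum just before it goes to disk.
    value = compute_checksum(bytes(page))
    page[CHECKSUM_OFFSET:CHECKSUM_OFFSET + CHECKSUM_LEN] = \
        value.to_bytes(CHECKSUM_LEN, "little")

def verify_checksum(page: bytes) -> bool:
    # On read-in, recompute and compare; any single flipped bit
    # anywhere on the page makes this fail.
    stored = int.from_bytes(
        page[CHECKSUM_OFFSET:CHECKSUM_OFFSET + CHECKSUM_LEN], "little")
    return stored == compute_checksum(page)
```

Note this only catches corruption that happens after the checksum is stamped, between the write path and the read path, which is exactly why it protects the storage layer but not against bad memory corrupting the page before the checksum is computed.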


The main stumbling block I ran into was how to deal with turning the option off and on. I wanted it to be possible to turn the option off so the database would ignore any checksum errors and avoid the verification overhead.

But that means reserving an escape-hatch value which is always considered correct, and that dramatically reduces the effectiveness of the scheme.
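The escape-hatch problem can be made concrete with a small sketch, again with invented details (the sentinel value and CRC-32 are assumptions for illustration). Pages written while the feature was off carry the sentinel and must be accepted unconditionally, so corruption on those pages goes undetected, and corruption that happens to produce the sentinel in the checksum field is silently accepted too.

```python
import zlib

SENTINEL = 0xFFFFFFFF  # illustrative "no checksum stored" marker

def checksum(body: bytes) -> int:
    c = zlib.crc32(body) & 0xFFFFFFFF
    # Remap so a genuine checksum can never collide with the sentinel.
    return c if c != SENTINEL else SENTINEL - 1

def verify(body: bytes, stored: int) -> bool:
    if stored == SENTINEL:
        # Escape hatch: page was (presumably) written while checksums
        # were off, so we must accept it -- even if it is corrupt.
        return True
    return stored == checksum(body)
```

The weakness is visible in the second branch of `verify`: every page still bearing the sentinel is a blind spot, and until every page has been rewritten with a real checksum after the feature is enabled, the guarantee is only partial.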

Another issue is that it shrinks the space available on each page, making in-place upgrades harder.


If you can deal with those issues, and carefully work through the contingencies so it's clear to people what to do when errors occur or when they want to turn the feature on or off, then I'm all for it. That's despite my experience that memory errors are a lot more common than undetected storage errors.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers