Christopher Quinn <[EMAIL PROTECTED]> writes:
> I've been wondering how pgsql goes about guaranteeing data 
> integrity in the face of soft failures. In particular 
> whether it uses an alternative to the double root block 
> technique - that is, writing meta information (including 
> the location of the last log record written) to alternate 
> disk blocks at fixed locations, as the final indication 
> that new log records are valid.
> This is the only technique I know of - does pgsql use 
> something analogous?

The WAL log uses per-record CRCs plus sequence numbers (both per-record
and per-page) as a way of determining where valid information stops.
I don't see any need for relying on a "root block" in the sense you
describe.
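
To make the idea concrete, here is a minimal sketch (not PostgreSQL's
actual xlog code; the record layout, field names, and CRC helper are
assumptions for illustration) of how per-record CRCs plus sequence
numbers let a reader decide where valid log data stops: replay walks
forward and halts at the first record whose CRC or expected sequence
number does not check out.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef struct
{
    uint64_t    seqno;      /* monotonically increasing record number */
    uint32_t    len;        /* length of payload following the header */
    uint32_t    crc;        /* CRC over seqno, len, and payload */
} WalRecHdr;

/* Plain reflected CRC-32 (poly 0xEDB88320); illustrative only,
 * PostgreSQL uses its own CRC implementation. */
uint32_t
crc32_buf(const void *buf, size_t len, uint32_t crc)
{
    const unsigned char *p = buf;

    crc = ~crc;
    while (len--)
    {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0u);
    }
    return ~crc;
}

/*
 * Scan a buffer of consecutive records; return the byte offset at which
 * valid WAL ends, i.e. the point up to which replay may proceed.
 */
size_t
find_end_of_valid_wal(const unsigned char *buf, size_t buflen,
                      uint64_t expected_seqno)
{
    size_t      off = 0;

    while (off + sizeof(WalRecHdr) <= buflen)
    {
        WalRecHdr   hdr;
        uint32_t    crc;

        memcpy(&hdr, buf + off, sizeof(hdr));

        if (off + sizeof(hdr) + hdr.len > buflen)
            break;              /* record runs off the end of the buffer */
        if (hdr.seqno != expected_seqno)
            break;              /* stale data left over from an older cycle */

        crc = crc32_buf(&hdr.seqno, sizeof(hdr.seqno), 0);
        crc = crc32_buf(&hdr.len, sizeof(hdr.len), crc);
        crc = crc32_buf(buf + off + sizeof(hdr), hdr.len, crc);
        if (crc != hdr.crc)
            break;              /* torn or corrupt record: valid WAL stops here */

        off += sizeof(hdr) + hdr.len;
        expected_seqno++;
    }
    return off;
}

The sequence-number check is what keeps the scanner from mistaking
leftover bytes from a previous pass over the same disk blocks for fresh
records, so no separately written "root block" pointer is needed.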

> Lastly, is there any form of integrity checking on disk 
> block level data? I have vague recollections of seeing 
> mention of crc/xor in relation to Oracle or DB2.

At present we rely on the disk drive not to drop data once it's been
successfully fsync'd (or at least to report a read error later if it does).
There was some discussion of adding per-page CRCs as a second-layer
check, but no one seems very excited about it.  The performance costs
would be nontrivial and we have not seen all that many reports of field
failures in which a CRC would have improved matters.
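
For reference, the proposal amounts to something like the sketch below
(this is only an illustration of the idea, not code that exists in
PostgreSQL; the page layout, field offset, and helper names are
assumptions): a CRC stored in the page header, computed with that field
treated as zero, set once per write and verified on every read.

#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE       8192
#define PAGE_CRC_OFFSET 0       /* assume the CRC occupies the first 4 bytes */

/* Same illustrative CRC-32 helper as above, single-shot form. */
uint32_t
crc32_page(const void *buf, size_t len)
{
    const unsigned char *p = buf;
    uint32_t    crc = 0xFFFFFFFFu;

    while (len--)
    {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0u);
    }
    return ~crc;
}

uint32_t
page_compute_crc(const unsigned char *page)
{
    unsigned char copy[PAGE_SIZE];

    memcpy(copy, page, PAGE_SIZE);
    memset(copy + PAGE_CRC_OFFSET, 0, sizeof(uint32_t));    /* exclude stored CRC */
    return crc32_page(copy, PAGE_SIZE);
}

/* Called just before the page is written out. */
void
page_set_crc(unsigned char *page)
{
    uint32_t    crc = page_compute_crc(page);

    memcpy(page + PAGE_CRC_OFFSET, &crc, sizeof(crc));
}

/* Called after the page is read back; false means the drive handed us bad data. */
bool
page_verify_crc(const unsigned char *page)
{
    uint32_t    stored;

    memcpy(&stored, page + PAGE_CRC_OFFSET, sizeof(stored));
    return stored == page_compute_crc(page);
}

The nontrivial cost mentioned above is visible here: every write and
every read-back verification has to run a CRC over the full 8K page.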

                        regards, tom lane
