On 11/12/12 12:55 AM, Jesper Krogh wrote:
> I'd just like some rough guard against hardware/OS related data
> corruption, and that is more likely to hit data-blocks constantly
> flying in and out of the system.

I get that. I think that some of the design ideas floating around since this feature was first proposed have been innovating in the hope of finding a clever halfway point here. Ideally we'd be able to get online checksum conversion up and running easily, reliably, and without adding a lot of code. I have given up on that now though.

The approach of doing a heavy per-table conversion, with more state information than we'd like, seems unavoidable if you want to do it right and allow people to (slowly but surely) reach a trustworthy state. I think we should stop searching for a clever way around it and just slog through doing it. I've resigned myself to that now, and recently set aside a good block of time to beat my head against that particular wall over the next couple of months.
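To make the idea concrete, here is a rough sketch (plain Python, not PostgreSQL code; all names here are hypothetical) of the kind of per-relation state machine I mean: each table moves from unchecksummed through converting to verified, progress is tracked per block so an interrupted conversion can resume, and work can be handed out in small budgets the way an autovacuum-style worker would.

```python
class RelationState:
    """Hypothetical per-table conversion state: unchecksummed -> converting -> verified."""

    def __init__(self, name, nblocks):
        self.name = name
        self.nblocks = nblocks          # total blocks in the relation
        self.state = "unchecksummed"
        self.next_block = 0             # resume point if the worker is interrupted

    def convert_some(self, budget):
        """Checksum-rewrite up to `budget` blocks, then stop.

        Returning control after a small budget lets a background worker
        spread the I/O cost over many passes instead of one long scan."""
        if self.state == "verified":
            return 0
        self.state = "converting"
        done = 0
        while done < budget and self.next_block < self.nblocks:
            # A real implementation would read, checksum, and rewrite one
            # block here, under proper locking and WAL-logging.
            self.next_block += 1
            done += 1
        if self.next_block == self.nblocks:
            self.state = "verified"
        return done


def cluster_checksummed(relations):
    """The cluster state is trustworthy only once every relation is verified."""
    return all(r.state == "verified" for r in relations)
```

The point of the sketch is only the bookkeeping: more persistent state than we'd like (per-table status plus a resume point), but it is exactly that state that lets the conversion run slowly and still end in a provably complete result.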

> But I totally agree that the scheme described, integrating it into an
> autovacuum process, would be very close to ideal, even on a database
> like the one I'm running.

I am sadly all too familiar with how challenging it is to keep a 2TB PostgreSQL database running reliably. One of my recent catch phrases for talks is "if you have a big Postgres database, you also have a vacuum problem". I think it's unreasonable to consider online conversion solutions that don't recognize that and allow coordinating the conversion work with the vacuuming challenges of larger systems too.

--
Greg Smith   2ndQuadrant US    g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com

