On 1 May 2017 at 19:24, Andres Freund <and...@anarazel.de> wrote:
>> There is no inherent reason why the CREATE INDEX CONCURRENTLY style of
>> using multiple transactions makes it necessary to leave a mess behind
>> in the event of an error or hard crash. Is someone going to get around
>> to fixing the problem for CREATE INDEX CONCURRENTLY (e.g., having
>> extra steps to drop the useless index during recovery)? IIRC, this was
>> always the plan.
>
> Doing catalog changes in recovery is fraught with problems. It
> essentially requires starting one worker per database before allowing
> access.

The "plan" was to add more layers PG_TRY and transactions so that if
there was an error during building the index all the remnants of the
failed index build got cleaned up. But when I went tried to actually
do it the problem seemed to metastatize and it was going to require
two or three layers of messy nested PG_TRY and extra transactions.
Perhaps there's a cleaner way to structure it and I should look again.
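To give a feel for the shape of it, here's a rough sketch of what a
single layer looks like. This is illustrative only: drop_failed_index
is a made-up placeholder for whatever cleanup we'd actually need, and
real code would have to worry about lock and memory-context details
I'm glossing over here.

    #include "postgres.h"
    #include "access/xact.h"
    #include "catalog/index.h"
    #include "utils/memutils.h"

    /* Hypothetical cleanup helper; no such function exists today. */
    extern void drop_failed_index(Oid indexOid);

    /*
     * Sketch of one layer of error cleanup around one phase of the
     * concurrent build.
     */
    static void
    build_phase_with_cleanup(Relation heapRel, Relation indexRel,
                             IndexInfo *indexInfo, Oid indexOid)
    {
        MemoryContext oldcontext = CurrentMemoryContext;

        PG_TRY();
        {
            /* the actual build work for this phase */
            index_build(heapRel, indexRel, indexInfo, false, false);
        }
        PG_CATCH();
        {
            /* error handling leaves us in ErrorContext */
            MemoryContextSwitchTo(oldcontext);

            /*
             * The failed transaction can't be used for further
             * catalog changes, so abort it and drop the half-built
             * index in a fresh transaction before re-throwing the
             * original error.
             */
            AbortCurrentTransaction();
            StartTransactionCommand();
            drop_failed_index(indexOid);
            CommitTransactionCommand();
            PG_RE_THROW();
        }
        PG_END_TRY();
    }

And since CREATE INDEX CONCURRENTLY commits and starts new
transactions several times, you end up needing a variant of this
around each phase, which is the nesting mess I ran into.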

I don't recall ever having a plan to do anything in recovery. I think
we did talk about why it was hard to mark hash indexes invalid during
recovery, which was probably the same problem.

-- 
greg

