On Thu, Mar 16, 2017 at 10:34 PM, Craig Ringer <cr...@2ndquadrant.com> wrote:
> On 17 March 2017 at 08:10, Stas Kelvich <s.kelv...@postgrespro.ru> wrote:
>> While working on this I've spotted quite a nasty corner case with an
>> aborted prepared transaction. I have some not-that-great ideas about how
>> to fix it, but maybe I've blurred my view and missed something, so I want
>> to ask here first.
>> Suppose we create a table, then alter it in a 2PC transaction, and then
>> abort that transaction. pg_class will then contain something like this:
>> xmin | xmax | relname
>> 100  | 200  | mytable
>> 200  | 0    | mytable
>> After that abort, the tuple (100, 200, mytable) becomes visible again, and
>> if we alter the table once more, the xmax of the first tuple will be set
>> to the current xid, resulting in the following:
>> xmin | xmax | relname
>> 100  | 300  | mytable
>> 200  | 0    | mytable
>> 300  | 0    | mytable
>> At that moment we've lost the information that the first tuple was
>> deleted by our prepared transaction.
> Right. And while the prepared xact has aborted, we don't control when
> it aborts and when those overwrites can start happening. We can and
> should check if a 2pc xact is aborted before we start decoding it so
> we can skip decoding it if it's already aborted, but it could be
> aborted *while* we're decoding it, then have data needed for its
> snapshot clobbered.
> This hasn't mattered in the past because prepared xacts (and
> especially aborted 2pc xacts) have never needed snapshots, we've never
> needed to do something from the perspective of a prepared xact.
> I think we'll probably need to lock the 2PC xact so it cannot be
> aborted or committed while we're decoding it, until we finish decoding
> it. So we lock it, then check if it's already aborted/already
> committed/in progress. If it's aborted, treat it like any normal
> aborted xact. If it's committed, treat it like any normal committed
> xact. If it's in progress, keep the lock and decode it.
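The lock-then-check flow proposed above could be sketched like this. This is a toy Python model only: TwoPhaseTxn, decode_prepared, and finish are hypothetical names, and a real implementation would use PostgreSQL's internal locking machinery, not a threading.Lock:

```python
import threading

class TwoPhaseTxn:
    """Hypothetical stand-in for a prepared transaction's shared state."""
    def __init__(self, xid):
        self.xid = xid
        self.state = "in_progress"    # or "committed" / "aborted"
        self.lock = threading.Lock()  # stands in for blocking COMMIT/ROLLBACK PREPARED

def decode_prepared(txn, decode_changes):
    with txn.lock:                    # abort/commit must wait until we release
        if txn.state == "aborted":
            return "skipped_aborted"          # treat like any aborted xact
        if txn.state == "committed":
            return "decoded_as_committed"     # treat like any committed xact
        # Still in progress: decode while holding the lock, so the catalog
        # rows it touched cannot be clobbered underneath us.
        decode_changes(txn)
        return "decoded_at_prepare"

def finish(txn, outcome):
    with txn.lock:                    # blocks if decoding is in flight
        txn.state = outcome

# Usage: decode while in progress, then observe the post-abort path.
txn = TwoPhaseTxn(200)
assert decode_prepared(txn, lambda t: None) == "decoded_at_prepare"
finish(txn, "aborted")
assert decode_prepared(txn, lambda t: None) == "skipped_aborted"
```

The sketch also makes the cost visible: finish() blocks for as long as decode_changes runs, which is exactly the unbounded hold time objected to below.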

But that lock could need to be held for an unbounded period of time -
as long as decoding takes to complete - which seems pretty
undesirable.  Worse still, the same problem will arise if you
eventually want to start decoding ordinary, non-2PC transactions that
haven't committed yet, which I think is something we definitely want
to do eventually; the current handling of bulk loads or bulk updates
leads to significant latency.  You're not going to be able to tell an
active transaction that it isn't allowed to abort until you get done
with it, and I don't really think you should be allowed to lock out
2PC aborts for long periods of time either.  That's going to stink for users.

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)