On Mon, Jul 31, 2017 at 1:27 PM, Alvaro Herrera
<alvhe...@2ndquadrant.com> wrote:
> Postgres-XL seems to manage this problem by using a transaction manager
> node, which is in charge of assigning snapshots.  I don't know how that
> works, but perhaps adding that concept here could be useful too.  One
> critical point to that design is that the app connects not directly to
> the underlying Postgres server but instead to some other node which is
> or connects to the node that manages the snapshots.
>
> Maybe Michael can explain in better detail how it works, and/or how (and
> if) it could be applied here.

I suspect that if you've got a central coordinator server that is the
jumping-off point for all distributed transactions, the Postgres-XL
approach is hard to beat (at least in concept, not sure about the
implementation).  That server is brokering all of the connections to
the data nodes anyway, so it might as well tell them all what
snapshots to use while it's there.  When you scale to multiple
coordinators, though, it's less clear that it's the best approach.
Now one coordinator has to be the GTM master, and that server is
likely to become a bottleneck -- plus talking to it involves extra
network hops for all the other coordinators. When you then move the
needle a bit further and imagine a system where the idea of a
coordinator doesn't even exist, and you've just got a loosely coupled
distributed system where distributed transactions might arrive on any
node, all of which are also servicing local transactions, then it
seems pretty likely that the Postgres-XL approach is not the best fit.
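
To make that bottleneck concrete, here is a toy model of the GTM idea
(Python, all names invented; a sketch of the concept, not of how
Postgres-XL actually implements it):

    import threading

    class ToyGTM:
        # A single point that hands out XIDs and snapshots.  Every
        # distributed transaction on every coordinator must round-trip
        # here to begin and to end, which is the bottleneck in question.
        def __init__(self):
            self._lock = threading.Lock()
            self._next_xid = 1
            self._running = set()

        def begin(self):
            with self._lock:
                xid = self._next_xid
                self._next_xid += 1
                self._running.add(xid)
                # A snapshot is just "which XIDs were in flight when I
                # started"; their effects are invisible to me.  This
                # loosely mirrors PostgreSQL's xmin/xmax/xip snapshots.
                return xid, frozenset(self._running - {xid})

        def end(self, xid):
            # Called on commit or abort.
            with self._lock:
                self._running.discard(xid)

A coordinator would call begin(), ship the (xid, snapshot) pair to
every data node the transaction touches, and call end() at commit
time.  Note that begin() and end() both take the same lock -- in the
real system, the same network round trip to the same box.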

We might want to support multiple models.  Which one to support first
is a harder question.  The thing I like least about the Postgres-XC
approach is that, as Michael says, the central server handing out
XIDs and snapshots seems bound to become a bottleneck.
That type of system implicitly constructs a total order of all
distributed transactions, but we don't really need a total order.  If
two transactions don't touch the same data and there's no overlapping
transaction that can notice the commit order, then we could make those
commit decisions independently on different nodes without caring which
one "happens first".  The problem is that it might take so much
bookkeeping to figure out whether that is in fact the case in a
particular instance that it's even more expensive than having a
central server that functions as a global bottleneck.
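
To illustrate the bookkeeping problem (a hand-wavy sketch with
invented names): proving that two commits can be ordered
independently means showing both that they don't touch common data
and that no overlapping transaction can observe both, which requires
knowing the footprint of everything running concurrently, across all
nodes:

    def commit_order_matters(t1_writes, t2_writes, concurrent_reads):
        # The relative commit order of two transactions is observable
        # if they write common data...
        if t1_writes & t2_writes:
            return True
        # ...or if some overlapping transaction reads data written by
        # both, and so could notice which one committed first.
        return any(r & t1_writes and r & t2_writes
                   for r in concurrent_reads)

    # Disjoint writes, no common observer: the two nodes may commit in
    # either order without consulting anyone.
    assert not commit_order_matters({"a"}, {"b"}, [{"a"}, {"b"}])
    # A single concurrent reader of both: now the order is visible.
    assert commit_order_matters({"a"}, {"b"}, [{"a", "b"}])

Gathering concurrent_reads is the expensive part: it's global,
constantly changing state, which is exactly what the central server
was giving us for free.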

It might be worth some study not only of Postgres-XL but also of other
databases that claim to provide distributed transactional consistency
across nodes.  I've found literature on this topic from time to time
over the years, but I'm not sure what the best practices in this area
actually are. https://en.wikipedia.org/wiki/Global_serializability
claims that a technique called Commitment Ordering (CO) is the best
thing ever, but I've got my doubts about whether that's really an
objective description of the state of the art.  One clue is that the
global serializability article says three separate times that the
technique has been widely misunderstood.  I'm not sure exactly which
Wikipedia guideline that violates, but I think Wikipedia is supposed
to summarize the views that exist on a topic in accordance with their
prevalence, not take a position on which view is correct.
https://en.wikipedia.org/wiki/Commitment_ordering cites papers by
only one author, Yoav Raz, which is another hint that the technique
may not be as widely regarded as the person who wrote these articles
thinks it should be.  Anyway, it would be good to
understand what other well-regarded systems do before we choose what
we want to do.
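
For what it's worth, the core CO rule as that article describes it is
simple enough to sketch (invented names again, and only my reading of
the article, so take it with salt): each node refuses to vote yes on
committing a transaction until every transaction with an earlier
conflicting operation has been decided, so the global commit order
cannot contradict the local conflict order:

    class CommitOrderer:
        def __init__(self):
            # txn -> set of txns whose conflicting operations came first
            self._preceders = {}

        def note_conflict(self, earlier, later):
            self._preceders.setdefault(later, set()).add(earlier)

        def may_vote_yes(self, txn):
            # CO rule: don't agree to commit txn while any transaction
            # preceding it in the conflict order is still undecided.
            return not self._preceders.get(txn)

        def decided(self, txn):
            # txn committed or aborted; unblock whoever was waiting.
            self._preceders.pop(txn, None)
            for deps in self._preceders.values():
                deps.discard(txn)

(The article also describes aborting transactions to break voting
deadlocks, which this sketch omits.)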

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

