Re: [HACKERS] Question concerning XTM (eXtensible Transaction Manager API)

2015-11-17 Thread Kevin Grittner
On Tuesday, November 17, 2015 12:43 AM, konstantin knizhnik wrote:
> On Nov 16, 2015, at 11:21 PM, Kevin Grittner wrote:

>> If you are saying that DTM tries to roll back a transaction after
>> any participating server has entered the RecordTransactionCommit()
>> critical section, then IMO it is broken.  Full stop.  That can't
>> work with any reasonable semantics as far as I can see.
>
> DTM is not trying to roll back a committed transaction.
> What it tries to do is to hide this commit.
> As I already wrote, the idea was to implement a "lightweight" 2PC
> because the prepared transactions mechanism in PostgreSQL adds too
> much overhead and causes some problems with recovery.

The point remains that there must be *some* "point of no return"
beyond which rollback (or "hiding") is not possible.  Until this
point, all heavyweight locks held by the transaction must be
maintained without interruption, data modifications of the
transaction must not be visible, and any attempt to update or
delete data updated or deleted by the transaction must block or
throw an error.  It sounds like you are attempting to move where
this "point of no return" falls, but it isn't as clear as I would
like.  It seems like all participating nodes are responsible for
notifying the arbiter that they have completed, and until then the
arbiter gets involved in every visibility check, overriding the
"normal" value?

> The transaction is normally committed in the xlog, so that it can
> always be recovered in case of a node fault.
> But before setting the corresponding bit(s) in the CLOG and
> releasing locks, we first contact the arbiter to get the global
> status of the transaction.
> If it has been successfully committed locally by all nodes, then
> the arbiter approves the commit and the commit of the transaction
> completes normally.
> Otherwise the arbiter rejects the commit. In this case DTM marks
> the transaction as aborted in the CLOG and returns an error to the
> client.
> The XLOG is not changed, and in case of failure PostgreSQL will
> try to replay this transaction.
> But during recovery it also tries to restore the transaction
> status in the CLOG.
> And at this point DTM contacts the arbiter to learn the status of
> the transaction.
> If it is marked as aborted in the arbiter's CLOG, then it will
> also be marked as aborted in the local CLOG.
> And according to the PostgreSQL visibility rules, no other
> transaction will see changes made by this transaction.

If a node goes through crash and recovery after it has written its
commit information to xlog, how are its heavyweight locks, etc.,
maintained throughout?  For example, does each arbiter node have
the complete set of heavyweight locks?  (Basically, all the
information which can be written to files in pg_twophase must be
held somewhere by all arbiter nodes, and used where appropriate.)

If a participating node is lost after some other nodes have told
the arbiter that they have committed, and the lost node will never
be able to indicate that it is committed or rolled back, what is
the mechanism for resolving that?

>>> We cannot just call elog(ERROR,...) in the SetTransactionStatus
>>> implementation, because inside a critical section it causes a
>>> Postgres crash with a panic message. So we have to remember that
>>> the transaction is rejected and report the error later, after
>>> exit from the critical section:
>>
>> I don't believe that is a good plan.  You should not enter the
>> critical section for recording that a commit is complete until all
>> the work for the commit is done except for telling all the
>> servers that all servers are ready.
>
> It is a good point.
> Maybe it is the reason for the performance scalability problems we
> have noticed with DTM.

Well, certainly the first phase of two-phase commit can take place
in parallel, and once that is complete then the second phase
(commit or rollback of all the participating prepared transactions)
can take place in parallel.  There is no need to serialize that.
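
To illustrate, here is a minimal coordinator sketch using libpq (the
connection setup, retry logic, and the "gtx-%ld" GID naming scheme are
assumptions made for this example, not pg_dtm code):

#include <stdbool.h>
#include <stdio.h>
#include <libpq-fe.h>

/*
 * Issue the same command on every node without waiting, then collect
 * the results; any failure fails the whole step.
 */
static bool
broadcast(PGconn **conns, int n, const char *sql)
{
    bool        ok = true;

    for (int i = 0; i < n; i++)
        if (!PQsendQuery(conns[i], sql))
            ok = false;

    for (int i = 0; i < n; i++)
    {
        PGresult   *res;

        while ((res = PQgetResult(conns[i])) != NULL)
        {
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                ok = false;
            PQclear(res);
        }
    }
    return ok;
}

/*
 * Two-phase commit of a global transaction whose work is already done
 * inside an open transaction on each connection.  Both phases run in
 * parallel across the nodes; the only synchronization point is the
 * phase boundary itself.
 */
static bool
commit_global(PGconn **conns, int n, long gtxid)
{
    char        sql[64];

    snprintf(sql, sizeof(sql), "PREPARE TRANSACTION 'gtx-%ld'", gtxid);
    if (!broadcast(conns, n, sql))
    {
        /* Best effort; a real DTM must retry until resolution. */
        snprintf(sql, sizeof(sql), "ROLLBACK PREPARED 'gtx-%ld'", gtxid);
        broadcast(conns, n, sql);
        return false;
    }
    snprintf(sql, sizeof(sql), "COMMIT PREPARED 'gtx-%ld'", gtxid);
    return broadcast(conns, n, sql);
}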

> Sorry, some clarification.
> We get a 10x performance slowdown caused by 2PC under very heavy
> load on the IBM system with 256 cores.
> On "normal" servers the slowdown from 2PC is smaller - about 2x.

That suggests some contention point, probably on spinlocks.  Were
you able to identify the particular hot spot(s)?


On Tuesday, November 17, 2015 3:09 AM, konstantin knizhnik wrote:
> On Nov 17, 2015, at 10:44 AM, Amit Kapila wrote:

>> I think the general idea is that if the commit is WAL-logged, then the
>> operation is considered committed on the local node, and the commit should
>> happen on any node only once the prepare from all nodes is successful.
>> And after that the transaction is not supposed to abort.  But I think you are
>> trying to optimize the DTM in some way to not follow that kind of protocol.
>
> DTM is still following the 2PC protocol:
> first the transaction is saved in the WAL at all nodes, and only
> after that is its commit completed at all nodes.

So, essentially you are treating the traditional commit point as
phase 1 in a new approach to two-phase 

Re: [HACKERS] Question concerning XTM (eXtensible Transaction Manager API)

2015-11-17 Thread Alvaro Herrera
konstantin knizhnik wrote:

> The transaction is normally committed in the xlog, so that it can always be
> recovered in case of a node fault.
> But before setting the corresponding bit(s) in the CLOG and releasing locks,
> we first contact the arbiter to get the global status of the transaction.
> If it has been successfully committed locally by all nodes, then the arbiter
> approves the commit and the commit of the transaction completes normally.
> Otherwise the arbiter rejects the commit. In this case DTM marks the
> transaction as aborted in the CLOG and returns an error to the client.
> The XLOG is not changed, and in case of failure PostgreSQL will try to replay
> this transaction.
> But during recovery it also tries to restore the transaction status in the CLOG.
> And at this point DTM contacts the arbiter to learn the status of the
> transaction.
> If it is marked as aborted in the arbiter's CLOG, then it will also be marked
> as aborted in the local CLOG.
> And according to the PostgreSQL visibility rules, no other transaction will
> see changes made by this transaction.

One problem I see with this approach is that the WAL replay can happen
long after it was written; for instance you might have saved a
basebackup and WAL stream and replay it all several days or weeks later,
when the arbiter no longer has information about the XID.  Later
transactions might (will) depend on the aborted state of the transaction
in question, so this effectively corrupts the database.

In other words, while it's reasonable to require that the arbiter can
always be contacted for transaction commit/abort at run time, it's
not reasonable to contact the arbiter during WAL replay.

I think this merits more explanation:

> The transaction is normally committed in the xlog, so that it can always be
> recovered in case of a node fault.

Why would anyone want to "recover" a transaction that was aborted?

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Question concerning XTM (eXtensible Transaction Manager API)

2015-11-17 Thread konstantin knizhnik

On Nov 17, 2015, at 10:44 AM, Amit Kapila wrote:

> 
> I think the general idea is that if the commit is WAL-logged, then the
> operation is considered committed on the local node, and the commit should
> happen on any node only once the prepare from all nodes is successful.
> And after that the transaction is not supposed to abort.  But I think you are
> trying to optimize the DTM in some way to not follow that kind of protocol.

DTM is still following the 2PC protocol:
first the transaction is saved in the WAL at all nodes, and only after that is
its commit completed at all nodes.
We try to avoid maintaining separate log files for 2PC (as is done now for
prepared transactions) and do not want to change the logic of working with
the WAL.

The DTM approach is based on the assumption that the PostgreSQL CLOG and
visibility rules allow us to "hide" a transaction even if it is committed in
the WAL.


> By the way, how will the arbiter do recovery in a scenario where it
> crashes? Won't it need to contact all nodes for the status of in-progress or
> prepared transactions?

The current answer is that the arbiter cannot crash. To provide fault
tolerance we spawn replicas of the arbiter, which are managed using the Raft
protocol.
If the master crashes or the network is partitioned, then a new master is
chosen.
PostgreSQL backends have a list of possible arbiter addresses. Once the
connection with the arbiter is broken, a backend tries to reestablish the
connection using the alternative addresses.
But only the master accepts incoming connections.
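
For illustration, the failover loop on the backend side might look like
this (dtm_connect() and the address list handling here are hypothetical
stand-ins for the actual pg_dtm socket code):

#include "postgres.h"
#include "miscadmin.h"

extern int  dtm_connect(const char *addr);      /* hypothetical helper */

/*
 * Cycle through the configured arbiter replicas until one accepts the
 * connection.  Followers refuse connections, so the first node that
 * accepts is the current Raft master.
 */
static int
ArbiterReconnect(const char **addrs, int naddrs)
{
    for (;;)
    {
        for (int i = 0; i < naddrs; i++)
        {
            int     sock = dtm_connect(addrs[i]);

            if (sock >= 0)
                return sock;
        }
        pg_usleep(100000);      /* all replicas unreachable; retry */
    }
}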


> I think it would be better if a more detailed design of DTM with respect to
> transaction management and recovery could be put up on the wiki for
> discussion on this topic.  I have seen that you have already updated many
> details of the system, but still the complete picture of DTM is not clear.

I agree.
But please notice that pg_dtm is just one of the possible implementations of
distributed transaction management.
We are also experimenting with other implementations, for example pg_tsdtm,
which is based on timestamps. It doesn't require a central arbiter and so
shows much better (almost linear) scalability.
But recovery in the case of pg_tsdtm is even more obscure.
Also, the performance of pg_tsdtm greatly depends on system clock
synchronization and network delays. We got about 70k TPS on a cluster of 12
nodes connected by a 10Gbit network.
But when we ran the same test on hosts located in different geographic
regions (several thousand km apart), performance fell to 15 TPS.
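
For what it's worth, the visibility rule such a timestamp-based manager
relies on fits in a few lines (purely illustrative; none of these names
are the pg_tsdtm API):

#include <stdbool.h>

typedef unsigned long long GlobalTimestamp;

/*
 * CSN-style visibility: a transaction is visible to a snapshot iff it
 * committed at or before the instant the snapshot was taken.  This is
 * why cross-node clock skew translates directly into waits or
 * visibility anomalies.
 */
static bool
TsVisibleInSnapshot(GlobalTimestamp commit_ts, GlobalTimestamp snapshot_ts)
{
    return commit_ts != 0 && commit_ts <= snapshot_ts;
}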
 


> 
> 
> 
> With Regards,
> Amit Kapila.
> EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] Question concerning XTM (eXtensible Transaction Manager API)

2015-11-16 Thread Alvaro Herrera
Konstantin Knizhnik wrote:

> But you may notice that the original TransactionIdSetTreeStatus function is
> void - it is not intended to return anything.
> It is called in RecordTransactionCommit, in a critical section where it is
> not expected that the commit may fail.
> But in the case of DTM, a transaction may be rejected by the arbiter. The
> XTM API allows us to control access to the CLOG, so everybody will see that
> the transaction is aborted.
> But we in any case have to somehow notify the client about the abort of the
> transaction.

I think you'll need to rethink rather completely how a transaction
commits, rather than considering localized tweaks to specific functions.
For one thing, the WAL record about the transaction commit has already been
written by XactLogCommitRecord, much earlier than the call to
TransactionIdCommitTree.  So if you were to crash at that point, it
wouldn't matter that the arbiter had rejected the transaction: WAL
replay would mark it as committed.  Also, what about the replication
origin stuff and the TransactionTreeSetCommitTsData() call?

I think you need to involve the arbiter earlier, so that the commit
process can be aborted before those things happen.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Question concerning XTM (eXtensible Transaction Manager API)

2015-11-16 Thread Konstantin Knizhnik

On 11/16/2015 10:54 PM, Alvaro Herrera wrote:

Konstantin Knizhnik wrote:


But you may notice that the original TransactionIdSetTreeStatus function is
void - it is not intended to return anything.
It is called in RecordTransactionCommit, in a critical section where it is
not expected that the commit may fail.
But in the case of DTM, a transaction may be rejected by the arbiter. The XTM
API allows us to control access to the CLOG, so everybody will see that the
transaction is aborted.
But we in any case have to somehow notify the client about the abort of the
transaction.

I think you'll need to rethink rather completely how a transaction
commits, rather than considering localized tweaks to specific functions.
For one thing, the WAL record about the transaction commit has already been
written by XactLogCommitRecord, much earlier than the call to
TransactionIdCommitTree.  So if you were to crash at that point, it
wouldn't matter that the arbiter had rejected the transaction: WAL
replay would mark it as committed.


Yes, WAL replay will recover this transaction and try to mark it in the CLOG
as committed, but ... we have taken control over the CLOG using XTM.
Instead of writing directly to the CLOG, DTM will contact the arbiter and ask
its opinion concerning this transaction.
If the arbiter doesn't consider it committed, then it will not be marked as
committed in the local CLOG.
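
Schematically, the recovery-time check is something like the following
(a sketch of the idea only; DtmIsRecoveryMode() and DtmArbiterGetStatus()
are hypothetical stand-ins for the arbiter protocol, not the actual
pg_dtm functions):

#include "postgres.h"
#include "access/clog.h"
#include "access/xlogdefs.h"

extern bool      DtmIsRecoveryMode(void);                /* hypothetical */
extern XidStatus DtmArbiterGetStatus(TransactionId xid); /* hypothetical */

/*
 * Consult the arbiter before trusting the local CLOG: a commit that the
 * arbiter recorded as aborted is rewritten as aborted locally, so the
 * normal visibility rules will hide it.
 */
static XidStatus
DtmGetTransactionStatus(TransactionId xid, XLogRecPtr *lsn)
{
    XidStatus   local = TransactionIdGetStatus(xid, lsn);

    if (local == TRANSACTION_STATUS_COMMITTED && DtmIsRecoveryMode() &&
        DtmArbiterGetStatus(xid) == TRANSACTION_STATUS_ABORTED)
    {
        TransactionIdSetTreeStatus(xid, 0, NULL,
                                   TRANSACTION_STATUS_ABORTED,
                                   InvalidXLogRecPtr);
        return TRANSACTION_STATUS_ABORTED;
    }
    return local;
}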



  Also, what about the replication
origin stuff and the TransactionTreeSetCommitTsData() call?

I think you need to involve the arbiter earlier, so that the commit
process can be aborted earlier than those things.





Re: [HACKERS] Question concerning XTM (eXtensible Transaction Manager API)

2015-11-16 Thread Kevin Grittner
On Monday, November 16, 2015 2:47 AM, Konstantin Knizhnik wrote:

> Some time ago at PgConn.Vienna we proposed the eXtensible
> Transaction Manager API (XTM).
> The idea is to be able to provide custom implementations of
> transaction managers as standard Postgres extensions;
> the primary goal is implementation of a distributed transaction manager.
> It should not only support 2PC, but also provide consistent
> snapshots for global transactions executed at different nodes.
>
> Actually, the current version of the XTM API does not propose any
> particular 2PC model. It can be implemented either at the coordinator
> side (as it is done in our pg_tsdtm implementation, which is based on
> timestamps and does not require a centralized arbiter), or by an
> arbiter (pg_dtm).

I'm not entirely clear on what you're saying here.  I admit I've
not kept in close touch with the distributed processing discussions
lately -- is there a write-up and/or diagram to give an overview of
where we're at with this effort?

> In the last case the 2PC logic is hidden under the XTM
> SetTransactionStatus method:
>
>  bool (*SetTransactionStatus)(TransactionId xid, int nsubxids,
>  TransactionId *subxids, XidStatus status, XLogRecPtr lsn);
>
> which encapsulates TransactionIdSetTreeStatus in clog.c.
> But you may notice that the original TransactionIdSetTreeStatus
> function is void - it is not intended to return anything.
> It is called in RecordTransactionCommit, in a critical section where
> it is not expected that the commit may fail.

This issue, though, seems clear enough.  At some point a
transaction must cross a hard line between when it is not committed
and when it is, since after commit subsequent transactions can then
see the data and modify it.  There has to be some "point of no
return" in order to have any sane semantics.  Entering that critical
section is it.

> But in the case of DTM, a transaction may be rejected by the arbiter.
> The XTM API allows us to control access to the CLOG, so everybody
> will see that the transaction is aborted. But we in any case have to
> somehow notify the client about the abort of the transaction.

If you are saying that DTM tries to roll back a transaction after
any participating server has entered the RecordTransactionCommit()
critical section, then IMO it is broken.  Full stop.  That can't
work with any reasonable semantics as far as I can see.

> We cannot just call elog(ERROR,...) in the SetTransactionStatus
> implementation, because inside a critical section it causes a
> Postgres crash with a panic message. So we have to remember that the
> transaction is rejected and report the error later, after exit from
> the critical section:

I don't believe that is a good plan.  You should not enter the
critical section for recording that a commit is complete until all
the work for the commit is done except for telling all the
servers that all servers are ready.

> There is one more problem - at this moment the state of the
> transaction is TRANS_COMMIT.
> If the ERROR handler tries to abort it, then we get yet another
> fatal error: attempt to roll back a committed transaction.
> So we need to hide the fact that the transaction is actually
> committed in the local XLOG.

That is one of pretty much an infinite variety of problems you have
if you don't have a "hard line" for when the transaction is finally
committed.

> This approach works but looks a little bit like a hack. It
> requires not only replacing the direct call of
> TransactionIdSetTreeStatus with an indirect one (through the XTM
> API), but also making some non-obvious changes in
> RecordTransactionCommit.
>
> So what are the alternatives?
>
> 1. Move RecordTransactionCommit to XTM. In this case we have to copy
> the original RecordTransactionCommit to the DTM implementation and
> patch it there. It is also not nice, because it will complicate
> maintenance of the DTM implementation.
> The primary idea of XTM is to allow development of a DTM as a
> standard PostgreSQL extension, without creating specific clones of
> the main PostgreSQL source tree. But this idea will be compromised
> if we have to copy some pieces of PostgreSQL code.
> In some sense it is even worse than maintaining a separate branch -
> in the latter case at least we have some way to perform an
> automatic merge.

You can have a call in XTM that says you want to record the
commit on all participating servers, but I don't see where that
would involve moving anything we have now out of each participating
server -- it would just need to function like a real,
professional-quality distributed transaction manager doing the
second phase of a two-phase commit.  If any participating server
goes through the first phase and reports that all the heavy lifting
is done, and then is swallowed up in a pyroclastic flow of an
erupting volcano before phase 2 comes around, the DTM must
periodically retry until the administrator cancels the attempt.

> 2. Propose some alternative two-phase commit implementation in
>

[HACKERS] Question concerning XTM (eXtensible Transaction Manager API)

2015-11-16 Thread Konstantin Knizhnik

Hello,

Some time ago at PgConn.Vienna we proposed the eXtensible Transaction
Manager API (XTM).
The idea is to be able to provide custom implementations of transaction
managers as standard Postgres extensions; the primary goal is
implementation of a distributed transaction manager.
It should not only support 2PC, but also provide consistent snapshots
for global transactions executed at different nodes.


Actually, the current version of the XTM API does not propose any
particular 2PC model. It can be implemented either at the coordinator side
(as it is done in our pg_tsdtm implementation, which is based on timestamps
and does not require a centralized arbiter), or by an arbiter (pg_dtm).
In the last case the 2PC logic is hidden under the XTM
SetTransactionStatus method:


 bool (*SetTransactionStatus)(TransactionId xid, int nsubxids, 
TransactionId *subxids, XidStatus status, XLogRecPtr lsn);


which encapsulates TransactionIdSetTreeStatus in clog.c.
But you may notice that the original TransactionIdSetTreeStatus function is
void - it is not intended to return anything.
It is called in RecordTransactionCommit, in a critical section where it is
not expected that the commit may fail.
But in the case of DTM, a transaction may be rejected by the arbiter. The
XTM API allows us to control access to the CLOG, so everybody will see that
the transaction is aborted. But we in any case have to somehow notify the
client about the abort of the transaction.
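
Schematically, a DTM can fold the arbiter's verdict into that hook along
these lines (a sketch under the assumption of a hypothetical
DtmArbiterVoteCommit() helper; the real pg_dtm code differs):

#include "postgres.h"
#include "access/clog.h"
#include "access/xlogdefs.h"

extern bool DtmArbiterVoteCommit(TransactionId xid);    /* hypothetical */

/*
 * XTM SetTransactionStatus implementation: ask the arbiter before
 * marking a commit in the local CLOG.  On rejection, record an abort
 * instead and return false, so that the caller can report the error
 * after leaving the critical section.
 */
static bool
DtmSetTransactionStatus(TransactionId xid, int nsubxids,
                        TransactionId *subxids, XidStatus status,
                        XLogRecPtr lsn)
{
    if (status == TRANSACTION_STATUS_COMMITTED &&
        !DtmArbiterVoteCommit(xid))
    {
        TransactionIdSetTreeStatus(xid, nsubxids, subxids,
                                   TRANSACTION_STATUS_ABORTED, lsn);
        return false;
    }
    TransactionIdSetTreeStatus(xid, nsubxids, subxids, status, lsn);
    return true;
}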


We cannot just call elog(ERROR,...) in the SetTransactionStatus
implementation, because inside a critical section it causes a Postgres crash
with a panic message. So we have to remember that the transaction is
rejected and report the error later, after exit from the critical section:



/*
 * Now we may update the CLOG, if we wrote a COMMIT record above.
 * With XTM, TransactionIdCommitTree reports whether the arbiter
 * accepted the commit.
 */
if (markXidCommitted)
    committed = TransactionIdCommitTree(xid, nchildren, children);
...
/*
 * If we entered a commit critical section, leave it now, and let
 * checkpoints proceed.
 */
if (markXidCommitted)
{
    MyPgXact->delayChkpt = false;
    END_CRIT_SECTION();

    /*
     * The arbiter rejected the commit: put the transaction into an
     * abort-pending state and raise the error only now, outside the
     * critical section.
     */
    if (!committed)
    {
        CurrentTransactionState->state = TRANS_ABORT;
        CurrentTransactionState->blockState = TBLOCK_ABORT_PENDING;
        elog(ERROR, "Transaction commit rejected by XTM");
    }
}

There is one more problem - at this moment the state of the transaction is
TRANS_COMMIT.
If the ERROR handler tries to abort it, then we get yet another fatal
error: attempt to roll back a committed transaction.
So we need to hide the fact that the transaction is actually committed in
the local XLOG.


This approach works but looks a little bit like a hack. It requires not
only replacing the direct call of TransactionIdSetTreeStatus with an
indirect one (through the XTM API), but also making some non-obvious
changes in RecordTransactionCommit.


So what are the alternatives?

1. Move RecordTransactionCommit to XTM. In this case we have to copy the
original RecordTransactionCommit to the DTM implementation and patch it
there. It is also not nice, because it will complicate maintenance of the
DTM implementation.
The primary idea of XTM is to allow development of a DTM as a standard
PostgreSQL extension, without creating specific clones of the main
PostgreSQL source tree. But this idea will be compromised if we have to
copy some pieces of PostgreSQL code.
In some sense it is even worse than maintaining a separate branch - in the
latter case at least we have some way to perform an automatic merge.


2. Propose some alternative two-phase commit implementation in the
PostgreSQL core. The main motivation for such a "lightweight"
implementation of 2PC in pg_dtm is that the original mechanism of prepared
transactions in PostgreSQL adds too much overhead.
In our benchmarks we found that a simple credit-debit banking test
(without any DTM) works almost 10 times slower with PostgreSQL 2PC than
without it. This is why we are trying to propose an alternative solution
(right now pg_dtm is 2 times slower than vanilla PostgreSQL, but it not
only performs 2PC but also provides consistent snapshots).


Maybe somebody can suggest some other solution?
Or give some comments concerning the current approach?

Thanks in advance,
Konstantin,
Postgres Professional



Re: [HACKERS] Question concerning XTM (eXtensible Transaction Manager API)

2015-11-16 Thread Atri Sharma
> I think the general idea is that if the commit is WAL-logged, then the
> operation is considered committed on the local node, and the commit should
> happen on any node only once the prepare from all nodes is successful.
> And after that the transaction is not supposed to abort.  But I think you
> are trying to optimize the DTM in some way to not follow that kind of
> protocol.
> By the way, how will the arbiter do recovery in a scenario where it
> crashes? Won't it need to contact all nodes for the status of in-progress
> or prepared transactions?
> I think it would be better if a more detailed design of DTM with respect to
> transaction management and recovery could be put up on the wiki for
> discussion on this topic.  I have seen that you have already updated many
> details of the system, but still the complete picture of DTM is not clear.

I agree.

I have not been following this discussion, but from what I have read above I
think the recovery model in this design is broken. You have to follow some
protocol, whichever you choose.

I think you can try using something like Paxos, if you are looking at a
more reliable model but don't want the overhead of 3PC.


Re: [HACKERS] Question concerning XTM (eXtensible Transaction Manager API)

2015-11-16 Thread Amit Kapila
On Tue, Nov 17, 2015 at 12:12 PM, konstantin knizhnik
<k.knizh...@postgrespro.ru> wrote:

> Thank you for your response.
>
>
> On Nov 16, 2015, at 11:21 PM, Kevin Grittner wrote:
>
> I'm not entirely clear on what you're saying here.  I admit I've
> not kept in close touch with the distributed processing discussions
> lately -- is there a write-up and/or diagram to give an overview of
> where we're at with this effort?
>
>
> https://wiki.postgresql.org/wiki/DTM
>
>
> If you are saying that DTM tries to roll back a transaction after
> any participating server has entered the RecordTransactionCommit()
> critical section, then IMO it is broken.  Full stop.  That can't
> work with any reasonable semantics as far as I can see.
>
>
> DTM is not trying to roll back a committed transaction.
> What it tries to do is to hide this commit.
> As I already wrote, the idea was to implement a "lightweight" 2PC because
> the prepared transactions mechanism in PostgreSQL adds too much overhead
> and causes some problems with recovery.
>
> The transaction is normally committed in the xlog, so that it can always be
> recovered in case of a node fault.
> But before setting the corresponding bit(s) in the CLOG and releasing locks,
> we first contact the arbiter to get the global status of the transaction.
> If it has been successfully committed locally by all nodes, then the arbiter
> approves the commit and the commit of the transaction completes normally.
> Otherwise the arbiter rejects the commit. In this case DTM marks the
> transaction as aborted in the CLOG and returns an error to the client.
> The XLOG is not changed, and in case of failure PostgreSQL will try to
> replay this transaction.
> But during recovery it also tries to restore the transaction status in the
> CLOG.
> And at this point DTM contacts the arbiter to learn the status of the
> transaction.
>

I think the general idea is that if the commit is WAL-logged, then the
operation is considered committed on the local node, and the commit should
happen on any node only once the prepare from all nodes is successful.
And after that the transaction is not supposed to abort.  But I think you are
trying to optimize the DTM in some way to not follow that kind of protocol.
By the way, how will the arbiter do recovery in a scenario where it
crashes? Won't it need to contact all nodes for the status of in-progress or
prepared transactions?
I think it would be better if a more detailed design of DTM with respect to
transaction management and recovery could be put up on the wiki for
discussion on this topic.  I have seen that you have already updated many
details of the system, but still the complete picture of DTM is not clear.



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Question concerning XTM (eXtensible Transaction Manager API)

2015-11-16 Thread konstantin knizhnik
Thank you for your response.


On Nov 16, 2015, at 11:21 PM, Kevin Grittner wrote:
> I'm not entirely clear on what you're saying here.  I admit I've
> not kept in close touch with the distributed processing discussions
> lately -- is there a write-up and/or diagram to give an overview of
> where we're at with this effort?

https://wiki.postgresql.org/wiki/DTM

> 
> If you are saying that DTM tries to roll back a transaction after
> any participating server has entered the RecordTransactionCommit()
> critical section, then IMO it is broken.  Full stop.  That can't
> work with any reasonable semantics as far as I can see.

DTM is not trying to roll back a committed transaction.
What it tries to do is to hide this commit.
As I already wrote, the idea was to implement a "lightweight" 2PC because
the prepared transactions mechanism in PostgreSQL adds too much overhead and
causes some problems with recovery.

The transaction is normally committed in the xlog, so that it can always be
recovered in case of a node fault.
But before setting the corresponding bit(s) in the CLOG and releasing locks,
we first contact the arbiter to get the global status of the transaction.
If it has been successfully committed locally by all nodes, then the arbiter
approves the commit and the commit of the transaction completes normally.
Otherwise the arbiter rejects the commit. In this case DTM marks the
transaction as aborted in the CLOG and returns an error to the client.
The XLOG is not changed, and in case of failure PostgreSQL will try to replay
this transaction.
But during recovery it also tries to restore the transaction status in the
CLOG.
And at this point DTM contacts the arbiter to learn the status of the
transaction.
If it is marked as aborted in the arbiter's CLOG, then it will also be marked
as aborted in the local CLOG.
And according to the PostgreSQL visibility rules, no other transaction will
see changes made by this transaction.



> 
>> We cannot just call elog(ERROR,...) in the SetTransactionStatus
>>   implementation, because inside a critical section it causes a Postgres
>>   crash with a panic message. So we have to remember that the transaction
>>   is rejected and report the error later, after exit from the critical
>>   section:
> 
> I don't believe that is a good plan.  You should not enter the
> critical section for recording that a commit is complete until all
> the work for the commit is done except for telling all the
> servers that all servers are ready.

It is a good point.
Maybe it is the reason for the performance scalability problems we have
noticed with DTM.

>> In our benchmarks we found that a simple credit-debit banking
>>   test (without any DTM) works almost 10 times slower with PostgreSQL
>>   2PC than without it. This is why we are trying to propose an
>>   alternative solution (right now pg_dtm is 2 times slower than vanilla
>>   PostgreSQL, but it not only performs 2PC but also provides consistent
>>   snapshots).
> 
> Are you talking about 10x the latency on a commit, or that the
> overall throughput under saturation load is one tenth of running
> without something to guarantee the transactional integrity of the
> whole set of nodes?  The former would not be too surprising, while
> the latter would be rather amazing.

Sorry, some clarification.
We get a 10x performance slowdown caused by 2PC under very heavy load on the
IBM system with 256 cores.
On "normal" servers the slowdown from 2PC is smaller - about 2x.

> 
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
> 
> 