From: Masahiko Sawada <masahiko.saw...@2ndquadrant.com>
> > If so, can't we stipulate that the FDW implementor should ensure that the
> > commit function always returns control to the caller?
> 
> How can the FDW implementor ensure that? Since even palloc could call
> ereport(ERROR) I guess it's hard to require that to all FDW
> implementors.

I think what the FDW commit routine will do is just call xa_commit(), or 
PQexec("COMMIT PREPARED") in postgres_fdw.


> It's still a rough idea, but I think we can use the TMASYNC flag and
> xa_complete explained in the XA specification. The core transaction
> manager calls the prepare, commit, and rollback APIs with that flag,
> requiring them to execute the operation asynchronously and to return a
> handle (e.g., a socket obtained by PQsocket in the postgres_fdw case)
> to the transaction manager. The transaction manager then keeps polling
> the handle until it becomes readable and testing for completion with
> xa_complete() in no-wait mode, until all foreign servers return OK on
> the xa_complete check.

Unfortunately, even Oracle and Db2 haven't supported asynchronous XA execution 
for years.  Our DBMS Symfoware doesn't support it, either, and I don't expect 
other DBMSs to.

Hmm, I'm afraid this may be one of FDW's intractable walls for a serious 
scale-out DBMS.  If we define asynchronous FDW routines for 2PC, postgres_fdw 
would be able to implement them using libpq's asynchronous functions, but FDWs 
for other DBMSs couldn't ...
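
To make that concrete, here is a rough sketch of how postgres_fdw could use 
libpq's asynchronous functions for this.  The function names and the division 
of labor with the transaction manager are only assumptions for illustration:

/*
 * Rough sketch (illustrative names only): issue COMMIT PREPARED
 * asynchronously on one foreign connection and hand its socket back to
 * the transaction manager, which is assumed to wait on all returned
 * sockets and call the check function when one becomes readable.
 */
static int
pgfdw_begin_commit_prepared(PGconn *conn, const char *gid)
{
    char    sql[256];

    snprintf(sql, sizeof(sql), "COMMIT PREPARED '%s'", gid);

    if (!PQsendQuery(conn, sql))
        return -1;              /* caller decides how to handle the failure */

    return PQsocket(conn);      /* handle for the caller to poll */
}

/* Returns true once the command has completed (successfully or not). */
static bool
pgfdw_check_commit_prepared(PGconn *conn)
{
    PGresult   *res;

    if (!PQconsumeInput(conn))
        return true;            /* connection failure: completed with error */

    if (PQisBusy(conn))
        return false;           /* still running; poll the socket again */

    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);           /* a real implementation would inspect the status */

    return true;
}

The transaction manager would wait on the returned sockets (e.g., with a 
WaitEventSet) and call the check function whenever one becomes readable, 
mirroring the xa_complete-style polling described above.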


> > Maybe we can consider that VOLATILE functions update data.  That may be
> > an overreaction, though.
> 
> Sorry I don't understand that. The volatile functions are not pushed
> down to the foreign servers in the first place, no?

Ah, you're right.  Then the choices are twofold: (1) trust users, i.e., trust 
that their functions don't update data or trust the user's claim 
(specification) about it, and (2) get a notification through the FE/BE protocol 
that the remote transaction may have updated data.


Regards
Takayuki Tsunakawa
