Re: [HACKERS] PostgreSQL-R

2002-12-26 Thread Bruce Momjian

FYI, I think we are going to need two-phase commit, at least to
implement distributed transactions.  I will add it to the TODO list.
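
For concreteness, here is a minimal sketch of the 2PC decision flow in
Python. It is purely illustrative (Participant and two_phase_commit are
made-up names, not anything in the tree); the point is that 2PC only
decides whether all nodes commit, and assumes conflicts were already
resolved beforehand:

    class Participant:
        """Stand-in for one node; a real implementation would force a
        PREPARE record to WAL before voting yes."""
        def __init__(self, name, healthy=True):
            self.name = name
            self.healthy = healthy
            self.state = "idle"

        def prepare(self, xid):
            # Phase 1 vote: yes only if the node can make xid durable.
            self.state = "prepared" if self.healthy else "aborted"
            return self.healthy

        def commit(self, xid):
            self.state = "committed"

        def abort(self, xid):
            self.state = "aborted"

    def two_phase_commit(xid, nodes):
        if all(n.prepare(xid) for n in nodes):   # Phase 1: collect votes
            for n in nodes:                      # Phase 2: all voted yes
                n.commit(xid)
            return "committed"
        for n in nodes:                          # any no vote aborts everywhere
            n.abort(xid)
        return "aborted"

    print(two_phase_commit(1, [Participant("a"), Participant("b")]))
    print(two_phase_commit(2, [Participant("a"), Participant("b", healthy=False)]))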

---

Mikheev, Vadim wrote:
> > http://www.cs.mcgill.ca/~kemme/papers/vldb00.html
> 
> Thanks for the link, Darren, I think everyone interested
> in the discussion should read it.
> 
> First, I like the approach. Second, I don't understand why
> people oppose pg-r & 2pc. 2pc is just a simple protocol to
> perform distributed commits *after* distributed conflicts
> were resolved. It says nothing about *how* to resolve
> conflicts. Commonly, distributed locks are used; pg-r uses
> GCS & a kind of batch locking to order distributed transactions
> and serialize execution of conflicting ones. Actually, this
> serialization is the only drawback I see at the moment: due
> to batching of writes/locks, pg-r will not allow execution
> of transactions from different sites in read committed mode -
> one of the conflicting transactions will be aborted instead of
> waiting for the other one to abort/commit and continuing
> execution after that. Because conflicts are resolved *before*
> commit, pg-r is not an async solution. But it's not true sync
> replication either, due to async commit (read the Jan vs Darren
> discussion in
> http://archives.postgresql.org/pgsql-hackers/2002-12/msg00754.php).
> What's the problem with using 2pc for commit in pg-r? We could
> make it optional (and can discuss it later).
> 
> Next, pg-r was originally based on 6.4, so what was changed for
> current pg versions now that MVCC is used for concurrency control?
> It seems that locking tuples via LockTable at Phase 1 is not
> required anymore, right?
> Upon receiving the local WS in Phase 3, the local transaction
> should just check that there are no conflicting locks from remote
> transactions in the LockTable and can commit after that. Remote
> transactions will not see conflicts from local ones in the
> LockTable but will notice them during execution and will be able
> to abort local transactions. (I hope I didn't miss something here.)
> Also, it seems that we could perform Phases 2 & 3 periodically
> during transaction execution. This would make the WS smaller, and
> conflicts between long-running transactions from different sites
> would be resolved faster.
> 
> Comments?
> 
> Vadim
 
 
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [HACKERS] PostgreSQL-R

2002-12-23 Thread Vadim Mikheev
> > It seems that locking tuples via LockTable at Phase 1 is not
> > required anymore, right?
> 
> We haven't put those hooks in yet, so the current version is master/slave.

So, you are not going to use any LockTable in Phase 1 on the master right
now, but you still need some LockTable in Phase 3 on the slaves. Are you
going to use the pg lock manager table in Phase 3? Shouldn't ordering in
Phase 3 be implemented using a special LockTable, totally separate from
the pg lock manager? (Assuming it's right that Phase 1 doesn't require the
Phase 3 LockTable at all.)
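
For what it's worth, the structure I have in mind is tiny; a toy Python
sketch (all names hypothetical, nothing from the pg-r sources):

    class ReplicationLockTable:
        """Toy 'Phase 3' lock table, kept apart from the regular pg lock
        manager.  Remote write sets register their tuples here in GCS
        delivery order; local transactions consult it at commit time."""

        def __init__(self):
            self.locks = {}   # (relation, tuple_id) -> remote xid holding it

        def register_remote_ws(self, remote_xid, write_set):
            # Phase 3: lock every tuple touched by a delivered remote WS.
            for key in write_set:
                self.locks[key] = remote_xid

        def release_remote_ws(self, remote_xid):
            # Drop all entries once the remote transaction ends.
            self.locks = {k: v for k, v in self.locks.items() if v != remote_xid}

        def local_can_commit(self, write_set):
            # A local transaction may commit only if no delivered remote WS
            # already holds a lock on a tuple it wrote.
            return all(key not in self.locks for key in write_set)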

> > Also, it seems that we could perform Phases 2 & 3 periodically
> > during transaction execution. This would make the WS smaller, and
> > conflicts between long-running transactions from different sites
> > would be resolved faster.

And it would increase commit chances for long-running transactions: with
async notification of other nodes about a transaction's changes, short
transactions may have noticeably higher chances to commit ... and to abort
conflicting long transactions.
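
Roughly what I mean by periodic Phases 2 & 3, as a toy Python sketch
(gcs_send, FLUSH_EVERY and the rest are made-up names, not pg-r code):
instead of one big WS at commit, a long transaction broadcasts partial
write sets as it goes, so conflicts surface early.

    FLUSH_EVERY = 100   # assumed batch size, purely illustrative

    class LongTransaction:
        def __init__(self, xid, gcs_send):
            self.xid = xid
            self.gcs_send = gcs_send   # callable that total-order broadcasts a WS
            self.pending_ws = []

        def record_write(self, relation, tuple_id):
            self.pending_ws.append((relation, tuple_id))
            if len(self.pending_ws) >= FLUSH_EVERY:
                self.flush()           # "periodic Phase 2 & 3"

        def flush(self):
            if self.pending_ws:
                self.gcs_send(self.xid, self.pending_ws)
                self.pending_ws = []

        def commit(self):
            self.flush()               # final, much smaller WS
            self.gcs_send(self.xid, "COMMIT")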

> Seems like a good idea to me, but we won't know for sure until we
> implement the multi-master hooks.

Is it about periodic Phases 2 & 3 or about using the Phase 3 LockTable
in Phase 1? The first one can definitely wait, but the second one should
be resolved before merging the pg-r code with the main CVS, IMO.

Vadim






Re: [HACKERS] PostgreSQL-R

2002-12-21 Thread Darren Johnson



> Next, pg-r was originally based on 6.4, so what was changed for
> current pg versions now that MVCC is used for concurrency control?
> It seems that locking tuples via LockTable at Phase 1 is not
> required anymore, right?



We haven't put those hooks in yet, so the current version is master/slave.  


> Upon receiving the local WS in Phase 3, the local transaction
> should just check that there are no conflicting locks from remote
> transactions in the LockTable and can commit after that. Remote
> transactions will not see conflicts from local ones in the
> LockTable but will notice them during execution and will be able
> to abort local transactions. (I hope I didn't miss something here.)
> Also, it seems that we could perform Phases 2 & 3 periodically
> during transaction execution. This would make the WS smaller, and
> conflicts between long-running transactions from different sites
> would be resolved faster.
> 
> Comments?



Seems like a good idea to me, but we won't know for sure until we
implement the multi-master hooks.

Thanks,

Darren



[HACKERS] PostgreSQL-R

2002-12-20 Thread Mikheev, Vadim
> http://www.cs.mcgill.ca/~kemme/papers/vldb00.html

Thanks for the link, Darren, I think everyone interested
in the discussion should read it.

First, I like the approach. Second, I don't understand why
people oppose pg-r & 2pc. 2pc is just a simple protocol to
perform distributed commits *after* distributed conflicts
were resolved. It says nothing about *how* to resolve
conflicts. Commonly, distributed locks are used; pg-r uses
GCS & a kind of batch locking to order distributed transactions
and serialize execution of conflicting ones. Actually, this
serialization is the only drawback I see at the moment: due
to batching of writes/locks, pg-r will not allow execution
of transactions from different sites in read committed mode -
one of the conflicting transactions will be aborted instead of
waiting for the other one to abort/commit and continuing
execution after that. Because conflicts are resolved *before*
commit, pg-r is not an async solution. But it's not true sync
replication either, due to async commit (read the Jan vs Darren
discussion in
http://archives.postgresql.org/pgsql-hackers/2002-12/msg00754.php).
What's the problem with using 2pc for commit in pg-r? We could
make it optional (and can discuss it later).

Next, pg-r was originally based on 6.4, so what was changed for
current pg versions now that MVCC is used for concurrency control?
It seems that locking tuples via LockTable at Phase 1 is not
required anymore, right?
Upon receiving the local WS in Phase 3, the local transaction
should just check that there are no conflicting locks from remote
transactions in the LockTable and can commit after that. Remote
transactions will not see conflicts from local ones in the
LockTable but will notice them during execution and will be able
to abort local transactions. (I hope I didn't miss something here.)
Also, it seems that we could perform Phases 2 & 3 periodically
during transaction execution. This would make the WS smaller, and
conflicts between long-running transactions from different sites
would be resolved faster.
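
To make the Phase 3 rule above concrete, a toy Python sketch (all names
made up, not pg-r source): remote write sets arriving in GCS total order
win over conflicting local transactions, and a local transaction commits
only if nothing delivered so far touched its tuples.

    remote_locks = {}      # (relation, tuple_id) -> remote xid
    local_writes = {}      # local xid -> set of (relation, tuple_id) it wrote
    aborted_locals = set()

    def deliver_remote_ws(remote_xid, write_set):
        # Called in GCS total-order delivery for a WS from another site.
        for key in write_set:
            remote_locks[key] = remote_xid
            for local_xid, ws in local_writes.items():
                if key in ws:
                    aborted_locals.add(local_xid)   # remote transaction wins

    def try_commit_local(local_xid):
        # The Phase 3 check for a locally originated transaction.
        if local_xid in aborted_locals:
            return False
        ws = local_writes.get(local_xid, set())
        return not any(key in remote_locks for key in ws)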

Comments?

Vadim

