"Markus Wanner" <mar...@bluegap.ch> wrote: > What I'm more concerned is the requirement of the proposed algorithm > to keep track of the set of tuples read by any transaction and keep > that set until sometime well after the transaction committed (as > questioned by Neil). That doesn't sound like a negligible overhead. Quick summary for those who haven't read the paper: with this non-blocking technique, every serializable transaction which successfully commits must have its read locks tracked until all serializable transactions which are active at the commit also complete. In the prototype implementation, I think they periodically scanned to drop old transactions, and also did a final check right before deciding there is a conflict which requires rollback, cleaning up the transaction if it had terminated after the last scan but in time to prevent a problem. -Kevin