Hi,

Quoting "Greg Stark" <st...@enterprisedb.com>:
> No, I'm not. I'm questioning whether a serializable transaction
> isolation level that makes no guarantee that it won't fire spuriously
> is useful.

It would certainly be an improvement over our status quo, where truly serializable transactions aren't supported at all. And it seems more promising than holding out for an implementation that is both perfect *and* scalable.

> Heikki proposed a list of requirements which included a requirement
> that you not get spurious serialization failures

That requirement is questionable. If we get truly serializable transactions (i.e. no false negatives) with reasonably good performance, that's more than enough for now and a good step ahead.

Why care about a few false positives (which don't seem to matter performance-wise)? We can probably reduce or eliminate them later on. But eliminating false negatives is certainly more important to start with.
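
To make that asymmetry concrete: a false positive only costs the application a retry, which a well-behaved client of a serializable database has to be prepared for anyway. Roughly like this (a minimal Python sketch using psycopg2; the function name, DSN and 'work' callback are made up for illustration, not part of any patch):

import psycopg2
import psycopg2.extensions

SERIALIZATION_FAILURE = "40001"  # SQLSTATE serialization_failure

def run_serializable(dsn, work, max_retries=5):
    """Run work(cursor) inside a SERIALIZABLE transaction, retrying on 40001."""
    conn = psycopg2.connect(dsn)
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_SERIALIZABLE)
    try:
        for _attempt in range(max_retries):
            cur = conn.cursor()
            try:
                work(cur)              # e.g. read some rows, then write an update
                conn.commit()
                return
            except psycopg2.Error as e:
                conn.rollback()
                if e.pgcode != SERIALIZATION_FAILURE:
                    raise              # a real error, not a serialization conflict
                # spurious or genuine conflict: either way, just retry
        raise RuntimeError("transaction kept failing with serialization conflicts")
    finally:
        conn.close()

A false negative, on the other hand, commits silently with a result no serial execution could have produced, and no client-side loop can catch that after the fact.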

What I'm more concerned about is the proposed algorithm's requirement to keep track of the set of tuples read by every transaction, and to retain that set until well after the transaction has committed (as questioned by Neil [1]). That doesn't sound like negligible overhead.
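
To illustrate what worries me, here's a toy model (plain Python, certainly not how the patch would do it) of the bookkeeping that seems to be implied: every tuple read by a serializable transaction gets recorded, and the resulting set can only be thrown away once all transactions concurrent with the reader have finished. All names below are invented:

class ReadSetTracker:
    """Toy model: read sets are kept past commit, until every transaction
    that ran concurrently with the reader has itself finished."""

    def __init__(self):
        self.active = set()       # xids still running
        self.read_sets = {}       # xid -> set of (relation_oid, ctid) it read
        self.retain_until = {}    # committed xid -> concurrent xids still alive

    def begin(self, xid):
        self.active.add(xid)
        self.read_sets[xid] = set()

    def record_read(self, xid, relation_oid, ctid):
        # Called for every tuple a serializable transaction reads.
        self.read_sets[xid].add((relation_oid, ctid))

    def finish(self, xid):
        """Called at commit (aborts are glossed over in this toy model)."""
        self.active.discard(xid)
        concurrent = set(self.active)
        if concurrent:
            self.retain_until[xid] = concurrent   # read set must outlive xid
        else:
            self.read_sets.pop(xid, None)
        # xid's end may also release read sets of earlier transactions
        for older in list(self.retain_until):
            self.retain_until[older].discard(xid)
            if not self.retain_until[older]:
                del self.retain_until[older]
                self.read_sets.pop(older, None)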

Maybe the proposed algorithm would have to be applied to pages instead of tuples, as the paper's authors did for Berkeley DB, just to keep that overhead reasonably low.
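
Relative to the toy tracker above, that would only change what gets recorded, roughly (again purely illustrative):

def record_read_page_level(tracker, xid, relation_oid, ctid):
    # Record the page a tuple lives on instead of the tuple itself.
    block_number, _offset = ctid            # a ctid is (block number, line pointer)
    tracker.read_sets[xid].add((relation_oid, block_number))

The tracked set then stays bounded by the number of pages touched rather than the number of tuples, at the price of additional false positives, which, as argued above, are the cheaper failure mode.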

Regards

Markus Wanner

[1]: Neil Conway's blog, Serializable Snapshot Isolation:
http://everythingisdata.wordpress.com/2009/02/25/february-25-2009/
