A little daughter eats up lots of spare cycles - among other things. Sorry
it took so long to review.
On Fri, 8 Jan 2010 20:36:44 +0100, Joachim Wieland <j...@mcknight.de> wrote:
> The attached patch implements the idea of Heikki / Simon published in
I must admit I didn't read that up front, but thought your patch could
be useful for implementing parallel querying.
So, let's first concentrate on the intended use case: allowing parallel
pg_dump. To me it seems like a pragmatic and quick solution. However,
I'm not sure whether requiring superuser privileges is acceptable.
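To make sure I understand the intended flow, here's how I picture a
parallel dump using these functions. Everything except the two function
names is guesswork on my part - the argument lists in particular are
placeholders, since I haven't actually tested the functions:

```sql
-- Coordinating session (superuser, per the current patch):
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT pg_synchronize_snapshot(...);        -- publish the snapshot; arguments unclear to me

-- Each worker session then adopts it:
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT pg_synchronize_snapshot_taken(...);  -- again, signature guessed
-- ... COPY the assigned tables against the shared snapshot ...
COMMIT;
```

If that's roughly what you have in mind, please correct me where I'm
off.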
The patch currently compiles (modulo some OID changes in pg_proc.h to
prevent duplicates) and the test suite runs through fine. I haven't
tested the new functions, though.
Reading the code, I'm missing the part that actually acquires the
snapshot for the transaction(s). After setting up multiple transactions
with pg_synchronize_snapshot and pg_synchronize_snapshot_taken, they
still don't have a snapshot, do they?
Also, you should probably ensure the calling transactions don't have a
snapshot already (let alone a transaction id).
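The check I have in mind could be a simple guard in the adopting
function, something along these lines - treat the exact backend symbol
names as assumptions from memory, not as tested code:

```c
/* Refuse to adopt a foreign snapshot if this transaction already has
 * one, or has already acquired an XID. */
if (FirstSnapshotSet || TransactionIdIsValid(GetTopTransactionIdIfAny()))
    ereport(ERROR,
            (errcode(ERRCODE_ACTIVE_SQL_TRANSACTION),
             errmsg("cannot synchronize snapshot in a transaction that "
                    "already has one")));
```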
In a similar vein, and answering your question in a comment: yes, I'd
say you want to ensure your transactions are in SERIALIZABLE isolation
mode. There's no other isolation level for which that kind of snapshot
serialization makes sense, is there?
Using the exposed functions in a more general sense, I think it's
important to note that the patch only intends to synchronize snapshots
at the start of the transaction, not continuously. Thus, normal
transaction isolation applies for concurrent writes and each of the
transactions can commit or rollback independently.
The timeout is nice, but is it really required? Isn't the normal query
cancellation infrastructure sufficient?
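For comparison, this is the machinery that already exists and that I'd
expect to cover the case of a backend waiting too long:

```sql
SET statement_timeout = '30s';   -- aborts the synchronizing call after 30 seconds
-- or, from another session:
SELECT pg_cancel_backend(12345); -- pid of the waiting backend
```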
Hope that helps. Thanks for working on this issue.
Sent via pgsql-hackers mailing list (email@example.com)