On Mon, Dec 29, 2014 at 3:53 PM, Kevin Grittner kgri...@ymail.com wrote:
I tend to build out applications on top of functions and the
inability to set the isolation level inside a function keeps me
from using anything but 'read committed'.
Hey, no problem -- just set
On Mon, Dec 29, 2014 at 4:17 PM, Merlin Moncure mmonc...@gmail.com wrote:
Serialization errors only exist as a concession
to concurrency and performance. Again, they should be returned as
sparingly as possible
I think this is fuzzy thinking. Serialization *errors* themselves are
a concession
Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Dec 26, 2014 at 12:38 PM, Kevin Grittner kgri...@ymail.com wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
Just for starters, a 40XXX error report will fail to provide the
duplicated key's value. This will be a functional regression,
Not if, as is
On Mon, Dec 29, 2014 at 8:03 AM, Kevin Grittner kgri...@ymail.com wrote:
Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Dec 26, 2014 at 12:38 PM, Kevin Grittner kgri...@ymail.com
wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
Just for starters, a 40XXX error report will fail to provide the
Merlin Moncure mmonc...@gmail.com wrote:
Well, I'm arguing that duplicate key errors are not serialization
failures unless it's likely the insertion would succeed upon a retry;
a proper insert, not an upsert. If that's the case with what you're
proposing, then it makes sense to me. But
On Mon, Dec 29, 2014 at 9:09 AM, Kevin Grittner kgri...@ymail.com wrote:
Merlin Moncure mmonc...@gmail.com wrote:
In other words, the current behavior is:
txn A,B begin
txn A inserts
txn B inserts over A, locks, waits
txn A commits. B aborts with duplicate key error
What I'm proposing is
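The interleaving above can be sketched as a small sequential simulation. The class and names below are illustrative stand-ins, not PostgreSQL internals: the unique index is modelled as a set of committed keys plus a map of uncommitted insertions, and "waiting" is modelled by returning a marker instead of actually blocking.

```python
class DuplicateKeyError(Exception):
    """Stands in for SQLSTATE 23505."""

class UniqueIndex:
    def __init__(self):
        self.committed = set()
        self.pending = {}  # key -> txn that inserted it but has not committed

    def insert(self, txn, key):
        if key in self.committed:
            raise DuplicateKeyError(key)
        holder = self.pending.get(key)
        if holder is not None and holder != txn:
            return ("waits-on", holder)  # a real session blocks here
        self.pending[key] = txn
        return ("inserted", key)

    def commit(self, txn):
        for key, holder in list(self.pending.items()):
            if holder == txn:
                self.committed.add(key)
                del self.pending[key]

idx = UniqueIndex()
idx.insert("A", 1)                               # txn A inserts
assert idx.insert("B", 1) == ("waits-on", "A")   # txn B inserts over A, waits
idx.commit("A")                                  # txn A commits
try:
    idx.insert("B", 1)                           # B wakes up
except DuplicateKeyError:
    print("txn B aborts with duplicate key error")
```

The question in the thread is whether B's error at the last step should instead be reported under a 40XXX serialization-failure code when B is at REPEATABLE READ or higher.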
On Mon, Dec 29, 2014 at 3:31 PM, Merlin Moncure mmonc...@gmail.com wrote:
In that case: we don't agree. Why would duplicate key errors be
reported as serialization failures but not RI errors (for example,
inserting a record pointing to another record which a concurrent
transaction
On Mon, Dec 29, 2014 at 9:44 AM, Greg Stark st...@mit.edu wrote:
On Mon, Dec 29, 2014 at 3:31 PM, Merlin Moncure mmonc...@gmail.com wrote:
In that case: we don't agree. Why would duplicate key errors be
reported as serialization failures but not RI errors (for example,
inserting a record
Merlin Moncure mmonc...@gmail.com wrote:
Serialization errors only exist as a concession to concurrency
and performance. Again, they should be returned as sparingly as
possible because they provide absolutely (as Tom pointed
out) zero detail to the application.
That is false. They provide
[combining replies -- nikita, better not to top-post (FYI)]
I'm sorry. I don't know what you mean. I just replied to an email.
To prove your statement, you need to demonstrate how a transaction left
the database in a bad state given concurrent activity without counting
failures.
1.
I believe, the objections expressed in this thread miss a very important
point of all this: the isolation property (the I in ACID) is violated.
Here's a quote from the Wikipedia article on ACID
(http://en.wikipedia.org/wiki/ACID):
The isolation property ensures that the concurrent execution of
On Mon, Dec 29, 2014 at 10:47 AM, Nikita Volkov nikita.y.vol...@mail.ru wrote:
[combining replies -- nikita, better not to top-post (FYI)]
[combining replies again]
I'm sorry. I don't know what you mean. I just replied to an email.
http://www.idallen.com/topposting.html
To prove your
Merlin Moncure mmonc...@gmail.com wrote:
On Mon, Dec 29, 2014 at 10:53 AM, Kevin Grittner kgri...@ymail.com wrote:
The semantics are so imprecise that Tom argued that we should
document that transactions should be retried from the start when
you get the duplicate key error, since it *might*
[errata]
Kevin Grittner kgri...@ymail.com wrote:
Quoting from the peer-reviewed paper presented in Istanbul[1]:
That should have been [3], not [1].
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
On 12/29/14, 10:53 AM, Kevin Grittner wrote:
Merlin Moncure mmonc...@gmail.com wrote:
Serialization errors only exist as a concession to concurrency
and performance. Again, they should be returned as sparingly as
possible because they provide absolutely (as Tom pointed
out) zero detail to the
nikita.y.vol...@mail.ru wrote:
Executing concurrent transactions that insert the same value of a
unique key fails with a duplicate key error under code
23505 instead of one of the transaction conflict errors with a
40*** code.
This is true, and can certainly be
On 2014-12-26 17:23, Kevin Grittner wrote:
Are there any objections to generating a write conflict instead of
a duplicate key error if the duplicate key was added by a
concurrent transaction? Only for transactions at isolation level
REPEATABLE READ or higher?
Is it possible to distinguish
Kevin Grittner kgri...@ymail.com writes:
Are there any objections to generating a write conflict instead of
a duplicate key error if the duplicate key was added by a
concurrent transaction?
Yes. This will deliver a less meaningful error code, *and* break
existing code that is expecting the
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kgri...@ymail.com writes:
Are there any objections to generating a write conflict instead
of a duplicate key error if the duplicate key was added by a
concurrent transaction?
Yes. This will deliver a less meaningful error code,
That
Kevin Grittner kgri...@ymail.com writes:
Tom Lane t...@sss.pgh.pa.us wrote:
Yes. This will deliver a less meaningful error code,
That depends entirely on whether you care more about whether the
problem was created by a concurrent transaction or exactly how that
concurrent transaction
Tom Lane t...@sss.pgh.pa.us wrote:
Just for starters, a 40XXX error report will fail to provide the
duplicated key's value. This will be a functional regression,
Not if, as is normally the case, the transaction is retried from
the beginning on a serialization failure. Either the code will
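The retry-from-the-beginning pattern being referred to can be sketched as follows. DatabaseError here is a hypothetical stand-in for a real driver's exception type carrying the SQLSTATE; run_transaction is any callable that opens, runs, and commits one complete transaction.

```python
# Retryable SQLSTATEs: serialization failure and deadlock detected.
RETRYABLE_SQLSTATES = {"40001", "40P01"}

class DatabaseError(Exception):
    def __init__(self, sqlstate, message=""):
        super().__init__(message)
        self.sqlstate = sqlstate

def with_retry(run_transaction, max_attempts=5):
    """Rerun the whole transaction from the start on serialization failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return run_transaction()
        except DatabaseError as e:
            if e.sqlstate not in RETRYABLE_SQLSTATES or attempt == max_attempts:
                raise  # e.g. a 23505 propagates to the application unchanged

# A transaction that hits serialization failures twice, then succeeds:
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise DatabaseError("40001", "could not serialize access")
    return "committed"

assert with_retry(flaky) == "committed"
assert len(attempts) == 3
```

This is the crux of the disagreement: if B's failure were reported as 40001, a loop like this retries it transparently; reported as 23505, it surfaces to the application along with the duplicated key's value.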
I'll repost my (OP) case, so the references to it make more sense to
the others.
Having the following table:
CREATE TABLE song_artist (
  song_id INT8 NOT NULL,
  artist_id INT8 NOT NULL,
  PRIMARY KEY (song_id, artist_id)
);
Even trying to protect from this with a
On Fri, Dec 26, 2014 at 7:23 AM, Kevin Grittner kgri...@ymail.com wrote:
Are there any objections to generating a write conflict instead of
a duplicate key error if the duplicate key was added by a
concurrent transaction? Only for transactions at isolation level
REPEATABLE READ or higher?
On Fri, Dec 26, 2014 at 12:38 PM, Kevin Grittner kgri...@ymail.com wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
Just for starters, a 40XXX error report will fail to provide the
duplicated key's value. This will be a functional regression,
Not if, as is normally the case, the transaction is