Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-20 Thread Thom Brown
On 20 May 2015 at 17:54, Andres Freund and...@anarazel.de wrote:
 On 2015-05-20 17:44:05 +0100, Thom Brown wrote:
 On 8 May 2015 at 16:03, Andres Freund and...@anarazel.de wrote:
  So I've committed the patch yesterday evening. I'm pretty sure there'll
  be some more minor things to change. But overall I feel good about the
  current state.
 
  It'd be quite helpful if others could read the docs, specifically for
  insert, and comment whether they're understandable. I've spent a fair
  amount of time trying to make them somewhat simpler, but I do think I
  only succeeded partially.  And there's also my own brand of english to
  consider ;)

 The docs say "Note that exclusion constraints are not supported with
 ON CONFLICT DO UPDATE."

 But I get the following error message text:

 ERROR:  there is no unique or exclusion constraint matching the ON
 CONFLICT specification

 This implies that an exclusion constraint is valid in the statement,
 which contradicts the docs.  Which one is correct?

 ON CONFLICT can be used for ... DO NOTHING as well.

Yes, but still confusing when not using DO NOTHING.

-- 
Thom




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-20 Thread Andres Freund
On 2015-05-20 17:44:05 +0100, Thom Brown wrote:
 On 8 May 2015 at 16:03, Andres Freund and...@anarazel.de wrote:
  So I've committed the patch yesterday evening. I'm pretty sure there'll
  be some more minor things to change. But overall I feel good about the
  current state.
 
  It'd be quite helpful if others could read the docs, specifically for
  insert, and comment whether they're understandable. I've spent a fair
  amount of time trying to make them somewhat simpler, but I do think I
  only succeeded partially.  And there's also my own brand of english to
  consider ;)
 
 The docs say "Note that exclusion constraints are not supported with
 ON CONFLICT DO UPDATE."
 
 But I get the following error message text:
 
 ERROR:  there is no unique or exclusion constraint matching the ON
 CONFLICT specification
 
 This implies that an exclusion constraint is valid in the statement,
 which contradicts the docs.  Which one is correct?

ON CONFLICT can be used for ... DO NOTHING as well.

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-20 Thread Andres Freund
On 2015-05-20 18:09:05 +0100, Thom Brown wrote:
 On 20 May 2015 at 17:54, Andres Freund and...@anarazel.de wrote:
  On 2015-05-20 17:44:05 +0100, Thom Brown wrote:
  The docs say "Note that exclusion constraints are not supported with
  ON CONFLICT DO UPDATE."
 
  But I get the following error message text:
 
  ERROR:  there is no unique or exclusion constraint matching the ON
  CONFLICT specification
 
  This implies that an exclusion constraint is valid in the statement,
  which contradicts the docs.  Which one is correct?
 
  ON CONFLICT can be used for ... DO NOTHING as well.
 
 Yes, but still confusing when not using DO NOTHING.

I'm not sure I can follow. INSERT INTO account VALUES(...) ON CONFLICT
(email) DO NOTHING; seems to make sense to me?

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-20 Thread Thom Brown
On 8 May 2015 at 16:03, Andres Freund and...@anarazel.de wrote:
 So I've committed the patch yesterday evening. I'm pretty sure there'll
 be some more minor things to change. But overall I feel good about the
 current state.

 It'd be quite helpful if others could read the docs, specifically for
 insert, and comment whether they're understandable. I've spent a fair
 amount of time trying to make them somewhat simpler, but I do think I
 only succeeded partially.  And there's also my own brand of english to
 consider ;)

The docs say "Note that exclusion constraints are not supported with
ON CONFLICT DO UPDATE."

But I get the following error message text:

ERROR:  there is no unique or exclusion constraint matching the ON
CONFLICT specification

This implies that an exclusion constraint is valid in the statement,
which contradicts the docs.  Which one is correct?

-- 
Thom




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-20 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 On 2015-05-20 18:09:05 +0100, Thom Brown wrote:
 This implies that an exclusion constraint is valid in the statement,
 which contradicts the docs.  Which one is correct?

 ON CONFLICT can be used for ... DO NOTHING as well.

 Yes, but still confusing when not using DO NOTHING.

 I'm not sure I can follow. INSERT INTO account VALUES(...) ON CONFLICT
 (email) DO NOTHING; seems to make sense to me?

Sure, but on what basis does it decide that there's a conflict?

If you can't use an exclusion constraint to support the command,
then the error message shouldn't be worded like that.

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-20 Thread Andres Freund
On 2015-05-20 13:31:57 -0400, Tom Lane wrote:
 Sure, but on what basis does it decide that there's a conflict?
 
 If you can't use an exclusion constraint to support the command,
 then the error message shouldn't be worded like that.

But you *can* use an exclusion constraint for DO NOTHING. Just not (yet)
for DO UPDATE.
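
For illustration, a minimal sketch of that distinction (hypothetical table and
constraint, not from the thread; btree_gist provides the equality operator
class used by the constraint):

CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE TABLE booking (
    room   int,
    during tsrange,
    EXCLUDE USING gist (room WITH =, during WITH &&)
);

-- With no conflict target, DO NOTHING also covers conflicts raised by the
-- exclusion constraint:
INSERT INTO booking VALUES (1, '[2015-05-20 10:00, 2015-05-20 11:00)')
ON CONFLICT DO NOTHING;

-- DO UPDATE cannot use it as an arbiter; naming the exclusion constraint
-- there is rejected (per the patch, with an error along the lines of
-- "ON CONFLICT DO UPDATE not supported with exclusion constraints").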




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-20 Thread Peter Geoghegan
On Wed, May 20, 2015 at 10:37 AM, Andres Freund and...@anarazel.de wrote:
 But you *can* use an exclusion constraint for DO NOTHING. Just not (yet)
 for DO UPDATE.

FWIW, I don't think exclusion constraint DO UPDATE support is ever
going to be useful.


-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-20 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 On 2015-05-20 13:31:57 -0400, Tom Lane wrote:
 If you can't use an exclusion constraint to support the command,
 then the error message shouldn't be worded like that.

 But you *can* use an exclusion constraint for DO NOTHING. Just not (yet)
 for DO UPDATE.

Hm.  Maybe it would be worth having two different message texts for
the two cases, then?

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-20 Thread Andres Freund
On 2015-05-20 11:24:06 -0700, Peter Geoghegan wrote:
 On Wed, May 20, 2015 at 10:37 AM, Andres Freund and...@anarazel.de wrote:
  But you *can* use an exclusion constraint for DO NOTHING. Just not (yet)
  for DO UPDATE.
 
 FWIW, I don't think exclusion constraint DO UPDATE support is ever
 going to be useful.

Why?

Even if maybe not directly under the guise of exclusion constraints
themselves, I do think it's an interesting way to more easily allow
implementing unique constraints on !amcanunique-type indexes.  Or, more
interestingly, for unique keys spanning partitions.
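
A rough sketch of the kind of thing alluded to here - an exclusion constraint
that behaves like a unique constraint but is enforced through a GiST index
rather than an amcanunique index (hypothetical names, requires btree_gist):

CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE events (
    id      int,
    payload text,
    -- equivalent in effect to UNIQUE (id), but backed by GiST
    EXCLUDE USING gist (id WITH =)
);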




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-20 Thread Andres Freund
On 2015-05-20 12:07:56 -0700, Peter Geoghegan wrote:
 You're talking about exclusion constraints as an implementation detail
 of something interesting, which I had not considered.

I did mention those two use cases a bunch of times... ;)




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-20 Thread Peter Geoghegan
On Wed, May 20, 2015 at 11:26 AM, Andres Freund and...@anarazel.de wrote:
 Even if maybe not directly under the guise of exclusion constraints
 themselves, I do think it's an interesting way to more easily allow
 implementing unique constraints on !amcanunique-type indexes.  Or, more
 interestingly, for unique keys spanning partitions

Alright, then. It's just that at one point people seemed to think that
upsert should support exclusion constraints, and that position was, at
the time, lacking a good justification IMV. What you talk about here
seems much more practical than generalizing upsert to work with
exclusion constraints. You're talking about exclusion constraints as
an implementation detail of something interesting, which I had not
considered.


-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Andres Freund
So I've committed the patch yesterday evening. I'm pretty sure there'll
be some more minor things to change. But overall I feel good about the
current state.

It'd be quite helpful if others could read the docs, specifically for
insert, and comment whether they're understandable. I've spent a fair
amount of time trying to make them somewhat simpler, but I do think I
only succeeded partially.  And there's also my own brand of english to
consider ;)

Phew.




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Geoff Winkless
On 8 May 2015 at 16:03, Andres Freund and...@anarazel.de wrote:

 So I've committed the patch yesterday evening. I'm pretty sure there'll
 be some more minor things to change. But overall I feel good about the
 current state.

 It'd be quite helpful if others could read the docs, specifically for
 insert, and comment whether they're understandable. I've spent a fair
 amount of time trying to make them somewhat simpler, but I do think I
 only succeeded partially.  And there's also my own brand of english to
 consider ;)


"Omitted" only has one m.

There's an extra space in "error . (See".

Otherwise it reads fine to me, although I've only skimmed it.

I may have misunderstood: there is only one ON CONFLICT action allowed? I
thought the previous version suggested multiple possible targets and
actions; this suggests that while there can be multiple targets, the action
is always the same.

So I thought I could have

INSERT INTO distributors (did, dname)
  ON CONFLICT (did) DO UPDATE dname=target.dname
  ON CONFLICT (dname) DO NOTHING;

Did I misunderstand?

Finally there's no

INSERT INTO distributors (did, dname)
  SELECT did, dname FROM otherdists
ON CONFLICT (did) DO NOTHING;

example (or similar); do we think people will be smart enough to realise
that's possible without one?


Geoff


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Geoff Winkless
On 8 May 2015 at 16:51, Andres Freund and...@anarazel.de wrote:

 On 2015-05-08 16:36:07 +0100, Geoff Winkless wrote:
  I thought the previous version suggested multiple possible targets and
  actions, this suggests that while there can be multiple targets the
  action is always the same.

 I don't think any version of the patch included that functionality. I
 can see it being useful, but it'd make a bunch of things more
 complicated, so I doubt we'll get there for 9.5.


I'm not particularly bothered by it - I only see it ever being useful on
the extreme edge case, and I certainly wouldn't want it to hold up the
release - but it was the only reason I could see for requiring a
target_clause in the first place (I expect that's why I misread the docs
previously!). If you can't specify different actions based on the
target_clause, what's the point in forcing the user to enumerate it?

 example (or similar); do we think people will be smart enough to realise
  that's possible without one?

 Hm. I'm tempted to say that the synopsis makes that clear enough.


Unless I'm missing it, it's really only in This happens on a row-by-row
basis and in the deterministic paragraph that you even mention
multi-line inserts in the ON CONFLICT section.

 I personally never check such examples though, so maybe I'm the wrong
 person to judge.


I'm afraid I'm the sort of person who goes straight to the examples :)

Maybe I'll just suggest then that there's a _potential for confusion_ if
you only give examples of the first kind - people might place some
inference on the fact that the examples only show single-row INSERTs.

Geoff


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Andres Freund
On 2015-05-08 12:32:10 -0400, Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  So I've committed the patch yesterday evening. I'm pretty sure there'll
  be some more minor things to change. But overall I feel good about the
  current state.
 
 Looks like there's a CLOBBER_CACHE_ALWAYS issue ...
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jaguarundi&dt=2015-05-08%2011%3A52%3A00

Hm. I can't immediately think of anything CCA relevant in the patch. I
do wonder if it's possible that this is a consequence of
indcheckxmin. The unique index is created just before the failing
statement, after the table has had a couple updates. And the following
statement succeeds.

Currently index inference ignores indexes that aren't yet valid
according to checkxmin. I'm tempted to think that is a mistake. I think
we should throw an error instead; possibly at execution rather than plan
time. It'll be rather confusing for users if an INSERT ... ON CONFLICT
fails shortly after an index creation, but not later.  Not that an error
message will make them happy, but it'll be much less confusing.
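
A sketch of the sort of sequence that could hit this, assuming the pre-fix
behaviour where inference skips an index whose indcheckxmin is still pending
(hypothetical table and index names):

CREATE TABLE t (k int, v text);
INSERT INTO t VALUES (1, 'a');
UPDATE t SET v = 'b' WHERE k = 1;      -- leaves a recently-dead row version
CREATE UNIQUE INDEX t_k_idx ON t (k);  -- may get indcheckxmin set

-- Run immediately afterwards, inference may find no usable arbiter yet:
INSERT INTO t VALUES (1, 'c')
ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v;
-- ERROR:  there is no unique or exclusion constraint matching the
-- ON CONFLICT specification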

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 So I've committed the patch yesterday evening. I'm pretty sure there'll
 be some more minor things to change. But overall I feel good about the
 current state.

Looks like there's a CLOBBER_CACHE_ALWAYS issue ...
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jaguarundi&dt=2015-05-08%2011%3A52%3A00

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Andres Freund
On 2015-05-08 19:32:02 +0200, Andres Freund wrote:
 If the failure is indeed caused by checkxmin (trying to reproduce
 right now), we can just remove the updates in that subsection of the
 tests. They're not relevant.

Hm.  Or easier and uglier, replace the CREATE INDEX statements with
CREATE INDEX CONCURRENTLY.  There's a fair bunch of tests in that file,
and keeping them update free will be annoying.

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Heikki Linnakangas

On 05/08/2015 08:25 PM, Tom Lane wrote:

Andres Freund and...@anarazel.de writes:

On 2015-05-08 12:32:10 -0400, Tom Lane wrote:

Looks like there's a CLOBBER_CACHE_ALWAYS issue ...
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jaguarundi&dt=2015-05-08%2011%3A52%3A00



Currently index inference ignores indexes that aren't yet valid
according to checkxmin. I'm tempted to think that is a mistake. I think
we should throw an error instead; possibly at execution rather than plan
time.


If that's what is happening, then throwing an error is not going to fix
the problem of the regression test results being unstable; it'll just be
a different error that might or might not get thrown depending on timing.


Why does INSERT ON CONFLICT pay attention to indcheckxmin? Uniqueness 
check only cares about the most recent committed version of the tuple, 
and the index is good for that use immediately. If there was a problem
there, the uniqueness check in a normal insert would have the same problem.


- Heikki





Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Andres Freund
On 2015-05-08 16:36:07 +0100, Geoff Winkless wrote:
 "Omitted" only has one m.
 
 There's an extra space in "error . (See".
 
 Otherwise it reads fine to me, although I've only skimmed it.

Thanks, I'll push fixes for those.

 I may have misunderstood: there is only one ON CONFLICT action
 allowed?

Yes.

 I thought the previous version suggested multiple possible targets and
 actions, this suggests that while there can be multiple targets the
 action is always the same.

I don't think any version of the patch included that functionality. I
can see it being useful, but it'd make a bunch of things more
complicated, so I doubt we'll get there for 9.5.

 So I thought I could have
 
 INSERT INTO distributors (did, dname)
   ON CONFLICT (did) DO UPDATE dname=target.dname
   ON CONFLICT (dname) DO NOTHING;
 
 Did I misunderstand?
 
 Finally there's no
 
 INSERT INTO distributors (did, dname)
   SELECT did, dname FROM otherdists
 ON CONFLICT (did) DO NOTHING;
 
 example (or similar); do we think people will be smart enough to realise
 that's possible without one?

Hm. I'm tempted to say that the synopsis makes that clear enough.  I
personally never check such examples though, so maybe I'm the wrong
person to judge.




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 On 2015-05-08 12:32:10 -0400, Tom Lane wrote:
 Looks like there's a CLOBBER_CACHE_ALWAYS issue ...
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jaguarundi&dt=2015-05-08%2011%3A52%3A00

 Currently index inference ignores indexes that aren't yet valid
 according to checkxmin. I'm tempted to think that is a mistake. I think
 we should throw an error instead; possibly at execution rather than plan
 time.

If that's what is happening, then throwing an error is not going to fix
the problem of the regression test results being unstable; it'll just be
a different error that might or might not get thrown depending on timing.

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Andres Freund
On 2015-05-08 13:25:22 -0400, Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  On 2015-05-08 12:32:10 -0400, Tom Lane wrote:
  Looks like there's a CLOBBER_CACHE_ALWAYS issue ...
  http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jaguarundi&dt=2015-05-08%2011%3A52%3A00
 
  Currently index inference ignores indexes that aren't yet valid
  according to checkxmin. I'm tempted to think that is a mistake. I think
  we should throw an error instead; possibly at execution rather than plan
  time.
 
 If that's what is happening, then throwing an error is not going to fix
 the problem of the regression test results being unstable; it'll just be
 a different error that might or might not get thrown depending on timing.

Yep, that was more a comment about how the behaviour generally should
be.  If the failure is indeed caused by checkxmin (trying to reproduce
right now), we can just remove the updates in that subsection of the
tests. They're not relevant.

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 On 2015-05-08 11:10:00 -0700, Peter Geoghegan wrote:
 +1. I knew we should have done this before commit.

 Hrmpf.

 I couldn't hit the problem with CCA unfortunately, even after a bunch of
 tries; quite possibly it's too fast on my laptop.

Maybe just hold an open transaction in another session while you do what
the regression test does?  I think this is probably not a matter of CCA
per se but just timing.  It's unfortunate that the test in question is
run serially without other transactions occurring concurrently, as that
would likely have exposed the problem far sooner.

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 I think Peter (on IM) just found a more likely explanation than mine.
	index_close(idxRel, NoLock);
	heap_close(relation, NoLock);
	candidates = lappend_oid(candidates, idxForm->indexrelid);
 ...
 Yes. idxForm points into idxRel->rd_index.

Ooops.  But shouldn't that have failed 100% of the time in a CCA build?
Or is the candidates list fairly noncritical?

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Andres Freund
On 2015-05-08 11:10:00 -0700, Peter Geoghegan wrote:
 +1. I knew we should have done this before commit.

Hrmpf.

I couldn't hit the problem with CCA unfortunately, even after a bunch of
tries; quite possibly it's too fast on my laptop.  So I'll just have to
remove the check and we'll see whether it makes jaguarundi happy over
the next week or so.




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Andres Freund
On 2015-05-08 14:59:22 -0400, Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  I think Peter (on IM) just found a more likely explanation than mine.
  index_close(idxRel, NoLock);
  heap_close(relation, NoLock);
  candidates = lappend_oid(candidates, idxForm->indexrelid);
  ...
  Yes. idxForm points into idxRel->rd_index.
 
 Ooops.  But shouldn't that have failed 100% of the time in a CCA build?
 Or is the candidates list fairly noncritical?

Afaics RELCACHE_FORCE_RELEASE is independent of CCA (should we change
that?).  So this is probably a bug, just not a relevant bug. I can't see
how it'd take effect in this case.

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Andres Freund
On 2015-05-08 14:30:46 -0400, Tom Lane wrote:
 Maybe just hold an open transaction in another session while you do what
 the regression test does?  I think this is probably not a matter of CCA
 per se but just timing.  It's unfortunate that the test in question is
 run serially without other transactions occurring concurrently, as that
 would likely have exposed the problem far sooner.

I think Peter (on IM) just found a more likely explanation than mine.

index_close(idxRel, NoLock);
heap_close(relation, NoLock);
candidates = lappend_oid(candidates, idxForm->indexrelid);
...

Yes. idxForm points into idxRel->rd_index.

He's working on a fix for both this and removal of the still unnecessary
indcheckxmin check.

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Peter Geoghegan
On Fri, May 8, 2015 at 11:35 AM, Andres Freund and...@anarazel.de wrote:
 I think Peter (on IM) just found a more likely explanation than mine.

 index_close(idxRel, NoLock);
 heap_close(relation, NoLock);
 candidates = lappend_oid(candidates, idxForm->indexrelid);
 ...

 Yes. idxForm points into idxRel->rd_index.

 He's working on a fix for both this and removal of the still unnecessary
 indcheckxmin check.

This was my (last minute) blunder, in case it matters.

Attached patch fixes the bug, and removes the unnecessary indcheckxmin check.

-- 
Peter Geoghegan
diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c
index 8bcc506..1d555ed 100644
--- a/src/backend/optimizer/util/plancat.c
+++ b/src/backend/optimizer/util/plancat.c
@@ -547,15 +547,11 @@ infer_arbiter_indexes(PlannerInfo *root)
 			goto next;
 
 		/*
-		 * If the index is valid, but cannot yet be used, ignore it. See
-		 * src/backend/access/heap/README.HOT for discussion.
-		 */
-		if (idxForm->indcheckxmin &&
-			!TransactionIdPrecedes(HeapTupleHeaderGetXmin(idxRel->rd_indextuple->t_data),
-								   TransactionXmin))
-			goto next;
-
-		/*
+		 * Note that we do not perform a get_relation_info()-style check
+		 * against indcheckxmin here to eliminate candidates, because
+		 * uniqueness checking only cares about the most recently committed
+		 * tuple versions.
+		 *
 		 * Look for match on ON constraint_name variant, which may not be
 		 * unique constraint.  This can only be a constraint name.
 		 */
@@ -566,10 +562,10 @@ infer_arbiter_indexes(PlannerInfo *root)
 		(errcode(ERRCODE_WRONG_OBJECT_TYPE),
 		 errmsg("ON CONFLICT DO UPDATE not supported with exclusion constraints")));
 
+			candidates = lappend_oid(candidates, idxForm->indexrelid);
 			list_free(indexList);
 			index_close(idxRel, NoLock);
 			heap_close(relation, NoLock);
-			candidates = lappend_oid(candidates, idxForm->indexrelid);
 			return candidates;
 		}
 		else if (indexOidFromConstraint != InvalidOid)



Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Peter Geoghegan
On Fri, May 8, 2015 at 12:00 PM, Peter Geoghegan p...@heroku.com wrote:
 On Fri, May 8, 2015 at 11:59 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Ooops.  But shouldn't that have failed 100% of the time in a CCA build?
 Or is the candidates list fairly noncritical?

 The candidates list is absolutely critical.

However, the problematic code path is only hit a couple of times in the
regression test.


-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
 I wrote:
  Peter Geoghegan p...@heroku.com writes:
  On Fri, May 8, 2015 at 11:59 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  Ooops.  But shouldn't that have failed 100% of the time in a CCA build?
  Or is the candidates list fairly noncritical?
 
  The candidates list is absolutely critical.
 
  Oh, I was confusing CCA with RELCACHE_FORCE_RELEASE, which does something
  a bit different.
 
 Actually, looking closer, the quoted code is simply not broken without
 RELCACHE_FORCE_RELEASE: without that, neither heap_close nor index_close
 will do anything that could cause a cache flush.  So while it's certainly
 good practice to move that lappend_oid call up, it does not explain the
 observed symptoms.  We still need some more investigation here.

Couldn't a cache flush request come from another backend?  Although this
isn't being run in a parallel group, is it?  Maybe a delayed signal that
happens to show up late at just the right time?  Dunno if we've ever
actually seen that but the thought occurred to me.

Thanks!

Stephen




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Peter Geoghegan
On Fri, May 8, 2015 at 11:06 AM, Andres Freund and...@anarazel.de wrote:
 On 2015-05-08 20:37:15 +0300, Heikki Linnakangas wrote:
 Why does INSERT ON CONFLICT pay attention to indcheckxmin? Uniqueness check
 only cares about the most recent committed version of the tuple, and the
 index is good for that use immediately. If there was a problem there, the
 uniqueness check in a normal insert would have the same problem.

 Yea, that's a good angle to view this from.  That'd not be the case, I
 think, if we'd allow this to be used on system relations. But since
 neither creating an index on system relations, nor using INSERT ON
 CONFLICT on them is supported...


+1. I knew we should have done this before commit.

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Peter Geoghegan
On Fri, May 8, 2015 at 11:59 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Ooops.  But shouldn't that have failed 100% of the time in a CCA build?
 Or is the candidates list fairly noncritical?

The candidates list is absolutely critical.

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Tom Lane
Peter Geoghegan p...@heroku.com writes:
 On Fri, May 8, 2015 at 11:59 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Ooops.  But shouldn't that have failed 100% of the time in a CCA build?
 Or is the candidates list fairly noncritical?

 The candidates list is absolutely critical.

Oh, I was confusing CCA with RELCACHE_FORCE_RELEASE, which does something
a bit different.  I wonder whether we should get rid of that symbol and
just drive the test in RelationClose off CLOBBER_CACHE_ALWAYS.
(Ditto for CATCACHE_FORCE_RELEASE.)  Or maybe make CLOBBER_CACHE_ALWAYS
#define the other two symbols.

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Stephen Frost
* Peter Geoghegan (p...@heroku.com) wrote:
 On Fri, May 8, 2015 at 12:00 PM, Peter Geoghegan p...@heroku.com wrote:
  On Fri, May 8, 2015 at 11:59 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  Ooops.  But shouldn't that have failed 100% of the time in a CCA build?
  Or is the candidates list fairly noncritical?
 
  The candidates list is absolutely critical.
 
 However, the problematic code path is only hit a couple of times in the
 regression test.

To Tom's point, it shouldn't actually matter how many times it's in the
regression test, should it?  I'm not saying you're wrong about the
cause, but it's certainly interesting that it didn't consistently blow
up with CCA..

Thanks!

Stephen




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Tom Lane
I wrote:
 Peter Geoghegan p...@heroku.com writes:
 On Fri, May 8, 2015 at 11:59 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Ooops.  But shouldn't that have failed 100% of the time in a CCA build?
 Or is the candidates list fairly noncritical?

 The candidates list is absolutely critical.

 Oh, I was confusing CCA with RELCACHE_FORCE_RELEASE, which does something
 a bit different.

Actually, looking closer, the quoted code is simply not broken without
RELCACHE_FORCE_RELEASE: without that, neither heap_close nor index_close
will do anything that could cause a cache flush.  So while it's certainly
good practice to move that lappend_oid call up, it does not explain the
observed symptoms.  We still need some more investigation here.

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Andres Freund
On 2015-05-08 20:37:15 +0300, Heikki Linnakangas wrote:
 Why does INSERT ON CONFLICT pay attention to indcheckxmin? Uniqueness check
 only cares about the most recent committed version of the tuple, and the
 index is good for that use immediately. If there was a problem there, the
 uniqueness check in a normal insert would have the same problem.

Yea, that's a good angle to view this from.  That'd not be the case, I
think, if we'd allow this to be used on system relations. But since
neither creating an index on system relations, nor using INSERT ON
CONFLICT on them is supported...

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
 Peter Geoghegan p...@heroku.com writes:
  On Fri, May 8, 2015 at 11:59 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  Ooops.  But shouldn't that have failed 100% of the time in a CCA build?
  Or is the candidates list fairly noncritical?
 
  The candidates list is absolutely critical.
 
 Oh, I was confusing CCA with RELCACHE_FORCE_RELEASE, which does something
 a bit different.  I wonder whether we should get rid of that symbol and
 just drive the test in RelationClose off CLOBBER_CACHE_ALWAYS.
 (Ditto for CATCACHE_FORCE_RELEASE.)  Or maybe make CLOBBER_CACHE_ALWAYS
 #define the other two symbols.

Ah.  Seems like that'd make sense to me, though I guess I'd prefer just
driving it all off of CCA.

Thanks!

Stephen




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Tom Lane
Stephen Frost sfr...@snowman.net writes:
 * Tom Lane (t...@sss.pgh.pa.us) wrote:
 Actually, looking closer, the quoted code is simply not broken without
 RELCACHE_FORCE_RELEASE: without that, neither heap_close nor index_close
 will do anything that could cause a cache flush.  So while it's certainly
 good practice to move that lappend_oid call up, it does not explain the
 observed symptoms.  We still need some more investigation here.

 Couldn't a cache flush request come from another backend?  Although this
 isn't being run in a parallel group, is it?  Maybe a delayed signal that
 happens to show up late at just the right time?  Dunno if we've ever
 actually seen that but the thought occurred to me.

A signal could certainly have arrived in that interval, but it wouldn't be
serviced in that interval --- or if it was, we have far worse problems
than this one.  Nothing interesting should happen except at (at least) a
CHECK_FOR_INTERRUPTS() point, and there is none in this code sequence.

I'm back to suspecting that the indcheckxmin issue is the true cause of
the buildfarm failure, though we lack an explanation why Andres failed
to reproduce it ...

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Andres Freund
On 2015-05-08 15:22:09 -0400, Tom Lane wrote:
 I'm back to suspecting that the indcheckxmin issue is the true cause of
 the buildfarm failure

Me too.

 though we lack an explanation why Andres failed to reproduce it ...

My laptop is probably a good bit faster than jaguarundi, particularly in
a single-threaded workload like that test, due to turbo
boost. Friarbird didn't trigger it either so far.  It's quite possible
that it requires a concurrent analyze or so running at the wrong
moment.

I've pushed the fixes to those two. I guess we'll have to wait a couple
(slow) buildfarm cycles to see whether it fixes things.

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Andres Freund
On 2015-05-08 22:29:47 +0200, Andres Freund wrote:
 On 2015-05-08 15:22:09 -0400, Tom Lane wrote:
  I'm back to suspecting that the indcheckxmin issue is the true cause of
  the buildfarm failure

  though we lack an explanation why Andres failed to reproduce it ...
 
 My laptop is probably a good bit faster than jaguarundi, particularly in
 a single-threaded workload like that test, due to turbo
 boost. Friarbird didn't trigger it either so far.  It's quite possible
 that it requires a concurrent analyze or so running at the wrong
 moment.

prairiedog, without CCA, failed as well:
http://pgbuildfarm.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2015-05-08%2019%3A55%3A11
A different test, but again directly after index creation. So I hope it's
indeed the indcheckxmin thing.

Andres




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-08 Thread Tom Lane
Andres Freund and...@anarazel.de writes:
 prairiedog, without CCA, failed as well:
 http://pgbuildfarm.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2015-05-08%2019%3A55%3A11
 A different test, but again directly after index creation. So I hope it's
 indeed the indcheckxmin thing.

Oh, interesting.  That definitely suggests it's about machine speed/timing
not CCA per se (prairiedog being quite slow).  Which would fit with the
indcheckxmin theory.

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-06 Thread Andres Freund
On 2015-05-06 22:51:43 +0300, Heikki Linnakangas wrote:
 Yeah, I agree that DO NOTHING should not lock the rows. It might make sense
 to have a DO LOCK variant, which locks the rows, although I don't
 immediately see what the use case would be.

If you want to do something more complicated with the row than what you
can do in the UPDATE. To do that right now you either need to do the DO
UPDATE SET ... WHERE false; and refetch the tuple, which might not be
easy, or do a DO UPDATE SET pkey = target.pkey, which produces
bloat. Consider e.g. you're applying incoming data and in case of
conflict you want to call a user-defined function deciding which row
should survive.
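
Rough sketches of the two workarounds just described, with hypothetical names
(tab, pkey, data); in the syntax discussed in this thread, target refers to
the existing row and EXCLUDED to the proposed one:

-- Lock the conflicting row without changing it; the row is not projected by
-- RETURNING, so it has to be refetched separately:
INSERT INTO tab (pkey, data) VALUES (1, 'x')
ON CONFLICT (pkey) DO UPDATE SET data = EXCLUDED.data WHERE false;

-- Or take the lock via a no-op assignment, which lets RETURNING see the row
-- but writes a new tuple version (bloat):
INSERT INTO tab (pkey, data) VALUES (1, 'x')
ON CONFLICT (pkey) DO UPDATE SET pkey = target.pkey
RETURNING pkey, data;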

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-06 Thread Peter Geoghegan
On Tue, May 5, 2015 at 10:31 AM, Andres Freund and...@anarazel.de wrote:
 Another thing I'm wondering about is dealing with deferrable
 constraints/deferred indexes.

 a) Why does ExecCheckIndexConstraints() check for indisimmediate for
*all* indexes and not just when it's an arbiter index? That seems
needlessly restrictive.

I guess it's a legacy of the time when it was all or nothing. But
supporting that would involve further complicating the interface with
ExecCheckIndexConstraints() so that we cared about the distinction
between conflicts for one purpose and conflicts for another. It could
be done, though.

 b) There's a doc addition to set_constraints.sgml
 +   constraints are affected by this setting.  Note that constraints
 +   that are literalDEFERRED/literal cannot be used as arbiters by
 +   the literalON CONFLICT/ clause that commandINSERT/
 +   supports.

 which I don't think makes sense: SET CONSTRAINTS will never change
 anything relevant for ON CONFLICT, right?

You're right.

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-06 Thread Heikki Linnakangas

On 05/06/2015 10:47 PM, Peter Geoghegan wrote:

On Wed, May 6, 2015 at 8:20 AM, Andres Freund and...@anarazel.de wrote:

On 2015-05-05 15:00:56 -0700, Peter Geoghegan wrote:

Locking the row is not nothing, though. If you want to lock the row,
use an UPSERT with a tautologically false WHERE clause (like WHERE
false).


That's not the same. For one it breaks RETURNING which is a death
knell, for another it's not exactly obvious.


DO NOTHING already doesn't project non-inserted tuples, in a way that
fits with the way we won't do that when a before trigger returns NULL.
So I don't know what you mean about RETURNING behavior.

It may not be all that obvious, but then the requirement that you
mention isn't either. I really strongly feel that DO NOTHING should do
nothing. For the pgloader use-case, which is what I have in mind with
that variant, that could literally make the difference between
dirtying an enormous number of buffers and dirtying only a few. This
will *frequently* be the case. And it's not as if the idea of an
INSERT IGNORE is new or in any way novel. As I mentioned, many systems
have a comparable command.

So, yes, DO NOTHING does very little - and that is its appeal.
Supporting this behavior does not short change those who actually care
about the existing tuple sticking around for the duration of their
transaction - they have a way of doing that. If you want to document
INSERT IGNORE as being primarily an ETL-orientated thing, that would
make sense, but let's not hobble that use case.


Yeah, I agree that DO NOTHING should not lock the rows. It might make 
sense to have a DO LOCK variant, which locks the rows, although I don't 
immediately see what the use case would be.


- Heikki





Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-06 Thread Peter Geoghegan
On Wed, May 6, 2015 at 8:20 AM, Andres Freund and...@anarazel.de wrote:
 On 2015-05-05 15:00:56 -0700, Peter Geoghegan wrote:
 Locking the row is not nothing, though. If you want to lock the row,
 use an UPSERT with a tautologically false WHERE clause (like WHERE
 false).

 That's not the same. For one it breaks RETURNING which is a death
 knell, for another it's not exactly obvious.

DO NOTHING already doesn't project non-inserted tuples, in a way that
fits with the way we won't do that when a before trigger returns NULL.
So I don't know what you mean about RETURNING behavior.
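
For reference, a minimal sketch of that RETURNING behaviour (hypothetical
table):

CREATE TABLE account (email text PRIMARY KEY, name text);
INSERT INTO account VALUES ('a@example.com', 'A');

INSERT INTO account VALUES ('a@example.com', 'A2'), ('b@example.com', 'B')
ON CONFLICT (email) DO NOTHING
RETURNING email;
-- only 'b@example.com' comes back; the conflicting row is neither locked
-- nor projected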

It may not be all that obvious, but then the requirement that you
mention isn't either. I really strongly feel that DO NOTHING should do
nothing. For the pgloader use-case, which is what I have in mind with
that variant, that could literally make the difference between
dirtying an enormous number of buffers and dirtying only a few. This
will *frequently* be the case. And it's not as if the idea of an
INSERT IGNORE is new or in any way novel. As I mentioned, many systems
have a comparable command.

So, yes, DO NOTHING does very little - and that is its appeal.
Supporting this behavior does not short change those who actually care
about the existing tuple sticking around for the duration of their
transaction - they have a way of doing that. If you want to document
INSERT IGNORE as being primarily an ETL-orientated thing, that would
make sense, but let's not hobble that use case.

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-06 Thread Andreas Karlsson

On 05/06/2015 09:51 PM, Heikki Linnakangas wrote:

So, yes, DO NOTHING does very little - and that is its appeal.
Supporting this behavior does not short change those who actually care
about the existing tuple sticking around for the duration of their
transaction - they have a way of doing that. If you want to document
INSERT IGNORE as being primarily an ETL-orientated thing, that would
make sense, but let's not hobble that use case.


Yeah, I agree that DO NOTHING should not lock the rows. It might make
sense to have a DO LOCK variant, which locks the rows, although I don't
immediately see what the use case would be.


It seems like a very useful feature to me, if you want to upsert 
something into a table with a serial column and get the value of the 
serial column in a RETURNING clause (but not update any fields if there 
is a conflict). Then I am pretty sure I want to lock the row.
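
A sketch of that use case, with hypothetical names; since DO NOTHING returns
no row on conflict, getting the serial value back means locking (and here
rewriting) the existing row:

CREATE TABLE tags (id serial PRIMARY KEY, name text UNIQUE);

INSERT INTO tags (name) VALUES ('postgres')
ON CONFLICT (name) DO UPDATE SET name = EXCLUDED.name
RETURNING id;   -- id of the inserted or pre-existing row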


Andreas




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-06 Thread Andres Freund
On 2015-05-05 15:00:56 -0700, Peter Geoghegan wrote:
 Locking the row is not nothing, though. If you want to lock the row,
 use an UPSERT with a tautologically false WHERE clause (like WHERE
 false).

That's not the same. For one it breaks RETURNING which is a death
knell, for another it's not exactly obvious.

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-05 Thread Peter Geoghegan
On Tue, May 5, 2015 at 8:40 AM, Andres Freund and...@anarazel.de wrote:
 One additional thing I'm wondering about is the following: Right now
 INSERT ... ON CONFLICT NOTHING does not acquire a row level lock on the
 'target' tuple. Are we really ok with that? Because in this form ON
 CONFLICT NOTHING really doesn't guarantee much, the conflicting tuple
 could just be deleted directly after the check.  ISTM we should just
 acquire the lock in the same way ExecOnConflictUpdate does. In the
 majority of the cases that'll be what users actually expect
 behaviourally.

Locking the row is not nothing, though. If you want to lock the row,
use an UPSERT with a tautologically false WHERE clause (like WHERE
false).

This is how other similar ignore features work in other systems,
including MySQL, SQLite, and Oracle (which surprisingly has a hint
that does this - surprising only because hints don't usually change
the meaning of statements). It could make a big difference with a
large bulk loading session, because just locking the rows will
generate WAL and dirty pages. ETL is really the use-case for DO
NOTHING.


-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-05 Thread Andres Freund
On 2015-04-26 18:02:06 -0700, Peter Geoghegan wrote:
 Remaining challenges
 =

Another thing I'm wondering about is dealing with deferrable
constraints/deferred indexes.

a) Why does ExecCheckIndexConstraints() check for indisimmediate for
   *all* indexes and not just when it's an arbiter index? That seems
   needlessly restrictive.

b) There's a doc addition to set_constraints.sgml
+   constraints are affected by this setting.  Note that constraints
+   that are literalDEFERRED/literal cannot be used as arbiters by
+   the literalON CONFLICT/ clause that commandINSERT/
+   supports.

which I don't think makes sense: SET CONSTRAINTS will never change
anything relevant for ON CONFLICT, right?
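
For context, a sketch of the restriction the doc addition describes, with a
hypothetical table (assuming deferrable constraints indeed cannot act as
arbiters):

CREATE TABLE d (
    k int UNIQUE DEFERRABLE INITIALLY DEFERRED,
    v text
);

-- The deferrable unique constraint cannot be chosen as the arbiter, so this
-- is rejected rather than having the check deferred:
INSERT INTO d VALUES (1, 'x')
ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v;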

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-05 Thread Andres Freund
Hi,

On 2015-04-26 18:02:06 -0700, Peter Geoghegan wrote:
 Remaining challenges
 =

One additional thing I'm wondering about is the following: Right now
INSERT ... ON CONFLICT NOTHING does not acquire a row level lock on the
'target' tuple. Are we really ok with that? Because in this form ON
CONFLICT NOTHING really doesn't guarantee much, the conflicting tuple
could just be deleted directly after the check.  ISTM we should just
acquire the lock in the same way ExecOnConflictUpdate does. In the
majority of the cases that'll be what users actually expect
behaviourally.

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-04 Thread Peter Geoghegan
On Fri, May 1, 2015 at 7:49 AM, Andres Freund and...@anarazel.de wrote:
 seems weird for both the BEFORE INSERT and BEFORE UPDATE triggers to
 get a crack at the same tuple, so your way might be better after all.
 But on the other hand, the BEFORE INSERT trigger might have had side
 effects, so we can't just pretend it didn't happen.

 Well, I think it's pretty unavoidable to fire both. On that part I think
 were pretty lengthy discussions a year or so back.

 One idea is to decide that an INSERT with an ON CONFLICT UPDATE
 handler is still an INSERT.  Period.  So the INSERT triggers run, the
 UPDATE triggers don't, and that's it.

 I think that'd be much worse.

A question has come up about RTEs, column-level privileges and BEFORE
triggers. This commit message gives a summary:

https://github.com/petergeoghegan/postgres/commit/87b9f27055e81d1396db3d10a5e9d01c52603783

I'm pretty sure that I'll have to require SELECT privileges of the
EXCLUDED.* RTE too (i.e. revert that commit, and probably document the
new behavior of requiring that). I think it's worth getting feedback
on this point, though, since it is a somewhat subtle issue.
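
A sketch of the scenario in question, with hypothetical names, assuming the
behaviour Peter describes is adopted (SELECT privilege required on columns
read through EXCLUDED.*):

CREATE TABLE t (k int PRIMARY KEY, v text);
CREATE ROLE loader;
GRANT INSERT, UPDATE ON t TO loader;   -- note: no SELECT

-- As loader: reading EXCLUDED.v in the SET list counts as a read of v,
-- so without SELECT privilege (at least on v) this would be rejected:
INSERT INTO t VALUES (1, 'x')
ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v;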

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-04 Thread Andres Freund
On 2015-05-04 19:13:27 -0700, Peter Geoghegan wrote:
 A question has come up about RTEs, column-level privileges and BEFORE
 triggers. This commit message gives a summary:
 
 https://github.com/petergeoghegan/postgres/commit/87b9f27055e81d1396db3d10a5e9d01c52603783
 
 I'm pretty sure that I'll have to require SELECT privileges of the
 EXCLUDED.* RTE too (i.e. revert that commit, and probably document the
 new behavior of requiring that). I think it's worth getting feedback
 on this point, though, since it is a somewhat subtle issue.

I think it's pretty clear that we'll have to require that.

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-04 Thread Peter Geoghegan
On Mon, May 4, 2015 at 9:00 PM, Andres Freund and...@anarazel.de wrote:
 I think it's pretty clear that we'll have to require that.

Okay, then. I'll push out revised testing of column-level privileges
later. (Andres rebased, and we're now pushing code to:
https://github.com/petergeoghegan/postgres/commits/insert_conflict_5).

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Andres Freund
On 2015-05-01 10:06:42 -0400, Robert Haas wrote:
 On Fri, May 1, 2015 at 9:58 AM, Andres Freund and...@anarazel.de wrote:
  would you rather have EXCLUDED.data refer to the tuple version from
  VALUES (or a SELECT or ...) or to version from the BEFORE trigger?
 
 I think it would be completely shocking if it didn't refer to the
 version returned by the BEFORE trigger.  My interpretation of the
 semantics of BEFORE triggers is that, once the trigger has fired and
 returned a new tuple, things should proceed just as if that new tuple
 were the one originally provided by the user.

Well, it's a BEFORE INSERT trigger, not a BEFORE UPDATE, that's why I'm
not so sure that argument applies.

Peter also leaned in your direction and you apparently have a strong
opinion on it... So I guess that'll be it unless somebody else weighs
in.

Greetings,

Andres Freund




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Robert Haas
On Fri, May 1, 2015 at 10:10 AM, Andres Freund and...@anarazel.de wrote:
 On 2015-05-01 10:06:42 -0400, Robert Haas wrote:
 On Fri, May 1, 2015 at 9:58 AM, Andres Freund and...@anarazel.de wrote:
  would you rather have EXCLUDED.data refer to the tuple version from
  VALUES (or a SELECT or ...) or to version from the BEFORE trigger?

 I think it would be completely shocking if it didn't refer to the
 version returned by the BEFORE trigger.  My interpretation of the
 semantics of BEFORE triggers is that, once the trigger has fired and
 returned a new tuple, things should proceed just as if that new tuple
 were the one originally provided by the user.

 Well, it's a BEFORE INSERT trigger, not a BEFORE UPDATE, that's why I'm
 not so sure that argument applies.

Would the BEFORE UPDATE trigger even fire in this case?

The thing is, suppose somebody puts a BEFORE INSERT trigger and a
BEFORE UPDATE trigger on the table, and each of those triggers does
this:

NEW.last_updated_time = clock_timestamp();
return NEW;

That should work, and should cover all cases, even if you're using UPSERT.
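
A fuller sketch of what that could look like in plpgsql (hypothetical table
and column names):

CREATE OR REPLACE FUNCTION stamp_last_updated() RETURNS trigger AS $$
BEGIN
    NEW.last_updated_time := clock_timestamp();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- The same function backs both triggers, so the column gets stamped
-- whichever path the upsert ends up taking:
CREATE TRIGGER set_time_ins BEFORE INSERT ON some_table
    FOR EACH ROW EXECUTE PROCEDURE stamp_last_updated();
CREATE TRIGGER set_time_upd BEFORE UPDATE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE stamp_last_updated();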

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Heikki Linnakangas

On 04/30/2015 11:09 PM, Peter Geoghegan wrote:

I've been unable to reproduce the unprincipled deadlock using the same
test case as before. However, the exclusion constraint code now
livelocks. Here is example output from a stress-testing session:

...

[Fri May  1 04:45:35 2015] normal exit at 1430455535 after 128000
items processed at count_upsert_exclusion.pl line 192.
trying 128 clients:
[Fri May  1 04:45:58 2015] NOTICE:  extension btree_gist already
exists, skipping
[Fri May  1 04:45:58 2015] init done at count_upsert_exclusion.pl line 106.


(I ssh into server, check progress). Then, due to some issue with the
scheduler or something, progress continues:

[Fri May  1 05:17:57 2015] sum is 462
[Fri May  1 05:17:57 2015] count is 8904
[Fri May  1 05:17:58 2015] normal exit at 1430457478 after 128000
items processed at count_upsert_exclusion.pl line 192.
trying 128 clients:


Hmm, so it was stuck for half an hour at that point? Why do you think it 
was a livelock?



This is the same server that I shared credentials with you for. Feel
free to ssh in and investigate it yourself.


I logged in, but the system seems very unresponsive in general. I just 
started apt-get install gdb on it, to investigate what the backends 
are stuck at. It's been running for about 30 minutes now, and I'm still 
waiting...


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Robert Haas
On Fri, May 1, 2015 at 10:49 AM, Andres Freund and...@anarazel.de wrote:
 One idea is to decide that an INSERT with an ON CONFLICT UPDATE
 handler is still an INSERT.  Period.  So the INSERT triggers run, the
 UPDATE triggers don't, and that's it.

 I think that'd be much worse.

OK.  Well, in that case, I guess I'm inclined to think that we
shouldn't throw away the tuple the BEFORE trigger constructs.

 One question, that I was wondering about earlier, that also might
 influence this discussion, is what happens if you change the key we're
 using for resolution during either the BEFORE trigger or the ON CONFLICT
 ... SET.  Right now the conflict will be on the version *after* the
 BEFORE INSERT, which I think is the only reasonable option.

 And if you're mad enough to change the key in the ON CONFLICT ... SET
 ... you'll possibly get conflicts. I was, over IM, arguing for that to
 be forbidden to protect users against themselves, but Heikki said people
 might e.g. want to set a column in a key to NULL in that case.

I'm not a big fan of trying to protect users against themselves.  I'm
fine with stupid user decisions resulting in errors.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
 OK.  In that case, I'm a lot less sure what the right decision is.  It
 seems weird for both the BEFORE INSERT and BEFORE UPDATE triggers to
 get a crack at the same tuple, so your way might be better after all.
 But on the other hand, the BEFORE INSERT trigger might have had side
 effects, so we can't just pretend it didn't happen.

I agree that having the before-insert and before-update triggers both
fire is a bit odd, but I also think it's the right thing to do.  If the
statements were independent instead of an INSERT .. ON CONFLICT, then
both sets of before triggers would very clearly fire.

 One idea is to decide that an INSERT with an ON CONFLICT UPDATE
 handler is still an INSERT.  Period.  So the INSERT triggers run, the
 UPDATE triggers don't, and that's it.

Another option would be to have an independent INSERT-ON-CONFLICT
before trigger which fires..  but, for my 2c, I'm happy to just fire
both.

I definitely feel that the EXCLUDED tuple should refer to the post
before-insert trigger; having it refer to a tuple that may not have
actually conflicted doesn't seem correct to me.

Thanks!

Stephen




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Andres Freund
On 2015-05-01 10:21:27 -0400, Robert Haas wrote:
 On Fri, May 1, 2015 at 10:10 AM, Andres Freund and...@anarazel.de wrote:
  On 2015-05-01 10:06:42 -0400, Robert Haas wrote:
  On Fri, May 1, 2015 at 9:58 AM, Andres Freund and...@anarazel.de wrote:
   would you rather have EXCLUDED.data refer to the tuple version from
   VALUES (or a SELECT or ...) or to version from the BEFORE trigger?
 
  I think it would be completely shocking if it didn't refer to the
  version returned by the BEFORE trigger.  My interpretation of the
  semantics of BEFORE triggers is that, once the trigger has fired and
  returned a new tuple, things should proceed just as if that new tuple
  were the one originally provided by the user.
 
  Well, it's a BEFORE INSERT trigger, not a BEFORE UPDATE, that's why I'm
  not so sure that argument applies.
 
 Would the BEFORE UPDATE trigger even fire in this case?

BEFORE UPDATE triggers fire for INSERT ... ON CONFLICT UPDATE iff
there's a conflict, yes.

 The thing is, suppose somebody puts a BEFORE INSERT trigger and a
 BEFORE UPDATE trigger on the table, and each of those triggers does
 this:
 
 NEW.last_updated_time = clock_timestamp();
 return NEW;
 
 That should work, and should cover all cases, even if you're using UPSERT.

The BEFORE UPDATE would catch things in this case.

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Robert Haas
On Fri, May 1, 2015 at 10:24 AM, Andres Freund and...@anarazel.de wrote:
  Well, it's a BEFORE INSERT trigger, not a BEFORE UPDATE, that's why I'm
  not so sure that argument applies.

 Would the BEFORE UPDATE trigger even fire in this case?

 BEFORE UPDATE triggers fire for INSERT ... ON CONFLICT UPDATE iff
 there's a conflict, yes.

 The thing is, suppose somebody puts a BEFORE INSERT trigger and a
 BEFORE UPDATE trigger on the table, and each of those triggers does
 this:

 NEW.last_updated_time = clock_timestamp();
 return NEW;

 That should work, and should cover all cases, even if you're using UPSERT.

 The BEFORE UPDATE would catch things in this case.

OK.  In that case, I'm a lot less sure what the right decision is.  It
seems weird for both the BEFORE INSERT and BEFORE UPDATE triggers to
get a crack at the same tuple, so your way might be better after all.
But on the other hand, the BEFORE INSERT trigger might have had side
effects, so we can't just pretend it didn't happen.

One idea is to decide that an INSERT with an ON CONFLICT UPDATE
handler is still an INSERT.  Period.  So the INSERT triggers run, the
UPDATE triggers don't, and that's it.

Not sure.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Andres Freund
On 2015-05-01 10:39:35 -0400, Robert Haas wrote:
 On Fri, May 1, 2015 at 10:24 AM, Andres Freund and...@anarazel.de wrote:
  The BEFORE UPDATE would catch things in this case.
 
 OK.  In that case, I'm a lot less sure what the right decision is.  It
 seems weird for both the BEFORE INSERT and BEFORE UPDATE triggers to
 get a crack at the same tuple, so your way might be better after all.
 But on the other hand, the BEFORE INSERT trigger might have had side
 effects, so we can't just pretend it didn't happen.

Well, I think it's pretty unavoidable to fire both. On that part I think there
were pretty lengthy discussions a year or so back.

 One idea is to decide that an INSERT with an ON CONFLICT UPDATE
 handler is still an INSERT.  Period.  So the INSERT triggers run, the
 UPDATE triggers don't, and that's it.

I think that'd be much worse.


One question, that I was wondering about earlier, that also might
influence this discussion, is what happens if you change the key we're
using for resolution during either the BEFORE trigger or the ON CONFLICT
... SET.  Right now the conflict will be on the version *after* the
BEFORE INSERT, which I think is the only reasonable option.

And if you're mad enough to change the key in the ON CONFLICT ... SET
... you'll possibly get conflicts. I was, over IM, arguing for that to
be forbidden to protect users against themselves, but Heikki said people
might e.g. want to set a column in a key to NULL in that case.
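
As a sketch of the sort of thing Heikki means (hypothetical table; note the
key column is only UNIQUE, not a primary key, so the assignment is legal):

create table queue (key int unique, data text);

insert into queue values (1, 'fresh')
on conflict (key) do update
    set key = NULL,            -- deliberately changes the conflict key
        data = EXCLUDED.data;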

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Peter Geoghegan
On Thu, Apr 30, 2015 at 7:00 PM, Heikki Linnakangas hlinn...@iki.fi wrote:
 To fix that, we need to fix the livelock insurance check so that A does
 not wait for B here. Because B is not a speculative insertion, A should
 cancel its speculative insertion and retry instead. (I pushed the one-line
 fix for that to your github repository)

I've been unable to reproduce the unprincipled deadlock using the same
test case as before. However, the exclusion constraint code now
livelocks. Here is example output from a stress-testing session:

trying 128 clients:
[Fri May  1 04:45:15 2015] NOTICE:  extension btree_gist already
exists, skipping
[Fri May  1 04:45:15 2015] init done at count_upsert_exclusion.pl line 106.
[Fri May  1 04:45:19 2015] sum is 96
[Fri May  1 04:45:19 2015] count is 8906
[Fri May  1 04:45:19 2015] normal exit at 1430455519 after 128000
items processed at count_upsert_exclusion.pl line 192.
trying 128 clients:
[Fri May  1 04:45:19 2015] NOTICE:  extension btree_gist already
exists, skipping
[Fri May  1 04:45:19 2015] init done at count_upsert_exclusion.pl line 106.
[Fri May  1 04:45:23 2015] sum is -610
[Fri May  1 04:45:23 2015] count is 8867
[Fri May  1 04:45:23 2015] normal exit at 1430455523 after 128000
items processed at count_upsert_exclusion.pl line 192.
trying 128 clients:
[Fri May  1 04:45:23 2015] NOTICE:  extension btree_gist already
exists, skipping
[Fri May  1 04:45:23 2015] init done at count_upsert_exclusion.pl line 106.
[Fri May  1 04:45:27 2015] sum is 352
[Fri May  1 04:45:27 2015] count is 8861
[Fri May  1 04:45:27 2015] normal exit at 1430455527 after 128000
items processed at count_upsert_exclusion.pl line 192.
trying 128 clients:
[Fri May  1 04:45:27 2015] NOTICE:  extension btree_gist already
exists, skipping
[Fri May  1 04:45:27 2015] init done at count_upsert_exclusion.pl line 106.
[Fri May  1 04:45:31 2015] sum is 190
[Fri May  1 04:45:31 2015] count is 8895
[Fri May  1 04:45:31 2015] normal exit at 1430455531 after 128000
items processed at count_upsert_exclusion.pl line 192.
trying 128 clients:
[Fri May  1 04:45:31 2015] NOTICE:  extension btree_gist already
exists, skipping
[Fri May  1 04:45:31 2015] init done at count_upsert_exclusion.pl line 106.
[Fri May  1 04:45:35 2015] sum is -76
[Fri May  1 04:45:35 2015] count is 8833
[Fri May  1 04:45:35 2015] normal exit at 1430455535 after 128000
items processed at count_upsert_exclusion.pl line 192.
trying 128 clients:
[Fri May  1 04:45:58 2015] NOTICE:  extension btree_gist already
exists, skipping
[Fri May  1 04:45:58 2015] init done at count_upsert_exclusion.pl line 106.


(I ssh into the server to check progress.) Then, due to some issue with the
scheduler or something, progress continues:

[Fri May  1 05:17:57 2015] sum is 462
[Fri May  1 05:17:57 2015] count is 8904
[Fri May  1 05:17:58 2015] normal exit at 1430457478 after 128000
items processed at count_upsert_exclusion.pl line 192.
trying 128 clients:
[Fri May  1 05:17:58 2015] NOTICE:  extension btree_gist already
exists, skipping
[Fri May  1 05:17:58 2015] init done at count_upsert_exclusion.pl line 106.
[Fri May  1 05:18:01 2015] sum is 316
[Fri May  1 05:18:01 2015] count is 8839
[Fri May  1 05:18:01 2015] normal exit at 1430457481 after 128000
items processed at count_upsert_exclusion.pl line 192.

Note that it's much easier to see non-uniform durations for each run,
for no good reason, than livelock per se. Most runs are 3-4
seconds, but then every so often one will last 25 seconds for no good
reason. So maybe this is better described as a lock starvation issue:

trying 128 clients:
[Fri May  1 05:41:16 2015] NOTICE:  extension btree_gist already
exists, skipping
[Fri May  1 05:41:16 2015] init done at count_upsert_exclusion.pl line 106.
[Fri May  1 05:41:19 2015] sum is -264
[Fri May  1 05:41:19 2015] count is 8886
[Fri May  1 05:41:20 2015] normal exit at 1430458880 after 128000
items processed at count_upsert_exclusion.pl line 192.
trying 128 clients:
[Fri May  1 05:41:20 2015] NOTICE:  extension btree_gist already
exists, skipping
[Fri May  1 05:41:20 2015] init done at count_upsert_exclusion.pl line 106.
[Fri May  1 05:41:43 2015] sum is -14
[Fri May  1 05:41:43 2015] count is 8894
[Fri May  1 05:41:43 2015] normal exit at 1430458903 after 128000
items processed at count_upsert_exclusion.pl line 192.
trying 128 clients:
[Fri May  1 05:41:43 2015] NOTICE:  extension btree_gist already
exists, skipping
[Fri May  1 05:41:43 2015] init done at count_upsert_exclusion.pl line 106.
[Fri May  1 05:41:47 2015] sum is -338
[Fri May  1 05:41:47 2015] count is 8867
[Fri May  1 05:41:47 2015] normal exit at 1430458907 after 128000
items processed at count_upsert_exclusion.pl line 192.

If I look at the Postgres server log itself, I see very long running
queries that match the following pattern:

2015-05-01 06:06:42 UTC [ 0 ]: LOG:  duration: 21855.521 ms
statement: delete from upsert_race_test where index='9399' and count=0
2015-05-01 06:06:42 UTC [ 0 ]: LOG:  

Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Andres Freund
On 2015-04-26 18:02:06 -0700, Peter Geoghegan wrote:
 Remaining challenges
 =

So I did the executor changes I'd mentioned downthread, and Peter agreed
that it'd be quite workable.

Right now this, besides cleanup, docs and syntax, leaves only one real
issue I know of: the question of what EXCLUDED actually refers to.

Consider a table
blarg(key int primary key, data text); with a BEFORE INSERT
trigger that modifies 'data'. For the statement

INSERT INTO blarg(key, data) VALUES(1, 'whatever')
ON CONFLICT (key) DO UPDATE SET data = EXCLUDED.data;

would you rather have EXCLUDED.data refer to the tuple version from
VALUES (or a SELECT or ...) or to the version from the BEFORE trigger?
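
For concreteness, a sketch of such a trigger (the function body is made up,
purely to make the two candidate values visible):

create or replace function blarg_munge_data() returns trigger
language plpgsql as $$
begin
    NEW.data := NEW.data || ' (munged)';
    return NEW;
end;
$$;

create trigger blarg_before_ins before insert on blarg
    for each row execute procedure blarg_munge_data();

The question is then whether EXCLUDED.data means 'whatever' or
'whatever (munged)'.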


To me it seems better to have it refer to the version *not* affected by the
trigger (which will be called either way). I think it'd be slightly
easier to understand and more flexible.  But I could live with either
decision.

This isn't a technical problem, we just have to decide what we want to
do.

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Robert Haas
On Fri, May 1, 2015 at 9:58 AM, Andres Freund and...@anarazel.de wrote:
 Right now this, besides cleanup, docs and syntax leaves only one real
 issue I know of. Which is the question what EXCLUDED actually refers to.

 Consider a table
 blarg(key int primary key, data text); with a BEFORE INSERT
 trigger that modifies 'data'. For the statement

 INSERT INTO blarg(key, data) VALUES(1, 'whatever')
 ON CONFLICT (key) DO UPDATE SET data = EXCLUDED.data;

 would you rather have EXCLUDED.data refer to the tuple version from
 VALUES (or a SELECT or ...) or to version from the BEFORE trigger?

I think it would be completely shocking if it didn't refer to the
version returned by the BEFORE trigger.  My interpretation of the
semantics of BEFORE triggers is that, once the trigger has fired and
returned a new tuple, things should proceed just as if that new tuple
were the one originally provided by the user.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Petr Jelinek

On 01/05/15 16:10, Andres Freund wrote:

On 2015-05-01 10:06:42 -0400, Robert Haas wrote:

On Fri, May 1, 2015 at 9:58 AM, Andres Freund and...@anarazel.de wrote:

would you rather have EXCLUDED.data refer to the tuple version from
VALUES (or a SELECT or ...) or to version from the BEFORE trigger?


I think it would be completely shocking if it didn't refer to the
version returned by the BEFORE trigger.  My interpretation of the
semantics of BEFORE triggers is that, once the trigger has fired and
returned a new tuple, things should proceed just as if that new tuple
were the one originally provided by the user.


Well, it's a BEFORE INSERT trigger, not a BEFORE UPDATE, that's why I'm
not so sure that argument applies.

Peter also leaned in your direction and you apparently have a strong
opinion on it... So I guess that'll be it unless somebody else weighs
in.



FWIW I am also leaning towards what Robert says.

--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-05-01 Thread Peter Geoghegan
On Fri, May 1, 2015 at 7:47 AM, Heikki Linnakangas hlinn...@iki.fi wrote:
 Hmm, so it was stuck for half an hour at that point? Why do you think it was
 a livelock?

 This is the same server that I shared credentials with you for. Feel
 free to ssh in and investigate it yourself.


 I logged in, but the system seems very unresponsive in general. I just
 started apt-get install gdb on it, to investigate what the backends are
 stuck at. It's been running for about 30 minutes now, and I'm still
 waiting...

To be honest, I just assumed it was a livelock - maybe I should have
avoided the word. It's possible that there was an issue on the AWS
instance, too. It was disconcerting how each run of the test could
take 4 seconds or 30 seconds, with no particular pattern to it. This
was something I saw consistently.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-04-30 Thread Heikki Linnakangas

On 04/27/2015 11:02 PM, Peter Geoghegan wrote:

On Mon, Apr 27, 2015 at 8:31 PM, Heikki Linnakangas hlinn...@iki.fi wrote:

I thought we had an ironclad scheme to prevent deadlocks like this, so I'd
like to understand why that happens.



Okay. I think I know how it happens (I was always skeptical of the
idea that this would be 100% reliable), but I'll be able to show you
exactly how tomorrow. I'll have pg_xlogdump output then.


I was able to reproduce this, using two sessions, so that one session
does a regular INSERT, and another does INSERT ON CONFLICT, after adding
a sleep(5) in a strategic place. So this was indeed a live bug,
reproducible even without the hack you had to allow ON CONFLICT UPDATE 
with exclusion constraints. Fortunately this is easy to fix.


Here's how to reproduce:

1. Insert sleep(5) into ExecInsertIndexTuples, just after the 
index_insert() call.


2. Create the test table and index:

create extension btree_gist;
create table foo (id int4, constraint foo_x exclude using gist (id with 
=) );


3. Launch two psql sessions, A and B. Do the following:

A: set deadlock_timeout='10s';
B: set deadlock_timeout='20s';
A: begin; select txid_current();
B: begin; select txid_current();

A: insert into foo values (1) on conflict do nothing;
(the insert will hit the sleep(5) - quickly perform the second insert in B:)

B: insert into foo values (1);

At this point, both transactions have already inserted the tuple to the 
heap. A has done so speculatively, but B has done a regular insert. B 
will find A's tuple and wait until A's speculative insertion completes. 
A will find B's tuple, and wait until B completes, and you get the 
deadlock. Thanks to the way the deadlock_timeout's are set, A will 
detect the deadlock first and abort. That's not cool with ON CONFLICT 
IGNORE.


To fix that, we need to fix the livelock insurance check so that A 
does not wait for B here. Because B is not a speculative insertion, A 
should cancel its speculative insertion and retry instead. (I pushed the 
one-line fix for that to your github repository)


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0, parser/executor stuff

2015-04-28 Thread Peter Geoghegan
On Tue, Apr 28, 2015 at 3:38 AM, Andres Freund and...@anarazel.de wrote:
 The more I look at the approach taken in the executor, the less I like it.
 I think the fundamental structural problem is that you've chosen to
 represent the ON CONFLICT UPDATE part as a fully separate plan tree node,
 planned nearly like a normal UPDATE. But it really isn't. That's what
 then requires you to coordinate knowledge around with p_is_speculative,
 have these auxiliary trees, have update nodes without a relation, and
 other similar things.

The update node without a relation thing is misleading. There is an
UpdateStmt parse node that briefly lacks a RangeVar, but it takes its
target RTE from the parent without bothering with a RangeVar. From
there on in, it has an RTE (shared with the parent), just as the
executor has the two share their ResultRelation.

It is a separate node - it's displayed in EXPLAIN output as a separate
node. It's not the first type of node to have to supply its own
instrumentation because of the way it's executed. I don't know how you
can say it isn't a separate plan node when it is displayed as such in
EXPLAIN, and is separately instrumented as one.

 My feeling is that this will look much less ugly if there's simply no
 UpdateStmt built. I mean, all we need is the target list and the where
 clause.

Yes, that's all that is needed - most of the structure of a
conventional UPDATE! There isn't much that you can do with a regular
UPDATE that you can't do here. Where are you going to cut?

 As far as I can see so far that'll get rid of a lot of
 structural ugliness. There'll still be some more localized stuff, but I
 don't think it'll be that bad.

You're going to need a new targetlist just for this case. So you've
just added a new field to save one within Query, etc.

 I'm actually even wondering if it'd not better off *not* to reuse
 ExecUpdate(), but that's essentially a separate concern.

I think that makes no sense. ExecUpdate() has to do many things that
are applicable to both. The *only* thing that it does that's
superfluous for ON CONFLICT DO UPDATE is walking the update chain.
That is literally the only thing.

I think that you're uncomfortable with the way that the structure is
different to anything that exists at the moment, which is
understandable. But this is UPSERT - why would the representation of
what is more or less a hybrid DML query type not have a novel
representation? What I've done works with most existing abstractions
without too much extra code. The code reuse that this approach allows
seems like a major advantage.
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-04-28 Thread Peter Geoghegan
On Mon, Apr 27, 2015 at 6:43 AM, Andres Freund and...@anarazel.de wrote:
 Could you please add the tests for the logical decoding code you added?
 I presume you have some already?

Most of the tests I used for logical decoding were stress tests (i.e.
they prominently involved my favorite tool, jjanes_upsert). There is a
regression test added, but it's not especially sophisticated, which
may be a problem.


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-04-28 Thread Peter Geoghegan
On Mon, Apr 27, 2015 at 8:31 PM, Heikki Linnakangas hlinn...@iki.fi wrote:
 I thought we had an ironclad scheme to prevent deadlocks like this, so I'd
 like to understand why that happens.


Okay. I think I know how it happens (I was always skeptical of the
idea that this would be 100% reliable), but I'll be able to show you
exactly how tomorrow. I'll have pg_xlogdump output then.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0, parser/executor stuff

2015-04-28 Thread Andres Freund
On 2015-04-27 23:52:58 +0200, Andres Freund wrote:
 On 2015-04-27 16:28:49 +0200, Andres Freund wrote:
  On 2015-04-26 18:02:06 -0700, Peter Geoghegan wrote:
   * So far, there has been a lack of scrutiny about what the patch does
   in the rewriter (in particular, to support the EXCLUDED.* pseudo-alias
   expression) and optimizer (the whole concept of an auxiliary
   query/plan that share a target RTE, and later target ResultRelation).
   If someone took a close look at that, it would be most helpful.
   ruleutils.c is also modified for the benefit of EXPLAIN output. This
   all applies only to the ON CONFLICT UPDATE patch. A committer could
   push out the IGNORE patch before this was 100% firm.
 
  I'm far from an expert on the relevant regions; but I'm starting to look
  nonetheless. I have to say that on a preliminary look it looks more
  complicated than it has to.
 
 So, I'm looking. And I've a few questions:
 * Why do we need to spread knowledge about speculative inserts that wide?
   It's now in 1) Query, 2) ParseState 3) ModifyTable 4) InsertStmt. That
   seems a bit wide - and as far as I see not really necessary. That e.g.
   transformUpdateStmt() has if (!pstate->p_is_speculative) blocks seems
   wrong.
 
 * afaics 'auxiliary query' isn't something we have under that
   name. This isn't really different to what wCTEs do, so I don't think
   we need this term here.
 
 * You refer to 'join like syntax' in a couple places. I don't really see
   the current syntax being that join like? Is that just a different
   perception of terminology or is that just remnants of earlier
   approaches?
 
 * I am rather unconvinced we need the whole 'ExcludedVar' mechanism. I
   don't really see why we need it at all? Can't we 'just' make those
   plain vars referring to the normal targetlist entry? I feel like I'm
   missing something here.
 
 * The whole dealing with the update clause doesn't seem super
   clean. I'm not sure yet how to do it nicer though :(

The more I look at the approach taken in the executor, the less I like it.
I think the fundamental structural problem is that you've chosen to
represent the ON CONFLICT UPDATE part as a fully separate plan tree node,
planned nearly like a normal UPDATE. But it really isn't. That's what
then requires you to coordinate knowledge around with p_is_speculative,
have these auxiliary trees, have update nodes without a relation, and
other similar things.

My feeling is that this will look much less ugly if there's simply no
UpdateStmt built. I mean, all we need is the target list and the where
clause. As far as I can see so far that'll get rid of a lot of
structural ugliness. There'll still be some more localized stuff, but I
don't think it'll be that bad.

I'm actually even wondering if it'd not better off *not* to reuse
ExecUpdate(), but that's essentially a separate concern.

I'll try to rejigger things roughly in that directions. If you haven't
heard of me in three days, call the police.

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0, parser/executor stuff

2015-04-27 Thread Andres Freund
On 2015-04-27 16:28:49 +0200, Andres Freund wrote:
 On 2015-04-26 18:02:06 -0700, Peter Geoghegan wrote:
  * So far, there has been a lack of scrutiny about what the patch does
  in the rewriter (in particular, to support the EXCLUDED.* pseudo-alias
  expression) and optimizer (the whole concept of an auxiliary
  query/plan that share a target RTE, and later target ResultRelation).
  If someone took a close look at that, it would be most helpful.
  ruleutils.c is also modified for the benefit of EXPLAIN output. This
  all applies only to the ON CONFLICT UPDATE patch. A committer could
  push out the IGNORE patch before this was 100% firm.

 I'm far from an expert on the relevant regions; but I'm starting to look
 nonetheless. I have to say that on a preliminary look it looks more
 complicated than it has to.

So, I'm looking. And I've a few questions:
* Why do we need to spread knowledge about speculative inserts that wide?
  It's now in 1) Query, 2) ParseState 3) ModifyTable 4) InsertStmt. That
  seems a bit wide - and as far as I see not really necessary. That e.g.
  transformUpdateStmt() has if (!pstate->p_is_speculative) blocks seems
  wrong.

* afaics 'auxiliary query' isn't something we have under that
  name. This isn't really different to what wCTEs do, so I don't think
  we need this term here.

* You refer to 'join like syntax' in a couple places. I don't really see
  the current syntax being that join like? Is that just a different
  perception of terminology or is that just remnants of earlier
  approaches?

* I am rather unconvinced we need the whole 'ExcludedVar' mechanism. I
  don't really see why we need it at all? Can't we 'just' make those
  plain vars referring to the normal targetlist entry? I feel like I'm
  missing something here.

* The whole dealing with the update clause doesn't seem super
  clean. I'm not sure yet how to do it nicer though :(

Greetings,

Andres Freund

--
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-04-27 Thread Andres Freund
On 2015-04-26 18:02:06 -0700, Peter Geoghegan wrote:
 It's make-or-break time for this patch. Please help me get it over the
 line in time.

Could you please add the tests for the logical decoding code you added?
I presume you have some already?

 Heikki is in Northern California this week, and I think
 we'll have time to talk about the patch

I'll be there for a while as well. Starting the week after, arriving on
the 5th. ;)

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0, parser/executor stuff

2015-04-27 Thread Andres Freund
On 2015-04-26 18:02:06 -0700, Peter Geoghegan wrote:
 * So far, there has been a lack of scrutiny about what the patch does
 in the rewriter (in particular, to support the EXCLUDED.* pseudo-alias
 expression) and optimizer (the whole concept of an auxiliary
 query/plan that share a target RTE, and later target ResultRelation).
 If someone took a close look at that, it would be most helpful.
 ruleutils.c is also modified for the benefit of EXPLAIN output. This
 all applies only to the ON CONFLICT UPDATE patch. A committer could
 push out the IGNORE patch before this was 100% firm.

I'm far from an expert on the relevant regions; but I'm starting to look
nonetheless. I have to say that on a preliminary look it looks more
complicated than it has to be.

Tom, any chance you could have a look at the parse-analysis-executor
transformations?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-04-27 Thread Peter Geoghegan
On Sun, Apr 26, 2015 at 6:02 PM, Peter Geoghegan p...@heroku.com wrote:
 * I privately pointed out to Heikki what I'd said publicly about 6
 weeks ago: that there is still a *very* small chance of exclusion
 constraints exhibiting unprincipled deadlocks (he missed it at the
 time). I think that this risk is likely to be acceptable, since it
 takes so much to see it happen (and ON CONFLICT UPDATE/nbtree is
 unaffected). But let's better characterize the risks, particularly in
 light of the changes to store speculative tokens in the t_ctid field
 on newly inserted (speculative) tuples. I think that that probably
 made the problem significantly less severe, and perhaps it's now
 entirely theoretical, but I want to make sure. I'm going to try and
 characterize the risks with the patch here today.

So, this can still happen, but is now happening less often than
before, I believe. On a 16 core server, with continual 128 client
jjanes_upsert exclusion constraint only runs, with fsync=off, I
started at this time:

2015-04-27 21:22:28 UTC [ 0 ]: LOG:  database system was shut down at
2015-04-27 21:22:25 UTC
2015-04-27 21:22:28 UTC [ 0 ]: LOG:  database system is ready to
accept connections
2015-04-27 22:47:20 UTC [ 0 ]: LOG:  autovacuum launcher started
2015-04-27 22:47:21 UTC [ 0 ]: LOG:  autovacuum launcher started

Finally, with ON CONFLICT UPDATE (which we don't intend to support
with exclusion constraints anyway), the torture testing finally
produces a deadlock several hours later (due to having livelock
insurance [1]):

2015-04-28 00:22:06 UTC [ 0 ]: LOG:  autovacuum launcher started
2015-04-28 00:37:24 UTC [ 432432057 ]: ERROR:  deadlock detected
2015-04-28 00:37:24 UTC [ 432432057 ]: DETAIL:  Process 130628 waits
for ShareLock on transaction 432432127; blocked by process 130589.
Process 130589 waits for ShareLock on speculative token 13 of
transaction 432432057; blocked by process 130628.
Process 130628: insert into upsert_race_test (index, count)
values ('7566','-1') on conflict
  update set count=TARGET.count + EXCLUDED.count
  where TARGET.index = EXCLUDED.index
  returning count
Process 130589: insert into upsert_race_test (index, count)
values ('7566','1') on conflict
  update set count=TARGET.count + EXCLUDED.count
  where TARGET.index = EXCLUDED.index
  returning count
2015-04-28 00:37:24 UTC [ 432432057 ]: HINT:  See server log for query details.
2015-04-28 00:37:24 UTC [ 432432057 ]: CONTEXT:  while checking
exclusion constraint on tuple (3,36) in relation upsert_race_test
2015-04-28 00:37:24 UTC [ 432432057 ]: STATEMENT:  insert into
upsert_race_test (index, count) values ('7566','-1') on conflict
  update set count=TARGET.count + EXCLUDED.count
  where TARGET.index = EXCLUDED.index
  returning count

ON CONFLICT UPDATE will only ever use unique indexes, and so is not affected.

Given that exclusion constraints can only be used with IGNORE, and
given that this is so hard to recreate, I'm inclined to conclude that
it's acceptable. It's certainly way better than risking livelocks by
not having deadlock insurance. This is a ridiculously CPU-bound
workload, with extreme and constant contention. I'd be surprised if
there were any real complaints from the field in practice.

Do you think that this is acceptable, Heikki?

[1] 
https://github.com/petergeoghegan/postgres/commit/c842c798e4a9e31dce06b4836b2bdcbafe1155d6#diff-51288d1b75a37ac3b32717ec50b66c23R87
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-04-27 Thread Heikki Linnakangas

On 04/27/2015 07:02 PM, Peter Geoghegan wrote:

So, this can still happen, but is now happening less often than
before, I believe. On a 16 core server, with continual 128 client
jjanes_upsert exclusion constraint only runs, with fsync=off, I
started at this time:

2015-04-27 21:22:28 UTC [ 0 ]: LOG:  database system was shut down at
2015-04-27 21:22:25 UTC
2015-04-27 21:22:28 UTC [ 0 ]: LOG:  database system is ready to
accept connections
2015-04-27 22:47:20 UTC [ 0 ]: LOG:  autovacuum launcher started
2015-04-27 22:47:21 UTC [ 0 ]: LOG:  autovacuum launcher started

Finally, with ON CONFLICT UPDATE (which we don't intend to support
with exclusion constraints anyway), the torture testing finally
produces a deadlock several hours later (due to having livelock
insurance [1]):

2015-04-28 00:22:06 UTC [ 0 ]: LOG:  autovacuum launcher started
2015-04-28 00:37:24 UTC [ 432432057 ]: ERROR:  deadlock detected
2015-04-28 00:37:24 UTC [ 432432057 ]: DETAIL:  Process 130628 waits
for ShareLock on transaction 432432127; blocked by process 130589.
 Process 130589 waits for ShareLock on speculative token 13 of
transaction 432432057; blocked by process 130628.
 Process 130628: insert into upsert_race_test (index, count)
values ('7566','-1') on conflict
   update set count=TARGET.count + EXCLUDED.count
   where TARGET.index = EXCLUDED.index
   returning count
 Process 130589: insert into upsert_race_test (index, count)
values ('7566','1') on conflict
   update set count=TARGET.count + EXCLUDED.count
   where TARGET.index = EXCLUDED.index
   returning count
2015-04-28 00:37:24 UTC [ 432432057 ]: HINT:  See server log for query details.
2015-04-28 00:37:24 UTC [ 432432057 ]: CONTEXT:  while checking
exclusion constraint on tuple (3,36) in relation upsert_race_test
2015-04-28 00:37:24 UTC [ 432432057 ]: STATEMENT:  insert into
upsert_race_test (index, count) values ('7566','-1') on conflict
   update set count=TARGET.count + EXCLUDED.count
   where TARGET.index = EXCLUDED.index
   returning count

ON CONFLICT UPDATE will only ever use unique indexes, and so is not affected.

Given that exclusion constraints can only be used with IGNORE, and
given that this is so hard to recreate, I'm inclined to conclude that
it's acceptable. It's certainly way better than risking livelocks by
not having deadlock insurance. This is a ridiculously CPU-bound
workload, with extreme and constant contention. I'd be surprised if
there were any real complaints from the field in practice.

Do you think that this is acceptable, Heikki?


I thought we had an ironclad scheme to prevent deadlocks like this, so 
I'd like to understand why that happens.


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-04-27 Thread Peter Geoghegan
On Mon, Apr 27, 2015 at 7:02 PM, Peter Geoghegan p...@heroku.com wrote:
 Given that exclusion constraints can only be used with IGNORE, and
 given that this is so hard to recreate, I'm inclined to conclude that
 it's acceptable. It's certainly way better than risking livelocks by
 not having deadlock insurance.

Uh, I mean livelock insurance here, of course.


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0, parser/executor stuff

2015-04-27 Thread Peter Geoghegan
On Mon, Apr 27, 2015 at 2:52 PM, Andres Freund and...@anarazel.de wrote:
 So, I'm looking. And I've a few questions:
 * Why do we need to spread knowledge about speculative inserts that wide?
   It's now in 1) Query, 2) ParseState 3) ModifyTable 4) InsertStmt. That
   seems a bit wide - and as far as I see not really necessary. That e.g.
   transformUpdateStmt() has if (!pstate->p_is_speculative) blocks seems
   wrong.

Maybe it shouldn't be spelt p_is_speculative, because speculative
insertion is a low-level mechanism. Maybe it should be something like
p_is_auxiliary...but you don't like that word either.  :-)

If setTargetTable() took the RangeVar NULL-ness as a proxy for a child
auxiliary query, and if free_parsestate() took a flag to indicate that
it shouldn't perform a heap_close() when an auxiliary statement's
parent must do it instead, then it wouldn't be necessary to add a
p_is_speculative field. But that seems worse.

 * afaics 'auxiliary query' isn't something we have under that
   name. This isn't really different to what wCTEs do, so I don't think
   we need this term here.

Well, I guess auxiliary is just a descriptive word, as opposed to one
that describes a technical mechanism peculiar to Postgres/this patch.
I don't feel that it's important either way.

 * You refer to 'join like syntax' in a couple places. I don't really see
   the current syntax being that join like? Is that just a different
   perception of terminology or is that just remnants of earlier
   approaches?

No. It is join-like. It looks the same as and behaves similarly to
UPDATE ... FROM ... . There is one difference, though -- excluded.*
is not visible from RETURNING (because of the ambiguity of doing it
the UPDATE way, where, as with a self-join, both tuples have identical
column names, and because the INSERT RETURNING behavior should be
dominant). That's all I mean by join-like. People thought that
looked nicer than a straight expression that operates on a Var (which
FWIW is kind of what MySQL does), so that's the way it works.
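
To illustrate the practical consequence, a sketch reusing the blarg table
from upthread (spelled with the ON CONFLICT (key) DO UPDATE form used
elsewhere in this thread):

insert into blarg(key, data) values (1, 'whatever')
on conflict (key) do update
    set data = EXCLUDED.data   -- EXCLUDED.* is usable here (and in the WHERE clause)
returning key, data;           -- RETURNING sees only blarg's own columns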

 * I am rather unconvinced we need the whole 'ExcludedVar' mechanism. I
   don't really see why we need it at all? Can't we 'just' make those
   plain vars referring to the normal targetlist entry? I feel like I'm
   missing something here.

If you allow multiple RTEs by not having the rewriter swap out
EXCLUDED vars for ExcludedExpr(), what you end up with is an UPDATE
subquery with a join (and, if you change nothing else in the patch, a
segfault). Fixing that segfault is probably going to require more
complexity in the optimizer, and certainly will require more in the
executor. Imagine teaching ModifyTable nodes about rescan so that a
row lock conflict can be handled from its parent (if we had a real join).
Alternatively, maybe you could have the EPQ stuff install a tuple and
then execute a single EvalPlanQualNext() against a scantuple (which
you'd also have to install using EPQ). You can install multiple
tuples, which is used for SELECT FOR UPDATE's EPQ stuff. But that
seems very significantly more complicated than what I actually did.

Think of ExcludedExpr as a way of pretending that an expression on the
target's Vars is a Var referencing a distinct RTE, simply because
people think that looks nicer. The EXCLUDED.* tuple legitimately
originates form the same tuple context as the target tuple, which the
structure of the code reflects.

 * The whole dealing with the update clause doesn't seem super
   clean. I'm not sure yet how to do it nicer though :(

At least it isn't all that much code. I think that the main thing that
this design has to recommend it is code re-use. The patch actually
isn't all that big in terms of lines of code.
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-04-26 Thread Peter Geoghegan
I have pushed my patch, newly rebased, to a new branch on my personal
Github account (branch: insert_conflict_4):

https://github.com/petergeoghegan/postgres/commits/insert_conflict_4

I'm not going to attach a patch here at all. Andres and Heikki should
now push their changes to that branch (or alternatively, Andres can
ask me to merge his stuff into it).

It's make-or-break time for this patch. Please help me get it over the
line in time. Heikki is in Northern California this week, and I think
we'll have time to talk about the patch, which I expect will help (an
in-person chat with Andres in NYC certainly helped *a lot*). But that
also means that he's going to be travelling a long distance, and we
can assume he will have reduced availability for the next couple of days.

Notable changes
=

* Work of Heikki, myself and Andres from the last week or so rebased
to be cumulative (as before, ON CONFLICT IGNORE -> RTE changes -> ON
CONFLICT UPDATE). Would apply cleanly to today's git master branch.

* Improved INSERT documentation [1].

* Minor style tweaks to RTE change commit.

* Improved commit messages. Importantly, these have attribution that I
think fairly reflects everyone's individual contribution. Please let
me know if I missed something.

* Most importantly, RLS changes.

The RLS patch is now significantly simpler than before. In general,
I'm very happy with how the new approach to RLS enforcement and the
new WCO kind field have resulted in a far simpler RLS patch. This
needs the scrutiny of a subject matter expert like Stephen or Dean. I
would be happy to grant either git access to my personal branch, to
push out code changes or tests as they see fit. Just e-mail me
privately with the relevant details.

Remaining challenges
=

* As discussed in a dedicated thread, we're probably going to have to
tweak the syntax a bit. No need to hold what I have here up for that,
though (provisionally, this version still puts the inference WHERE clause,
used to infer partial indexes, in parens). Let's confine the discussion of
that to its dedicated thread. These issues probably apply equally to
the IGNORE variant, and so can be considered a blocker to its commit
(and not just ON CONFLICT UPDATE).

* So far, there has been a lack of scrutiny about what the patch does
in the rewriter (in particular, to support the EXCLUDED.* pseudo-alias
expression) and optimizer (the whole concept of an auxiliary
query/plan that share a target RTE, and later target ResultRelation).
If someone took a close look at that, it would be most helpful.
ruleutils.c is also modified for the benefit of EXPLAIN output. This
all applies only to the ON CONFLICT UPDATE patch. A committer could
push out the IGNORE patch before this was 100% firm.

* I privately pointed out to Heikki what I'd said publicly about 6
weeks ago: that there is still a *very* small chance of exclusion
constraints exhibiting unprincipled deadlocks (he missed it at the
time). I think that this risk is likely to be acceptable, since it
takes so much to see it happen (and ON CONFLICT UPDATE/nbtree is
unaffected). But let's better characterize the risks, particularly in
light of the changes to store speculative tokens in the t_ctid field
on newly inserted (speculative) tuples. I think that that probably
made the problem significantly less severe, and perhaps it's now
entirely theoretical, but I want to make sure. I'm going to try and
characterize the risks with the patch here today.

Thanks

[1] 
http://postgres-benchmarks.s3-website-us-east-1.amazonaws.com/on-conflict-docs/sql-insert.html
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-04-26 Thread Peter Geoghegan
On Sun, Apr 26, 2015 at 6:02 PM, Peter Geoghegan p...@heroku.com wrote:
 Remaining challenges
 =

I may have forgotten one: Andres asked me to make logical decoding
discriminate against speculative confirmation records/changes, as
opposed to merely looking for the absence of a super-deletion. We'll
need to firm up on that, but it doesn't seem like a big deal.


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-03-03 Thread Peter Geoghegan
On Tue, Mar 3, 2015 at 12:05 AM, Heikki Linnakangas hlinn...@iki.fi wrote:
 My experimental branch works just fine (with a variant jjanes_upsert
 with subxact looping), until I need to restart an update after a
 failed heap_update() that still returned HeapTupleMayBeUpdated
 (having super deleted within an ExecUpdate() call). There is no good
 way to do that for ExecUpdate() that I can see, because an existing,
 visible row is affected (unlike with ExecInsert()). Even if it was
 possible, it would be hugely invasive to already very complicated code
 paths.

 Ah, so we can't easily use super-deletion to back out an UPDATE. I had not
 considered that.

Yeah. When I got into considering making EvalPlanQualFetch() look at
speculative tokens, it became abundantly clear that that code would
never be committed, even if I could make it work.

 I continue to believe that the best way forward is to incrementally
 commit the work by committing ON CONFLICT IGNORE first. That way,
 speculative tokens will remain strictly the concern of UPSERTers or
 sessions doing INSERT ... ON CONFLICT IGNORE.


 Ok, let's try that. Can you cut a patch that does just ON CONFLICT IGNORE,
 please?

Of course. I'll have that for you shortly.

Thanks
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-03-03 Thread Heikki Linnakangas

On 03/02/2015 11:21 PM, Peter Geoghegan wrote:

On Mon, Mar 2, 2015 at 12:15 PM, Heikki Linnakangas hlinn...@iki.fi wrote:

Hmm. I used a b-tree to estimate the effect that the locking would have in
the UPSERT case, for UPSERT into a table with a b-tree index. But you're
right that for the question of whether this is acceptable for the case of
regular insert into a table with exclusion constraints, other indexams are
more interesting. In a very quick test, the overhead with a single GiST
index seems to be about 5%. IMHO that's still a noticeable slowdown, so
let's try to find a way to eliminate it.


Honestly, I don't know why you're so optimistic that this can be
fixed, even with that heavyweight lock (for regular inserters +
exclusion constraints).


We already concluded that it can be fixed, with the heavyweight lock. 
See http://www.postgresql.org/message-id/54f4a0e0.4070...@iki.fi. Do you 
see some new problem with that that hasn't been discussed yet? To 
eliminate the heavy-weight lock, we'll need some new ideas, but it 
doesn't seem that hard.



My experimental branch, which I showed you privately shows big
problems when I simulate upserting with exclusion constraints (so we
insert first, handle exclusion violations using the traditional upsert
subxact pattern that does work with B-Trees). It's much harder to back
out of a heap_update() than it is to back out of a heap_insert(),
since we might well be updating a tuple visible to some other MVCC
snapshot. Whereas we can super delete a tuple knowing that only a
dirty snapshot might have seen it, which bounds the complexity (only
wait sites for B-Trees + exclusion constraints need to worry about
speculative insertion tokens and so on).

My experimental branch works just fine (with a variant jjanes_upsert
with subxact looping), until I need to restart an update after a
failed heap_update() that still returned HeapTupleMayBeUpdated
(having super deleted within an ExecUpdate() call). There is no good
way to do that for ExecUpdate() that I can see, because an existing,
visible row is affected (unlike with ExecInsert()). Even if it was
possible, it would be hugely invasive to already very complicated code
paths.


Ah, so we can't easily use super-deletion to back out an UPDATE. I had 
not considered that.



I continue to believe that the best way forward is to incrementally
commit the work by committing ON CONFLICT IGNORE first. That way,
speculative tokens will remain strictly the concern of UPSERTers or
sessions doing INSERT ... ON CONFLICT IGNORE.


Ok, let's try that. Can you cut a patch that does just ON CONFLICT 
IGNORE, please?


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-03-02 Thread Peter Geoghegan
On Mon, Mar 2, 2015 at 12:15 PM, Heikki Linnakangas hlinn...@iki.fi wrote:
 Hmm. I used a b-tree to estimate the effect that the locking would have in
 the UPSERT case, for UPSERT into a table with a b-tree index. But you're
 right that for the question of whether this is acceptable for the case of
 regular insert into a table with exclusion constraints, other indexams are
 more interesting. In a very quick test, the overhead with a single GiST
 index seems to be about 5%. IMHO that's still a noticeable slowdown, so
 let's try to find a way to eliminate it.

Honestly, I don't know why you're so optimistic that this can be
fixed, even with that heavyweight lock (for regular inserters +
exclusion constraints).

My experimental branch, which I showed you privately shows big
problems when I simulate upserting with exclusion constraints (so we
insert first, handle exclusion violations using the traditional upsert
subxact pattern that does work with B-Trees). It's much harder to back
out of a heap_update() than it is to back out of a heap_insert(),
since we might well be updating a tuple visible to some other MVCC
snapshot. Whereas we can super delete a tuple knowing that only a
dirty snapshot might have seen it, which bounds the complexity (only
wait sites for B-Trees + exclusion constraints need to worry about
speculative insertion tokens and so on).

My experimental branch works just fine (with a variant jjanes_upsert
with subxact looping), until I need to restart an update after a
failed heap_update() that still returned HeapTupleMayBeUpdated
(having super deleted within an ExecUpdate() call). There is no good
way to do that for ExecUpdate() that I can see, because an existing,
visible row is affected (unlike with ExecInsert()). Even if it was
possible, it would be hugely invasive to already very complicated code
paths.

I continue to believe that the best way forward is to incrementally
commit the work by committing ON CONFLICT IGNORE first. That way,
speculative tokens will remain strictly the concern of UPSERTers or
sessions doing INSERT ... ON CONFLICT IGNORE. Unless, that is, you're happy to
have UPDATEs still deadlock, and only fix unprincipled deadlocks for
the INSERT case. I don't know why you want to make the patch
incremental by fixing unprincipled deadlocks in the first place,
since you've said that you don't really care about it as a real world
problem, and because it now appears to have significant additional
difficulties that were not anticipated.

Please, let's focus on getting ON CONFLICT IGNORE into a commitable
state - that's the best way of incrementally committing this work.
Fixing unprincipled deadlocks is not a useful way of incrementally
committing this work. I've spent several days producing a prototype
that shows the exact nature of the problem. If you think I'm mistaken,
and that fixing unprincipled deadlocks first is the right thing to do,
please explain why with reference to that prototype. AFAICT, doing
things your way is going to add significant additional complexity for
*no appreciable benefit*. You've already gotten exactly what you were
looking for in every other regard. In particular, ON CONFLICT UPDATE
could work with exclusion constraints without any of these problems,
because of the way we do the auxiliary update there (we lock the row
ahead of updating/qual evaluation). I've bent over backwards to make
sure that is the case. Please, throw me a bone here.

Thank you
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-03-02 Thread Heikki Linnakangas

On 02/21/2015 10:41 PM, Peter Geoghegan wrote:

On Sat, Feb 21, 2015 at 11:15 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:

What I had in mind is that the winning inserter waits on the other
inserter's token, without super-deleting. Like all inserts do today. So the
above scenario becomes:

* Session 1 physically inserts, and then checks for a conflict.

* Session 2 physically inserts, and then checks for a conflict.

* Session 1 sees session 2's conflicting TID. Session 1's XID is older, so
it wins. It waits for session 2's token, without super-deleting.

* Session 2 sees session 1's conflicting TID. It super deletes,
releases token, and sleeps on session 1's token.

* Session 1 wakes up. It looks at session 2's tuple again and sees that it
was super-deleted. There are no further conflicts, so the insertion is
complete, and it releases the token.

* Session 2 wakes up. It looks at session 1's tuple again and sees that it's
still there. It goes back to sleep, this time on session 1's XID.

* Session 1 commits. Session 2 wakes up, sees that the tuple is still there,
and throws a constraint violation error.


I think we're actually 100% in agreement, then. I just prefer to have
the second last step (the check without a promise tuple visible to
anyone made by the loser) occur as part of the pre-check that
happens anyway with ON CONFLICT IGNORE. Otherwise, you'll end up with
some much more complicated control flow that has to care about not
doing that twice for ON CONFLICT IGNORE...and for the benefit of what?
For regular inserters, that we don't actually care about fixing
unprincipled deadlocks for?


Right. That will allow me to review and test the locking part of the 
patch separately.


- Heikki





Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-03-02 Thread Heikki Linnakangas

On 02/17/2015 02:11 AM, Peter Geoghegan wrote:



Whatever works, really. I can't say that the performance implications
of acquiring that hwlock are at the forefront of my mind. I never
found that to be a big problem on an 8 core box, relative to vanilla
INSERTs, FWIW - lock contention is not normal, and may be where any
heavyweight lock costs would really be encountered.


Oh, cool. I guess the fast-path in lmgr.c kicks ass, then :-).

Seems that way. But even if that wasn't true, it wouldn't matter,
since I don't see that we have a choice.


I did some quick performance testing on this. For easy testing, I used a 
checkout of git master, and simply added LockAcquire + LockRelease calls 
to ExecInsert, around the heap_insert() call. The test case I used was:


psql -c "create table footest (id serial primary key);"

echo "insert into footest select from generate_series(1, 1);" > inserts.sql

pgbench -n -f inserts.sql postgres -T100 -c4

With the extra lock calls, I got 56 tps on my laptop. With unpatched git 
master, I got 60 tps. I also looked at the profile with perf, and 
indeed about 10% of the CPU time was spent in LockAcquire and 
LockRelease together.


So the extra locking incurs about 10% overhead. I think this was pretty 
much a worst case scenario, but not a hugely unrealistic one - many 
real-world tables have only a few columns, and few indexes. With more 
CPUs you would probably start to see contention, in addition to just the 
extra overhead.


Are we OK with a 10% overhead, caused by the locking? That's probably 
acceptable if that's what it takes to get UPSERT. But it's not OK just 
to solve the deadlock issue with regular insertions into a table with 
exclusion constraints. Can we find a scheme to eliminate that overhead?


- Heikki




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-03-02 Thread Peter Geoghegan
On Mon, Mar 2, 2015 at 11:20 AM, Heikki Linnakangas hlinn...@iki.fi wrote:
 Are we OK with a 10% overhead, caused by the locking? That's probably
 acceptable if that's what it takes to get UPSERT. But it's not OK just to
 solve the deadlock issue with regular insertions into a table with exclusion
 constraints. Can we find a scheme to eliminate that overhead?

Looks like you tested a B-Tree index here. That doesn't seem
particularly representative of what you'd see with exclusion
constraints.

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-03-02 Thread Heikki Linnakangas

On 03/02/2015 09:29 PM, Peter Geoghegan wrote:

On Mon, Mar 2, 2015 at 11:20 AM, Heikki Linnakangas hlinn...@iki.fi wrote:

Are we OK with a 10% overhead, caused by the locking? That's probably
acceptable if that's what it takes to get UPSERT. But it's not OK just to
solve the deadlock issue with regular insertions into a table with exclusion
constraints. Can we find a scheme to eliminate that overhead?


Looks like you tested a B-Tree index here. That doesn't seem
particularly representative of what you'd see with exclusion
constraints.


Hmm. I used a b-tree to estimate the effect that the locking would have 
in the UPSERT case, for UPSERT into a table with a b-tree index. But 
you're right that for the question of whether this is acceptable for the 
case of regular insert into a table with exclusion constraints, other 
indexams are more interesting. In a very quick test, the overhead with a 
single GiST index seems to be about 5%. IMHO that's still a noticeable 
slowdown, so let's try to find a way to eliminate it.
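
(For concreteness, a representative setup for such a test, with purely
illustrative table and index names, driven from pgbench in the same way
as the b-tree case above:)

CREATE EXTENSION IF NOT EXISTS btree_gist;   -- integer opclasses for GiST

-- either a plain GiST index ...
CREATE TABLE footest_gist (id int);
CREATE INDEX ON footest_gist USING gist (id);

-- ... or the case actually at issue: an exclusion constraint enforced
-- through a GiST index
CREATE TABLE footest_excl (id int, EXCLUDE USING gist (id WITH =));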


- Heikki





Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-21 Thread Heikki Linnakangas

On 02/21/2015 12:15 AM, Peter Geoghegan wrote:

On Fri, Feb 20, 2015 at 1:07 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:

Then I refuse to believe that the livelock hazard exists, without the
pre-check. If you have a livelock scenario in mind, it really shouldn't be
that difficult to write down the list of steps.


I just meant practical, recreatable steps - a test case. I should
emphasize that what I'm saying is not that important. Even if I am
wrong, I'm not suggesting that we do anything that we don't both agree
is needed anyway. If I'm right, that is merely an impediment to
incrementally committing the work by fixing exclusion constraints,
AFAICT. Ultimately, that isn't all that important. Anyway, here is how
I think livelocks could happen, in theory, with regular insertion into
a table with exclusion constraints, with your patch [1] applied (which
has no pre-check):

* Session 1 physically inserts, and then checks for a conflict.

* Session 2 physically inserts, and then checks for a conflict.

* Session 1 sees session 2's conflicting TID, then super deletes and
releases token.

* Session 2 sees session 1's conflicting TID, then super deletes and
releases token.

* Session 1 waits or tries to wait on session 2's token. It isn't held
anymore, or is only held for an instant.

* Session 2 waits or tries to wait on session 1's token. It isn't held
anymore, or is only held for an instant.

* Session 1 restarts from scratch, having made no useful progress in
respect of the slot being inserted.

* Session 2 restarts from scratch, having made no useful progress in
respect of the slot being inserted.

(Livelock)

If there is a tie-break on XID (you omitted this from your patch [1]
but acknowledge it as an omission), then that doesn't really change
anything (without adding a pre-check, too). That's because: What do we
actually do or not do when we're the oldest XID, that gets to win?


Ah, ok, I can see the confusion now.


Do we not wait on anything, and just declare that we're done? Then I
think that breaks exclusion constraint enforcement, because we need to
rescan the index to do that (i.e., goto retry). Do we wait on their
token, as my most recent revision does, but *without* a pre-check, for
regular inserters? Then I think that our old tuple could keep multiple
other sessions spinning indefinitely.


What I had in mind is that the winning inserter waits on the other 
inserter's token, without super-deleting. Like all inserts do today. So 
the above scenario becomes:


* Session 1 physically inserts, and then checks for a conflict.

* Session 2 physically inserts, and then checks for a conflict.

* Session 1 sees session 2's conflicting TID. Session 1's XID is older, 
so it wins. It waits for session 2's token, without super-deleting.


* Session 2 sees session 1's conflicting TID. It super deletes,
releases token, and sleeps on session 1's token.

* Session 1 wakes up. It looks at session 2's tuple again and sees that 
it was super-deleted. There are no further conflicts, so the insertion 
is complete, and it releases the token.


* Session 2 wakes up. It looks at session 1's tuple again and sees that 
it's still there. It goes back to sleep, this time on session 1's XID.


* Session 1 commits. Session 2 wakes up, sees that the tuple is still 
there, and throws a constraint violation error.
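
(For concreteness, the same interleaving as seen from SQL, on a
hypothetical exclusion-constrained table, with both INSERTs issued
concurrently from two sessions:)

CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE TABLE excl_race (k int, EXCLUDE USING gist (k WITH =));

-- Session 1:  BEGIN;  INSERT INTO excl_race VALUES (1);  -- older XID, waits
-- Session 2:  BEGIN;  INSERT INTO excl_race VALUES (1);  -- super-deletes, waits
-- Session 1:  COMMIT;                                    -- insertion completes
-- Session 2:  reports an exclusion constraint violation once session 1
--             commits (it would succeed instead if session 1 aborted)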


- Heikki





Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-21 Thread Peter Geoghegan
On Sat, Feb 21, 2015 at 11:15 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 Ah, ok, I can see the confusion now.

Cool.

 Do we not wait on anything, and just declare that we're done? Then I
 think that breaks exclusion constraint enforcement, because we need to
 rescan the index to do that (i.e., goto retry). Do we wait on their
 token, as my most recent revision does, but *without* a pre-check, for
 regular inserters? Then I think that our old tuple could keep multiple
 other sessions spinning indefinitely.

 What I had in mind is that the winning inserter waits on the other
 inserter's token, without super-deleting. Like all inserts do today. So the
 above scenario becomes:

 * Session 1 physically inserts, and then checks for a conflict.

 * Session 2 physically inserts, and then checks for a conflict.

 * Session 1 sees session 2's conflicting TID. Session 1's XID is older, so
 it wins. It waits for session 2's token, without super-deleting.

 * Session 2 sees session 1's conflicting TID. It super deletes,
 releases token, and sleeps on session 1's token.

 * Session 1 wakes up. It looks at session 2's tuple again and sees that it
 was super-deleted. There are no further conflicts, so the insertion is
 complete, and it releases the token.

 * Session 2 wakes up. It looks at session 1's tuple again and sees that it's
 still there. It goes back to sleep, this time on session 1's XID.

 * Session 1 commits. Session 2 wakes up, sees that the tuple is still there,
 and throws a constraint violation error.

I think we're actually 100% in agreement, then. I just prefer to have
the second last step (the check without a promise tuple visible to
anyone made by the loser) occur as part of the pre-check that
happens anyway with ON CONFLICT IGNORE. Otherwise, you'll end up with
some much more complicated control flow that has to care about not
doing that twice for ON CONFLICT IGNORE...and for the benefit of what?
For regular inserters, that we don't actually care about fixing
unprincipled deadlocks for?

Are we on the same page now?
-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-20 Thread Peter Geoghegan
On Fri, Feb 20, 2015 at 1:07 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 Then I refuse to believe that the livelock hazard exists, without the
 pre-check. If you have a livelock scenario in mind, it really shouldn't be
 that difficult to write down the list of steps.

I just meant practical, recreatable steps - a test case. I should
emphasize that what I'm saying is not that important. Even if I am
wrong, I'm not suggesting that we do anything that we don't both agree
is needed anyway. If I'm right, that is merely an impediment to
incrementally committing the work by fixing exclusion constraints,
AFAICT. Ultimately, that isn't all that important. Anyway, here is how
I think livelocks could happen, in theory, with regular insertion into
a table with exclusion constraints, with your patch [1] applied (which
has no pre-check):

* Session 1 physically inserts, and then checks for a conflict.

* Session 2 physically inserts, and then checks for a conflict.

* Session 1 sees session 2's conflicting TID, then super deletes and
releases token.

* Session 2 sees session 1's conflicting TID, then super deletes and
releases token.

* Session 1 waits or tries to wait on session 2's token. It isn't held
anymore, or is only held for an instant.

* Session 2 waits or tries to wait on session 1's token. It isn't held
anymore, or is only held for an instant.

* Session 1 restarts from scratch, having made no useful progress in
respect of the slot being inserted.

* Session 2 restarts from scratch, having made no useful progress in
respect of the slot being inserted.

(Livelock)

If there is a tie-break on XID (you omitted this from your patch [1]
but acknowledge it as an omission), then that doesn't really change
anything (without adding a pre-check, too). That's because: What do we
actually do or not do when we're the oldest XID, that gets to win?
Do we not wait on anything, and just declare that we're done? Then I
think that breaks exclusion constraint enforcement, because we need to
rescan the index to do that (i.e., goto retry). Do we wait on their
token, as my most recent revision does, but *without* a pre-check, for
regular inserters? Then I think that our old tuple could keep multiple
other sessions spinning indefinitely. Only by checking for conflicts
*first*, without a non-super-deleted physical index tuple, can these
other sessions notice that there is a conflict *without creating more
conflicts*, which is what I believe is really needed. At the very
least it's something I'm much more comfortable with, and that seems
like reason enough to do it that way, given that we don't actually
care about unprincipled deadlocks with regular inserters with
exclusion constraints. Why take the chance with livelocking such
inserters, though?

I hope that we don't get bogged down on this, because, as I said, it
doesn't matter too much. I'm tempted to concede the point for that
reason, since the livelock is probably virtually impossible. I'm just
giving you my opinion on how to make the handling of exclusion
constraints as reliable as possible.

Thanks

[1] http://www.postgresql.org/message-id/54dfc6f8.5050...@vmware.com
-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-20 Thread Heikki Linnakangas

On 02/20/2015 10:39 PM, Peter Geoghegan wrote:

On Fri, Feb 20, 2015 at 11:34 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:

So, um, are you agreeing that there is no problem? Or did I misunderstand?
If you see a potential issue here, can you explain it as a simple list of
steps, please.


Yes. I'm saying that AFAICT, there is no livelock hazard provided
other sessions must do the pre-check (as they must for ON CONFLICT
IGNORE). So I continue to believe that they must pre-check, which you
questioned.
...
Hard to break down the problem into steps, since it isn't a problem
that I was able to recreate (as a noticeable livelock).


Then I refuse to believe that the livelock hazard exists, without the 
pre-check. If you have a livelock scenario in mind, it really shouldn't 
be that difficult to write down the list of steps.

- Heikki





Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-20 Thread Peter Geoghegan
On Fri, Feb 20, 2015 at 11:34 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 So, um, are you agreeing that there is no problem? Or did I misunderstand?
 If you see a potential issue here, can you explain it as a simple list of
 steps, please.

Yes. I'm saying that AFAICT, there is no livelock hazard provided
other sessions must do the pre-check (as they must for ON CONFLICT
IGNORE). So I continue to believe that they must pre-check, which you
questioned.

The only real downside here (with not doing the pre-check for regular
inserters with exclusion constraints) is that we can't fix
unprincipled deadlocks for general exclusion constraint inserters
(since we don't want to make them pre-check), but we don't actually
care about that directly. But it also means that it's hard to see how
we can incrementally commit such a fix to break down the patch into
more manageable chunks, which is a little unfortunate.

Hard to break down the problem into steps, since it isn't a problem
that I was able to recreate (as a noticeable livelock). But the point
of what I was saying is that the first phase pre-check allows us to
notice the one session that got stuck waiting in the second phase
(other conflicters notice its tuple, and so don't insert a new tuple
this iteration).

Conventional insertion with exclusion constraints inserts first and
only then looks for conflicts (since there is no pre-check). More
concretely, if you're the transaction that does not break here,
within check_exclusion_or_unique_constraint(), and so unexpectedly
waits in the second phase:

+ /*
+  * At this point we have either a conflict or a potential conflict. If
+  * we're not supposed to raise error, just return the fact of the
+  * potential conflict without waiting to see if it's real.
+  */
+ if (violationOK && !wait)
+ {
+ /*
+  * For unique indexes, detecting conflict is coupled with physical
+  * index tuple insertion, so we won't be called for recheck
+  */
+ Assert(!indexInfo->ii_Unique);
+
+ conflict = true;
+ if (conflictTid)
+ *conflictTid = tup->t_self;
+
+ /*
+  * Livelock insurance.
+  *
+  * When doing a speculative insertion pre-check, we cannot have an
+  * unprincipled deadlock with another session, fundamentally
+  * because there is no possible mutual dependency, since we only
+  * hold a lock on our token, without attempting to lock anything
+  * else (maybe this is not the first iteration, but no matter;
+  * we'll have super deleted and released insertion token lock if
+  * so, and all locks needed are already held.  Also, our XID lock
+  * is irrelevant.)
+  *
+  * In the second phase, where there is a re-check for conflicts, we
+  * can't deadlock either (we never lock another thing, since we
+  * don't wait in that phase).  However, a theoretical livelock
+  * hazard exists:  Two sessions could each see each other's
+  * conflicting tuple, and each could go and delete, retrying
+  * forever.
+  *
+  * To break the mutual dependency, we may wait on the other xact
+  * here over our caller's request to not do so (in the second
+  * phase).  This does not imply the risk of unprincipled deadlocks
+  * either, because if we end up unexpectedly waiting, the other
+  * session will super delete its own tuple *before* releasing its
+  * token lock and freeing us, and without attempting to wait on us
+  * to release our token lock.  We'll take another iteration here,
+  * after waiting on the other session's token, not find a conflict
+  * this time, and then proceed (assuming we're the oldest XID).
+  *
+  * N.B.:  Unprincipled deadlocks are still theoretically possible
+  * with non-speculative insertion with exclusion constraints, but
+  * this seems inconsequential, since an error was inevitable for
+  * one of the sessions anyway.  We only worry about speculative
+  * insertion's problems, since they're likely with idiomatic usage.
+  */
+ if (TransactionIdPrecedes(xwait, GetCurrentTransactionId()))
+ break;  /* go and super delete/restart speculative insertion */
+ }
+

Then you must successfully insert when you finally goto retry and do
another iteration within check_exclusion_or_unique_constraint(). The
other conflicters can't have failed to notice your pre-existing tuple,
and can't have failed to super delete their own tuples before you are
woken (provided you really are the single oldest XID).

Is that any clearer?
-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-20 Thread Heikki Linnakangas

On 02/19/2015 10:09 PM, Peter Geoghegan wrote:

On Thu, Feb 19, 2015 at 11:10 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:

I fully agree with your summary here. However, why should we suppose
that while we wait, the other backends don't both delete and then
re-insert their tuple? They need the pre-check to know not to
re-insert their tuple (seeing our tuple, immediately after we wake as
the preferred backend with the older XID) in order to break the race.
But today, exclusion constraints are optimistic in that the insert
happens first, and only then the check. The pre-check turns that the
other way around, in a limited though necessary sense.


I'm not sure I understand exactly what you're saying, but AFAICS the
pre-check doesn't completely solve that either. It's entirely possible that
the other backend deletes its tuple, our backend then performs the
pre-check, and the other backend re-inserts its tuple again. Sure, the
pre-check reduces the chances, but we're talking about a rare condition to
begin with, so I don't think it makes sense to add much code just to reduce
the chances further.


But super deletion occurs *before* releasing the token lock, which is
the last thing we do before looping around and starting again. So iff
we're the oldest XID, the one that gets to win by unexpectedly
waiting on another's token in our second phase (second call to
check_exclusion_or_unique_constraint()), we will not, in fact, see
anyone else's tuple, because they'll all be forced to go through the
first phase and find our pre-existing, never-deleted tuple, so we
can't see any new tuple from them. And, because they super delete
before releasing their token, they'll definitely have super deleted
when we're woken up, so we can't see any old/existing tuple either. We
have our tuple inserted this whole time - ergo, we do, in fact, win
reliably.


So, um, are you agreeing that there is no problem? Or did I 
misunderstand? If you see a potential issue here, can you explain it as 
a simple list of steps, please.


- Heikki




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-19 Thread Peter Geoghegan
On Thu, Feb 19, 2015 at 11:10 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 I fully agree with your summary here. However, why should we suppose
 that while we wait, the other backends don't both delete and then
 re-insert their tuple? They need the pre-check to know not to
 re-insert their tuple (seeing our tuple, immediately after we wake as
 the preferred backend with the older XID) in order to break the race.
 But today, exclusion constraints are optimistic in that the insert
 happens first, and only then the check. The pre-check turns that the
 other way around, in a limited though necessary sense.

 I'm not sure I understand exactly what you're saying, but AFAICS the
 pre-check doesn't completely solve that either. It's entirely possible that
 the other backend deletes its tuple, our backend then performs the
 pre-check, and the other backend re-inserts its tuple again. Sure, the
 pre-check reduces the chances, but we're talking about a rare condition to
 begin with, so I don't think it makes sense to add much code just to reduce
 the chances further.

But super deletion occurs *before* releasing the token lock, which is
the last thing we do before looping around and starting again. So iff
we're the oldest XID, the one that gets to win by unexpectedly
waiting on another's token in our second phase (second call to
check_exclusion_or_unique_constraint()), we will not, in fact, see
anyone else's tuple, because they'll all be forced to go through the
first phase and find our pre-existing, never-deleted tuple, so we
can't see any new tuple from them. And, because they super delete
before releasing their token, they'll definitely have super deleted
when we're woken up, so we can't see any old/existing tuple either. We
have our tuple inserted this whole time - ergo, we do, in fact, win
reliably.

The fly in the ointment is regular inserters, perhaps, but we've
agreed that they're not too important here, and even when that happens
we're in deadlock land, not livelock land, which is obviously a
nicer place to live.

Does that make sense?
-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-19 Thread Heikki Linnakangas

On 02/18/2015 11:43 PM, Peter Geoghegan wrote:

Heikki seemed to think that the deadlock problems were not really
worth fixing independently of ON CONFLICT UPDATE support, but rather
represented a useful way of committing code incrementally. Do I have
that right?


Yes.


The way I chose to break the livelock (what I call livelock
insurance) does indeed involve comparing XIDs, which Heikki thought
most promising. But it also involves having the oldest XID wait on
another session's speculative token in the second phase, which
ordinarily does not occur in the second
phase/check_exclusion_or_unique_constraint() call. The idea is that
one session is guaranteed to be the waiter that has a second iteration
within its second-phase check_exclusion_or_unique_constraint() call,
where (following the super deletion of conflict TIDs by the other
conflicting session or sessions) reliably finds that it can proceed
(those other sessions are denied the opportunity to reach their second
phase by our second phase waiter's still-not-super-deleted tuple).

However, it's difficult to see how to map this on to general exclusion
constraint insertion + enforcement. In Heikki's recent sketch of this
[1], there is no pre-check, since that is considered an UPSERT thing
deferred until a later patch, and therefore my scheme here cannot work
(recall that for plain inserts with exclusion constraints, we insert
first and check last). I have a hard time justifying adding the
pre-check for plain exclusion constraint inserters given the total
lack of complaints from the field regarding unprincipled deadlocks,
and given the fact that it would probably make the code more
complicated than it needs to be. It is critical that there be a
pre-check to prevent livelock, though, because otherwise the
restarting sessions can go straight to their second phase
(check_exclusion_or_unique_constraint() call), without ever realizing
that they should not do so.


Hmm. I haven't looked at your latest patch, but I don't think you need 
to pre-check for this to work. To recap, the situation is that two 
backends have already inserted the heap tuple, and then see that the 
other backend's tuple conflicts. To avoid a livelock, it's enough that 
one backend super-deletes its own tuple first, before waiting for the 
other to complete, while the other other backend waits without 
super-deleting. No?



It seems like the livelock insurance is pretty close to or actually
free, and doesn't imply that a second phase wait for token needs to
happen too often. With heavy contention on a small number of possible
tuples (100), and 8 clients deleting and inserting, I can see it
happening only once every couple of hundred milliseconds on my laptop.
It's not hard to imagine why the code didn't obviously appear to be
broken before now, as the window for an actual livelock must have been
small. Also, deadlocks bring about more deadlocks (since the deadlock
detector kicks in), whereas livelocks do not bring about more
livelocks.


It might be easier to provoke the livelocks with a GiST opclass that's 
unusually slow. I wrote the attached opclass for the purpose of testing 
this a while ago, but I haven't actually gotten around to do much with 
it. It's called useless_gist, because it's a GiST opclass for 
integers, like btree_gist, but the penalty and picksplit functions are 
totally dumb. The result is that the tuples are put to the index in 
pretty much random order, and every scan has to scan the whole index. 
I'm posting it here, in the hope that it happens to be useful, but I 
don't really know if it is.
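
(The intended usage is simply an exclusion constraint whose GiST
support is deliberately slow and unselective; the extension and table
names here are illustrative and may not match the tarball exactly:)

CREATE EXTENSION useless_gist;

CREATE TABLE livelock_test (
    val int,
    EXCLUDE USING gist (val WITH =)
);
-- every insert now has to scan most of the randomly ordered index,
-- widening the window between physical insertion and the conflict check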


- Heikki



useless_gist.tar.gz
Description: application/gzip



Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-19 Thread Peter Geoghegan
On Thu, Feb 19, 2015 at 5:21 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 Hmm. I haven't looked at your latest patch, but I don't think you need to
 pre-check for this to work. To recap, the situation is that two backends
 have already inserted the heap tuple, and then see that the other backend's
 tuple conflicts. To avoid a livelock, it's enough that one backend
 super-deletes its own tuple first, before waiting for the other to complete,
 while the other backend waits without super-deleting. No?

I fully agree with your summary here. However, why should we suppose
that while we wait, the other backends don't both delete and then
re-insert their tuple? They need the pre-check to know not to
re-insert their tuple (seeing our tuple, immediately after we wake as
the preferred backend with the older XID) in order to break the race.
But today, exclusion constraints are optimistic in that the insert
happens first, and only then the check. The pre-check turns that the
other way around, in a limited though necessary sense.

Granted, it's unlikely that we'd livelock due to one session always
deleting and then nullifying that by re-inserting in time, but the
theoretical risk seems real. Therefore, I'm not inclined to bother
committing something without a pre-check, particularly since we're not
really interested in fixing unprincipled deadlocks for ordinary
exclusion constraint inserters (useful to know that you also think
that doesn't matter, BTW). Does that make sense?

This is explained within livelock insurance new-to-V2.3 comments in
check_exclusion_or_unique_constraint().  (Not that I think that
explanation is easier to follow than this one).

 It might be easier to provoke the livelocks with a GiST opclass that's
 unusually slow. I wrote the attached opclass for the purpose of testing this
 a while ago, but I haven't actually gotten around to do much with it. It's
 called useless_gist, because it's a GiST opclass for integers, like
 btree_gist, but the penalty and picksplit functions are totally dumb. The
 result is that the tuples are put to the index in pretty much random order,
 and every scan has to scan the whole index. I'm posting it here, in the hope
 that it happens to be useful, but I don't really know if it is.

Thanks. I'll try and use this for testing. Haven't been able to break
exclusion constraints with the jjanes_upsert test suite in a long
time, now.

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-19 Thread Heikki Linnakangas

On 02/19/2015 08:16 PM, Peter Geoghegan wrote:

On Thu, Feb 19, 2015 at 5:21 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:

Hmm. I haven't looked at your latest patch, but I don't think you need to
pre-check for this to work. To recap, the situation is that two backends
have already inserted the heap tuple, and then see that the other backend's
tuple conflicts. To avoid a livelock, it's enough that one backend
super-deletes its own tuple first, before waiting for the other to complete,
while the other backend waits without super-deleting. No?


I fully agree with your summary here. However, why should we suppose
that while we wait, the other backends don't both delete and then
re-insert their tuple? They need the pre-check to know not to
re-insert their tuple (seeing our tuple, immediately after we wake as
the preferred backend with the older XID) in order to break the race.
But today, exclusion constraints are optimistic in that the insert
happens first, and only then the check. The pre-check turns that the
other way around, in a limited though necessary sense.


I'm not sure I understand exactly what you're saying, but AFAICS the 
pre-check doesn't completely solve that either. It's entirely possible 
that the other backend deletes its tuple, our backend then performs the 
pre-check, and the other backend re-inserts its tuple again. Sure, the 
pre-check reduces the chances, but we're talking about a rare condition 
to begin with, so I don't think it makes sense to add much code just to 
reduce the chances further.


- Heikki





Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-19 Thread Heikki Linnakangas

On 02/16/2015 11:31 AM, Andres Freund wrote:

On 2015-02-16 10:00:24 +0200, Heikki Linnakangas wrote:

I'm starting to think that we should bite the bullet and consume an infomask
bit for this. The infomask bits are a scarce resource, but we should use
them when it makes sense. It would be good for forensic purposes too, to
leave a trace that a super-deletion happened.


Well, we IIRC don't have any left right now. We could reuse
MOVED_IN|MOVED_OUT, as that never was set in the past. But it'd
essentially use two infomask bits forever...


t_infomask is all used, but t_infomask2 has two bits left:


/*
 * information stored in t_infomask2:
 */
#define HEAP_NATTS_MASK         0x07FF  /* 11 bits for number of attributes */
/* bits 0x1800 are available */
#define HEAP_KEYS_UPDATED       0x2000  /* tuple was updated and key cols
                                         * modified, or tuple deleted */
#define HEAP_HOT_UPDATED        0x4000  /* tuple was HOT-updated */
#define HEAP_ONLY_TUPLE         0x8000  /* this is heap-only tuple */

#define HEAP2_XACT_MASK         0xE000  /* visibility-related bits */


- Heikki





Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-16 Thread Peter Geoghegan
On Mon, Feb 16, 2015 at 12:00 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 So INSERT ON CONFLICT IGNORE on a table with an exclusion constraint might
 fail. I don't like that. The point of having the command in the first place
 is to deal with concurrency issues. If it sometimes doesn't work, it's
 broken.

I don't like it either, although I think it wouldn't come up very
often with exclusion constraints - basically, it would rarely be
noticed due to the different use cases. To be honest, in suggesting
this idea I was hedging against us not figuring out a solution to that
problem in time. You didn't like my suggestion of dropping IGNORE
entirely, either. I'll do my best to come up with something, but I'm
uncomfortable that at this late stage, ON CONFLICT IGNORE support for
exclusion constraints seems like a risk to the entire project.

I ask that if push comes to shove you show some flexibility here. I'll
try my best to ensure that you don't have to even consider committing
something with a notable omission. You don't have to give me an answer
to this now.

 The idea of comparing the TIDs of the tuples as a tie-breaker seems most
 promising to me. If the conflicting tuple's TID is smaller than the inserted
 tuple's, super-delete and wait. Otherwise, wait without super-deletion.
 That's really simple. Do you see a problem with that?

No. I'll work on that, and see how it stands up to stress testing.
Come to think of it, that does seem most promising.

 BTW, it would good to explain somewhere in comments or a README the term
 unprincipled deadlock. It's been a useful concept, and hard to grasp. If
 you defined it once, with examples and everything, then you could just have
 See .../README in other places that need to refer it.

Agreed. I have described those in the revised executor README, in case
you missed that. Do you think they ought to have their own section?
Note that hash indexes have unprincipled deadlock issues, but no one
has bothered to fix them.

 Whatever works, really. I can't say that the performance implications
 of acquiring that hwlock are at the forefront of my mind. I never
 found that to be a big problem on an 8 core box, relative to vanilla
 INSERTs, FWIW - lock contention is not normal, and may be where any
 heavyweight lock costs would really be encountered.

 Oh, cool. I guess the fast-path in lmgr.c kicks ass, then :-).

Seems that way. But even if that wasn't true, it wouldn't matter,
since I don't see that we have a choice.

 I don't see how storing the promise token in heap tuples buys us not
 having to involve heavyweight locking of tokens. (I think you may have
 made a thinko in suggesting otherwise)

 It wouldn't get rid of heavyweight locks, but it would allow getting rid of
 the procarray changes. The inserter could generate a token, then acquire the
 hw-lock on the token, and lastly insert the heap tuple with the correct
 token.

Do you really think that's worth the disk overhead? I generally agree
with the zero overhead principle, which is that anyone not using the
feature should pay no price for it (or vanishingly close to no
price). Can't say that I have a good sense of the added distributed
cost (if any) to be paid by adding new fields to the PGPROC struct
(since the PGXACT struct was added in 9.2). Is your only concern that
the PGPROC array will now have more fields, making it more
complicated? Surely that's a price worth paying to avoid making these
heap tuples artificially somewhat larger. You're probably left with
tuples that are at least 8 bytes larger, once alignment is taken into
consideration. That doesn't seem great.

 That couldn't work without further HeapTupleSatisfiesDirty() logic.

 Why not?

Just meant that it wasn't sufficient to check xmin == xmax, while
allowing that something like that could work with extra work (e.g. the
use of infomask bits)...

 Ok, so we can't unconditionally always ignore tuples with xmin==xmax. We
 would need an infomask bit to indicate super-deletion, and only ignore it if
 the bit is set.

...which is what you say here.

 I'm starting to think that we should bite the bullet and consume an infomask
 bit for this. The infomask bits are a scarce resource, but we should use
 them when it makes sense. It would be good for forensic purposes too, to
 leave a trace that a super-deletion happened.

 I too think the tqual.c changes are ugly. I can't see a way around
 using a token lock, though - I would only consider (and only consider)
 refining the interface by which a waiter becomes aware that it must
 wait on the outcome of the inserting xact's speculative
 insertion/value lock (and in particular, that is should not wait on
 xact outcome). We clearly need something to wait on that isn't an
 XID...heavyweight locking of a token value is the obvious way of doing
 that.


 Yeah.

 Jim Nasby said something about setting the HEAP_XMIN_INVALID hint bit.
 Maybe he is right...if that can be made to be reliable (always
 

Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-16 Thread Peter Geoghegan
On Mon, Feb 16, 2015 at 4:11 PM, Peter Geoghegan p...@heroku.com wrote:
 Jim Nasby said something about setting the HEAP_XMIN_INVALID hint bit.
 Maybe he is right...if that can be made to be reliable (always
 WAL-logged), it could be marginally better than setting xmin to
 invalidTransactionId.


 I'm not a big fan of that. The xmin-invalid bit is currently always just a
 hint.

 Well, Andres makes the point that that isn't quite so.

Hmm. So the tqual.c routines actually check if
(HeapTupleHeaderXminInvalid(tuple)). Which is:

#define HeapTupleHeaderXminInvalid(tup) \
( \
((tup)->t_infomask & (HEAP_XMIN_COMMITTED|HEAP_XMIN_INVALID)) == \
HEAP_XMIN_INVALID \
)

What appears to happen if I try to only use the HEAP_XMIN_INVALID bit
(and not explicitly set xmin to InvalidTransactionId, and not
explicitly check that xmin isn't InvalidTransactionId in each
HeapTupleSatisfies* routine) is that after a little while, Jeff Janes'
tool shows up spurious unique violations, as if some tuple has become
visible when it shouldn't have. I guess this is because the
HEAP_XMIN_COMMITTED hint bit can still be set, which in effect
invalidates the HEAP_XMIN_INVALID hint bit.

It takes about 2 minutes before this happens for the first time when
fsync = off, following a fresh initdb, which is unacceptable. I'm not
sure if it's worth trying to figure out how HEAP_XMIN_COMMITTED gets
set. Not that I'm 100% sure that that's what this really is; it just
seems very likely.

Attached broken patch (broken_visible.patch) shows the changes made to
reveal this problem. Only the changes to tqual.c and heap_delete()
should matter here, since I did not test recovery.

I also thought about unifying the check for if
(TransactionIdEquals(HeapTupleHeaderGetRawXmin(tuple),
InvalidTransactionId)) with the conventional
HeapTupleHeaderXminInvalid() macro, and leaving everything else as-is.
This is no good either, and for similar reasons - control often won't
reach the macro, which is behind a check of if
(!HeapTupleHeaderXminCommitted(tuple)).

The best I think we can hope for is having a dedicated if
(TransactionIdEquals(HeapTupleHeaderGetRawXmin(tuple),
InvalidTransactionId)) macro HeapTupleHeaderSuperDeleted() to do the
work each time, which does not need to be checked so often. A second
attached patch (compact_tqual_works.patch - which is non-broken,
AFAICT) shows how this is possible, while also moving the check
further down within each tqual.c routine (which seems more in keeping
with the fact that super deletion is a relatively obscure concept). I
haven't been able to break this variant using my existing test suite,
so this seems promising as a way of reducing tqual.c clutter. However,
as I said the other day, this is basically just polish.

-- 
Peter Geoghegan
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 0aa3e57..b777c56 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -2899,7 +2899,7 @@ l1:
 	}
 	else
 	{
-		HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
+		HeapTupleHeaderSetXminInvalid(tp.t_data);
 	}
 
 	/* Make sure there is no forward chain link in t_ctid */
@@ -7382,12 +7382,12 @@ heap_xlog_delete(XLogReaderState *record)
 		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
+		if (xlrec->flags & XLOG_HEAP_KILLED_SPECULATIVE_TUPLE)
+			HeapTupleHeaderSetXminInvalid(htup);
+
 		fix_infomask_from_infobits(xlrec->infobits_set,
 								   &htup->t_infomask, &htup->t_infomask2);
-		if (!(xlrec->flags & XLOG_HEAP_KILLED_SPECULATIVE_TUPLE))
-			HeapTupleHeaderSetXmax(htup, xlrec->xmax);
-		else
-			HeapTupleHeaderSetXmin(htup, InvalidTransactionId);
+		HeapTupleHeaderSetXmax(htup, xlrec->xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 
 		/* Mark the page as a candidate for pruning */
diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c
index 99bb417..fd857b1 100644
--- a/src/backend/utils/time/tqual.c
+++ b/src/backend/utils/time/tqual.c
@@ -170,13 +170,6 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 	Assert(ItemPointerIsValid(htup->t_self));
 	Assert(htup->t_tableOid != InvalidOid);
 
-	/*
-	 * Never return super-deleted tuples
-	 */
-	if (TransactionIdEquals(HeapTupleHeaderGetRawXmin(tuple),
-			InvalidTransactionId))
-		return false;
-
 	if (!HeapTupleHeaderXminCommitted(tuple))
 	{
 		if (HeapTupleHeaderXminInvalid(tuple))
@@ -735,15 +728,6 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 	snapshot->xmin = snapshot->xmax = InvalidTransactionId;
 	snapshot->speculativeToken = 0;
 
-	/*
-	 * Never return super-deleted tuples
-	 *
-	 * XXX:  Comment this code out and you'll get conflicts within
-	 * ExecLockUpdateTuple(), which result in an infinite loop.
-	 */
-	if (TransactionIdEquals(HeapTupleHeaderGetRawXmin(tuple),
-			InvalidTransactionId))
-		return false;
 
 	if 

Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE} 2.0

2015-02-16 Thread Andres Freund
On 2015-02-16 10:00:24 +0200, Heikki Linnakangas wrote:
 On 02/16/2015 02:44 AM, Peter Geoghegan wrote:
 Are we happy with acquiring the SpeculativeInsertLock heavy-weight lock for
 every insertion? That seems bad for performance reasons. Also, are we happy
 with adding the new fields to the proc array? Another alternative that was
 discussed was storing the speculative insertion token on the heap tuple
 itself. (See
 http://www.postgresql.org/message-id/52d00d2d.6030...@vmware.com)
 
 Whatever works, really. I can't say that the performance implications
 of acquiring that hwlock are at the forefront of my mind. I never
 found that to be a big problem on an 8 core box, relative to vanilla
 INSERTs, FWIW - lock contention is not normal, and may be where any
 heavyweight lock costs would really be encountered.
 
 Oh, cool. I guess the fast-path in lmgr.c kicks ass, then :-).

I don't think it actually uses the fast path, does it? IIRC that's
restricted to LOCKTAG_RELATION when done via LockAcquireExtended + open
coded for the VirtualXactLock table.

I'm not super worried atm either. Worth checking, but probably not worth
obsessing over.

 Besides, why should one transaction be entitled to insert a
 conflicting value tuple just because a still running transaction
 deleted it (having also inserted the tuple itself)? Didn't one
 prototype version of value locking #2 have that as a bug [1]? In fact,
 originally, didn't the xmin set to invalid thing come immediately
 from realizing that that wasn't workable?
 
 Ah, right. So the problem was that some code might assume that if you insert
 a row, delete it in the same transaction, and then insert the same value
 again, the 2nd insertion will always succeed because the previous insertion
 effectively acquired a value-lock on the key.
 
 Ok, so we can't unconditionally always ignore tuples with xmin==xmax. We
 would need an infomask bit to indicate super-deletion, and only ignore it if
 the bit is set.
 
 I'm starting to think that we should bite the bullet and consume an infomask
 bit for this. The infomask bits are a scarce resource, but we should use
 them when it makes sense. It would be good for forensic purposes too, to
 leave a trace that a super-deletion happened.

Well, we IIRC don't have any left right now. We could reuse
MOVED_IN|MOVED_OUT, as that never was set in the past. But it'd
essentially use two infomask bits forever...

 Jim Nasby said something about setting the HEAP_XMIN_INVALID hint bit.
 Maybe he is right...if that can be made to be reliable (always
 WAL-logged), it could be marginally better than setting xmin to
 invalidTransactionId.
 
 I'm not a big fan of that. The xmin-invalid bit is currently always just a
 hint.

We already rely on XMIN_INVALID being set correctly for
freezing. C.f. HeapTupleHeaderXminFrozen, HeapTupleHeaderXminInvalid, et
al. So it'd not necessarily end up being that bad. And the super
deletion could easily just set it in the course of it's WAL logging.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




  1   2   3   4   >