Re: [HACKERS] Fwd: Proper query implementation for Postgresql driver

2014-09-30 Thread Shay Rojansky
Thanks for all the answers.

Tom:

 FWIW, I'd go with text results, especially if you already have code to
 deal with that.  PG's on-the-wire binary formats are more efficient to
 process in some absolute sense, but they're hardly free: you need to
 consider byte endianness for integers and floats, integer vs float
 encoding for timestamps, the difference between PG's timestamp
 representation and whatever native timestamps are on your platform,
 etc etc.  It's not a trivial amount of code to deal with.  And in the
 end I think the efficiency gain is pretty marginal compared to the raw
 costs of data transfer, especially if you're not on the same physical
 machine as the server.

In my mind data transfer was actually also a good reason to switch to
binary. I don't know what PG's binary timestamp looks like, but isn't it a
safe assumption that it's much more economical (and easy to parse) than any
textual representation? By the way, we already encode numbers and floats as
binary (and handle endianness) in certain contexts (parameters of prepared
statements). I don't underestimate the effort of binary implementation, but
if it's a one-time systematic effort it seems to be worth it?
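To give a sense of the size argument: PG's binary timestamp (with the default integer_datetimes setting) is a signed 64-bit count of microseconds since 2000-01-01, and binary integers are big-endian two's complement. A minimal decoding sketch (Python used purely for illustration; the function names are mine, not any driver's API):

```python
import struct
from datetime import datetime, timedelta

# All of PG's binary wire values are big-endian; a binary timestamp is
# microseconds since the PostgreSQL epoch, 2000-01-01 00:00:00.
PG_EPOCH = datetime(2000, 1, 1)

def parse_int4(buf):
    """Decode a 4-byte big-endian signed integer (int4)."""
    return struct.unpack('>i', buf)[0]

def parse_timestamp(buf):
    """Decode an 8-byte binary timestamp into a datetime."""
    (micros,) = struct.unpack('>q', buf)
    return PG_EPOCH + timedelta(microseconds=micros)
```

The 8 bytes on the wire compare favourably with a text form like "2014-09-30 12:34:56.789012" (26 bytes), and parsing is a fixed-width unpack rather than a date parse.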

 Having said that, there has been some talk of letting client libraries
 supply a whitelist of data types that they'd like to receive in binary.
 We could do that (modulo questions of whether it's worth incurring a
 protocol version change for), but I'm unclear on what the use case really
 is for that.  Wouldn't you then have to provide an inconsistent API to
 users of your driver, that is some things are presented in text and others
 not?  Is that really a great thing?

Whitelisting binary types would solve the issue entirely for us here. As a
driver, I wouldn't be exposing the whitelist to users in any way; I would
simply use it to tell PostgreSQL (ideally at the session level) which types
we want to receive in binary. The user would access the data in the usual
way, with no perceivable API change as far as I can see.
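For what it's worth, a driver can already implement such a whitelist per query, without a protocol change: the extended protocol's Bind message carries one result-format code per column (0 = text, 1 = binary), so once a Describe has revealed the result columns' type OIDs, the driver can request binary only for types it knows how to decode. A sketch (the particular OID set is illustrative):

```python
# Type OIDs this hypothetical driver can decode in binary; these are the
# well-known built-in OIDs (23 = int4, 20 = int8, 701 = float8,
# 1114 = timestamp). Anything else falls back to text.
BINARY_CAPABLE_OIDS = {20, 23, 701, 1114}

def result_format_codes(column_type_oids):
    """Per-column result-format codes for a Bind message:
    1 = binary, 0 = text."""
    return [1 if oid in BINARY_CAPABLE_OIDS else 0
            for oid in column_type_oids]

# For result columns (int4, text, timestamp):
# result_format_codes([23, 25, 1114]) -> [1, 0, 1]
```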

Abhijit:

 If you're willing to hand the user an unprocessed string, why can't that
 be the binary encoding just as well as text?

I might be mistaken, but in the case of textual encoding I can just hand
over the text, as I got it from PG, to the user and let them deal with it.
With binary, I get a blob of bytes that has no meaning to anyone. I can
hand it over to the user, but they can't be expected to be able to do
anything with it...

Tom and Atri:

 TW, libpqtypes (http://libpqtypes.esilo.com) might be worth
 studying as well.  I've not used it myself, but it claims to
 offer datatype-extensible processing of binary formats.

Thanks for the suggestion, I'll take a look. Since we're a pure .NET
implementation, actual use of libpqtypes won't be possible, but there's
definitely something to learn there. Although given how the protocol
currently looks, I can't really see what could be done to provide magical
support for binary encoding of arbitrary, unknown types...?

Craig:

 Even if you can't get rid of text support, dropping simple query
 protocol support and the need to support client-side parameter binding
 may well be a pleasant improvement.

I definitely agree when it comes to dropping client-side parameter binding
(and there seems to be an agreement on that between the devs). But for the
case of non-parameterized queries, there doesn't seem to be any benefit to
using the extended protocol over the simple one (if you're still doing
text), is there?

 It's also possible for a type not to have send/recv functions, i.e. to
 support text-only use.

In that case, what would be the behavior of selecting such a type with an
extended query that specifies all results in binary? A PG error?

 You could reasonably require that all user defined extension types must
 support binary I/O. This will probably be fine in practice. As you said,
 though, users would then have to install a plugin for nPgSQL for each
 custom type they wished to use because nPgSQL won't otherwise know what
 to do with the binary data.

This is the only true remaining point of difficulty for me. We could bite
the bullet, sit down and implement binary for everything built-in; but
eliminating the free use of extensions seems like a no-go.

 That's pretty much what PgJDBC does, playing with extra_float_digits,
 client_encoding, TimeZone, etc.
 It's not lovely.

It definitely isn't... And we have user complaints on several counts as
well. The problem is that messing around with extra_float_digits,
lc_monetary and the rest also affects some PostgreSQL functions that do
text conversion... With extra_float_digits, SELECT format('%s',
0.28::double precision) returns 0.280000000000000027 rather than 0.28. The
point is that the hacks we're doing to support the textual *wire* encoding
also impact non-wire functionality and affect users in unwanted ways.
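The leakage is easy to reproduce outside the server: float8 text output is essentially printf's %g with a precision of DBL_DIG (15) plus extra_float_digits significant digits, so bumping the GUC to 3 (as text-protocol drivers do to get round-trippable floats) changes what every text conversion of that value produces. A quick sketch:

```python
# PG's float8-to-text conversion (pre-shortest-round-trip output) uses
# %.*g with precision DBL_DIG + extra_float_digits; Python's %g rounds
# the same way, so the driver-visible difference can be shown directly:
DBL_DIG = 15

def float8_out(value, extra_float_digits=0):
    return "%.*g" % (DBL_DIG + extra_float_digits, value)

# float8_out(0.28)     -> '0.28'
# float8_out(0.28, 3)  -> '0.280000000000000027'
```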

 I would like to be able to specify a top-level option at Bind/Execute
 time that asks the server to send binary for 

Re: [HACKERS] Fwd: Proper query implementation for Postgresql driver

2014-09-30 Thread Tom Lane
[ too tired to respond to the other points, but: ]

Shay Rojansky r...@roji.org writes:
 It's also possible for a type not to have send/recv functions, i.e. to
 support text-only use.

 In that case, what would be the behavior of selecting such a type with an
 extended query that specifies all results in binary? A PG error?

Yup.

if (!OidIsValid(pt->typsend))
    ereport(ERROR,
            (errcode(ERRCODE_UNDEFINED_FUNCTION),
             errmsg("no binary output function available for type %s",
                    format_type_be(type))));

There's an exactly parallel error if you try to send a parameter in
binary when its datatype hasn't got a typreceive function.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [v9.5] Custom Plan API

2014-09-30 Thread Kouhei Kaigai
 On Mon, Sep 29, 2014 at 9:04 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
  Do we need to match the prototype of wrapper function with callback?
 
 Yes.
 
OK, I fixed up the part-2 patch so that the declaration of
GetSpecialCustomVar() matches the corresponding callback.

Also, some noise introduced into the part-3 patch by git-pull was removed.

Thanks,
--
NEC OSS Promotion Center / PG-Strom Project
KaiGai Kohei kai...@ak.jp.nec.com


pgsql-v9.5-custom-scan.part-3.v12.patch
Description: pgsql-v9.5-custom-scan.part-3.v12.patch


pgsql-v9.5-custom-scan.part-2.v12.patch
Description: pgsql-v9.5-custom-scan.part-2.v12.patch


pgsql-v9.5-custom-scan.part-1.v12.patch
Description: pgsql-v9.5-custom-scan.part-1.v12.patch



Re: [HACKERS] Escaping from blocked send() reprised.

2014-09-30 Thread Kyotaro HORIGUCHI
Thank you for reviewing. I'll take a close look at the patch tomorrow.

 I must say this scares the heck out of me. The current code goes
 through some trouble to not throw an error while in a recv()
 send(). For example, you removed the DoingCommandRead check from
 prepare_for_client_read(). There's a comment in postgres.c that says
 this:
 
  /*
   * (2) Allow asynchronous signals to be executed immediately if they
   * come in while we are waiting for client input. (This must be
   * conditional since we don't want, say, reads on behalf of COPY FROM
   * STDIN doing the same thing.)
   */
  DoingCommandRead = true;

Hmm. Sorry. That's my fault that I skipped over the issues about
COPY FROM STDIN.

 With the patch, we do allow asynchronous signals to be processed while
 blocked in COPY FROM STDIN. Maybe that's OK, but I don't feel
 comfortable just changing it. (the comment is now wrong, of course)

I don't see an actual problem, but I agree that the behavior should
not be changed.

 This patch also enables processing query cancel signals while blocked,
 not just SIGTERM. That's not good; we might be in the middle of
 sending a message, and we cannot just error out of that or we might
 violate the fe/be protocol. That's OK with a SIGTERM as you're
 terminating the connection anyway, and we have the PqCommBusy
 safeguard in place that prevents us from sending broken messages to
 the client, but that's not good enough if we wanted to keep the
 backend alive, as we won't be able to send anything to the client
 anymore.

OK, since what I want is escaping from a blocked send() only on
SIGTERM, it needs a mechanism different from the current
prepare_for_client_read().

 BTW, we've been talking about blocking in send(), but this patch also
 let's a recv() in e.g. COPY FROM STDIN to be interrupted. That's
 probably a good thing; surely you have exactly the same issues with
 that as with send(). But I didn't realize we had a problem with that
 too.

I see. (But it is merely a side-effect of my carelessness, as you know:)

 In summary, this patch is not ready as it is, but I think we can fix
 it. The key question is: is it safe to handle SIGTERM in the signal
 handler, calling the exit-handlers and exiting the backend, when
 blocked in a recv() or send()? It's a change in the pqcomm.c API; most
 pqcomm.c functions have not thrown errors or processed interrupts
 before. But looking at the callers, I think it's safe, and there isn't
 actually any comments explicitly saying that pqcomm.c will never throw
 errors.
 
 I propose the attached patch. It adds a new flag ImmediateDieOK, which
 is a weaker form of ImmediateInterruptOK that only allows handling a
 pending die-signal in the signal handler.
 
 Robert, others, do you see a problem with this?

The patch seems to exclude all the problems mentioned in the message;
I have no objection to it.

 Over IM, Robert pointed out that it's not safe to jump out of a signal
 handler with siglongjmp, when we're inside library calls, like in a
 callback called by OpenSSL. But even with current master branch,
 that's exactly what we do. In secure_raw_read(), we set
 ImmediateInterruptOK = true, which means that any incoming signal will
 be handled directly in the signal handler, which can mean
 elog(ERROR). Should we be worried? OpenSSL might get confused if
 control never returns to the SSL_read() or SSL_write() function that
 called secure_raw_read().

IMHO, the backend will soon die even if OpenSSL is confused. It seems a
bit brutal that a sudden cutoff occurs even when the socket is *not*
blocked, but the backend will soon die and the frontend will
immediately get ECONNRESET (..hmm, it is not mentioned in the manpages
of recv/read(2)) and should safely exit from OpenSSL.

I cannot run this patch right now, but it seems to be no problem.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: [HACKERS] Escaping from blocked send() reprised.

2014-09-30 Thread Kyotaro HORIGUCHI
Wow, thank you for the patch.

   0001: Allows WaitLatchOrSocket(WL_WRITABLE) without WL_READABLE. I've
 tested the poll() and select() implementations on linux and
 blindly patched up windows.
   0002: Put the socket the backend uses to communicate with the client
 into nonblocking mode as soon as latches are ready and use latches
 to wait. This probably doesn't work correctly without 0003, but
 seems easier to review separately.
   0003: Don't do sinval catchup and notify processing in signal
 handlers. It's quite cool that it worked that well so far, but it
 requires some complicated code and is rather fragile. 0002 allows
 to move that out of signal handlers and just use a latch
 there. This seems remarkably simpler:
  4 files changed, 69 insertions(+), 229 deletions(-)
   
   These aren't ready for commit, especially not 0003, but I think they are
   quite a good foundation for getting rid of the blocking in send(). I
   haven't added any interrupt processing after interrupted writes, but
   marked the relevant places with XXXs.
   
   With regard to 0002, I dislike the current need to do interrupt
   processing both in be-secure.c and be-secure-openssl.c. I guess we could
   solve that by returning something like EINTR from the ssl routines when
   they need further reads/writes and do all the processing in one place in
   be-secure.c.
   
   There's also some cleanup in 0002/0003 needed:
   prepare_for_client_read()/client_read_ended() aren't needed in that form
   anymore and should probably rather be something like
   CHECK_FOR_READ_INTERRUPT() or similar.  Similarly the
   EnableCatchupInterrupt()/DisableCatchupInterrupt() in autovacuum.c is
   pretty ugly.
   
   Btw, be-secure.c is really not a good name anymore...
   
   What do you think?
  
  I've invested some more time in this:
  0002 now makes sense on its own and doesn't change anything around the
   interrupt handling. Oh, and it compiles without 0003.
  0003 Sinval/notify processing got simplified further. There really isn't
   any need for DisableNotifyInterrupt/DisableCatchupInterrupt
   anymore. Also begin_client_read/client_read_ended don't make much
   sense anymore. Instead introduce ProcessClientReadInterrupt (which
   wants a better name).
  There's also a very WIP
  0004 Allows secure_read/write be interrupted when ProcDiePending is
   set. All of that happens via the latch mechanism, nothing happens
   inside signal handlers. So I do think it's quite an improvement
   over what's been discussed in this thread.
   But it (and the other approaches) do noticeably increase the
   likelihood of clients not getting the error message if the client
   isn't actually dead. The likelihood of write() being blocked
   *temporarily* due to normal bandwidth constraints is quite high
   when you consider COPY FROM and similar. Right now we'll wait till
   we can put the error message into the socket afaics.
  
  1-3 need some serious comment work, but I think the approach is
  basically sound. I'm much, much less sure about allowing send() to be
  interrupted.
 
 Kyotaro, could you check whether you can achieve what you want using
 0004?
 
 It's imo pretty clear that a fair amount of base work needs to be done
 and there's been a fair amount of progress made this fest. I think this
 can now be marked returned with feedback.

I myself am satisfied with Heikki's solution, and it seems ready for
commit. But I agree that the temporarily blocked state is seen often,
and that it breaks even a non-blocked socket. If we want to (or should)
avoid breaking a socket that is blocked, *temporarily or not*, even for
SIGTERM, this mechanism should be used.

Which way should we take?

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: [HACKERS] Escaping from blocked send() reprised.

2014-09-30 Thread Kyotaro HORIGUCHI
By the way,

 Sorry, I missed this message and only caught up when reading your CF
 status mail. I've attached three patches:

Could you let me know how to get the CF status mail?

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: [HACKERS] Escaping from blocked send() reprised.

2014-09-30 Thread Andres Freund
On 2014-09-26 21:02:16 +0300, Heikki Linnakangas wrote:
 I propose the attached patch. It adds a new flag ImmediateDieOK, which is a
 weaker form of ImmediateInterruptOK that only allows handling a pending
 die-signal in the signal handler.
 
 Robert, others, do you see a problem with this?

Per se I don't have a problem with it. There does exist the problem that
the user doesn't get an error message in more cases though. On the other
hand it's bad if any user can prevent the database from restarting.

 Over IM, Robert pointed out that it's not safe to jump out of a signal
 handler with siglongjmp, when we're inside library calls, like in a callback
 called by OpenSSL. But even with current master branch, that's exactly what
 we do. In secure_raw_read(), we set ImmediateInterruptOK = true, which means
 that any incoming signal will be handled directly in the signal handler,
 which can mean elog(ERROR). Should we be worried? OpenSSL might get confused
 if control never returns to the SSL_read() or SSL_write() function that
 called secure_raw_read().

But this is imo prohibitive. Yes, we've been doing it for a long while. But
no, that's not OK. It actually prompted me into prototyping the latch
thing (in some other thread). I don't think existing practice justifies
expanding it further.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Patch to support SEMI and ANTI join removal

2014-09-30 Thread David Rowley
On Tue, Sep 30, 2014 at 12:42 AM, Andres Freund and...@2ndquadrant.com
wrote:

 On 2014-09-29 22:42:57 +1300, David Rowley wrote:

  I've made a change to the patch locally to ignore foreign
  keys that are marked as deferrable.

 I have serious doubts about the general usefulness if this is only
 going to be useable in a very restricted set of circumstances (only one
 time plans, no deferrable keys). I think it'd be awesome to have the
 capability, but I don't think it's ok to restrict it that much.


I had a look to see what Oracle does in this situation and I was quite
shocked to see that they're blatantly just ignoring the fact that the
foreign key is being deferred. I tested by deferring the foreign key in a
transaction then updating the referenced record and I see that Oracle just
returns the wrong results, as they're just blindly removing the join. So it
appears that they've not solved this one very well.


 To me that means you can't make the decision at plan time, but need to
 move it to execution time. It really doesn't sound that hard to short
 circuit the semi joins whenever, at execution time, there's no entries
 in the deferred trigger queue. It's a bit annoying to have to add code
 to all of nestloop/hashjoin/mergejoin to not check the outer tuple if
 there's no such entry. But I don't think it'll be too bad. That'd mean
 it can be used in prepared statements.


I'm starting to think about how this might be done, but I'm a bit confused
and I don't know if it's something you've overlooked or something I've
misunderstood.

I've not quite gotten my head around how we might stop the unneeded
relation from being the primary path to join the other inner relations,
i.e. what would stop the planner making a plan that hashed all the other
relations and planned to perform a sequence scan on the relation that we
have no need to scan (because the foreign key tells us the join is
pointless). If we were not to use that relation then we'd just have a
bunch of hash tables with no path to join them up. If we did anything to
force the planner into creating a plan that would suit skipping relations,
then we could possibly be throwing away the optimal plan. Right?

Regards

David Rowley


Re: [HACKERS] Patch to support SEMI and ANTI join removal

2014-09-30 Thread Andres Freund
On 2014-09-30 23:25:45 +1300, David Rowley wrote:
 On Tue, Sep 30, 2014 at 12:42 AM, Andres Freund and...@2ndquadrant.com
 wrote:
 
  On 2014-09-29 22:42:57 +1300, David Rowley wrote:
 
   I've made a change to the patch locally to ignore foreign
   keys that are marked as deferrable.
 
  I have serious doubts about the general usefulness if this is only
  going to be useable in a very restricted set of circumstances (only one
  time plans, no deferrable keys). I think it'd be awesome to have the
  capability, but I don't think it's ok to restrict it that much.
 
 
 I had a look to see what Oracle does in this situation and I was quite
 shocked to see that they're blatantly just ignoring the fact that the
 foreign key is being deferred. I tested by deferring the foreign key in a
 transaction then updating the referenced record and I see that Oracle just
  returns the wrong results, as they're just blindly removing the join. So it
 appears that they've not solved this one very well.

Ick. I'm pretty strongly against going that way.

  To me that means you can't make the decision at plan time, but need to
  move it to execution time. It really doesn't sound that hard to short
  circuit the semi joins whenever, at execution time, there's no entries
  in the deferred trigger queue. It's a bit annoying to have to add code
  to all of nestloop/hashjoin/mergejoin to not check the outer tuple if
  there's no such entry. But I don't think it'll be too bad. That'd mean
  it can be used in prepared statements.
 
 
 I'm starting to think about how this might be done, but I'm a bit confused
 and I don't know if it's something you've overlooked or something I've
 misunderstood.
 
 I've not quite gotten my head around how we might stop the unneeded
 relation from being the primary path to join the other inner relations,
 i.e. what would stop the planner making a plan that hashed all the other
 relations and planned to perform a sequence scan on the relation that we
 have no need to scan (because the foreign key tells us the join is
  pointless). If we were not to use that relation then we'd just have a
 bunch of hash tables with no path to join them up. If we did anything to
 force the planner into creating a plan that would suit skipping relations,
 then we could possibly be throwing away the optimal plan. Right?

I'm not 100% sure I understand your problem description, but let me
describe how I think this would work. During planning, you'd emit exactly
the same plan as you would today, with two exceptions:
a) When costing a node where one side of a join is very likely to be
   removable, you'd cost it nearly as if there wasn't a join.
b) The planner would attach some sort of 'one time join qual' to the
   'likely removable' join nodes. If, during executor init, that qual
   returns false, simply don't perform the join. Just check the inner
   relation, but entirely skip the outer relation.

With regard to your comment hash tables that aren't joined up: Currently
hash tables aren't built if they're not used. I.e. it's not
ExecInitHash() that does the hashing, but they're generally only built
when needed. E.g. nodeHashJoin.c:ExecHashJoin() only calls
MultiExecProcNode() when in the HJ_BUILD_HASHTABLE state - which it is
only in initially, and sometimes after rescans.
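A toy model of the execution-time shortcut being described (all names, and the `deferred_queue_empty` hook, are hypothetical; the real change would live in the executor's join nodes):

```python
def fk_semi_join(referencing, referenced, fkey, pkey, deferred_queue_empty):
    """Toy hash join guarded by a 'one-time qual' checked once at
    "executor init". If a validated foreign key guarantees that every
    referencing row has a match, and no deferred trigger events are
    pending, skip the join entirely and never build the hash table
    (mirroring how the hash is only built lazily in HJ_BUILD_HASHTABLE)."""
    if deferred_queue_empty():
        # Shortcut: source everything from the referencing side.
        yield from referencing
        return
    # Otherwise, perform the join normally.
    keys = {pkey(row) for row in referenced}
    for row in referencing:
        if fkey(row) in keys:
            yield row
```

When the FK actually holds, both paths return the same rows; the shortcut path never scans the referenced relation or builds the hash table.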

Does that clear things up or have I completely missed your angle?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] Patch to support SEMI and ANTI join removal

2014-09-30 Thread Andres Freund
On 2014-10-01 01:03:35 +1300, David Rowley wrote:
 On Wed, Oct 1, 2014 at 12:01 AM, Andres Freund and...@2ndquadrant.com
 wrote:
 
  On 2014-09-30 23:25:45 +1300, David Rowley wrote:
  
   I've not quite gotten my head around how we might stop the unneeded
   relation from being the primary path to join the other inner relations,
   i.e. what would stop the planner making a plan that hashed all the other
   relations and planned to perform a sequence scan on the relation that we
   have no need to scan (because the foreign key tells us the join is
    pointless). If we were not to use that relation then we'd just have a
   bunch of hash tables with no path to join them up. If we did anything to
   force the planner into creating a plan that would suit skipping
  relations,
   then we could possibly be throwing away the optimal plan. Right?
 
  I'm not 100% sure I understand your problem description, but let me
  describe how I think this would work. During planning, you'd emit the
  exactly same plan as you'd today, with two exceptions:
  a) When costing a node where one side of a join is very likely to be
 removable, you'd cost it nearly as if there wasn't a join.
 
 
 Ok given the tables:
 create table t1 (x int primary key);
 create table t2 (y int primary key);
 
 suppose the planner came up with something like:
 
 test=# explain (costs off) select t2.* from t1 inner join t2 on t1.x=t2.y;
  QUERY PLAN
 ------------
  Hash Join
    Hash Cond: (t1.x = t2.y)
    ->  Seq Scan on t1
    ->  Hash
          ->  Seq Scan on t2
 
 If we had a foreign key...
 
 alter table t2 add constraint t2_y_fkey foreign key (y) references t1 (x);
 
 ...the join to t1 could possibly be ignored by the executor... but
 there's a problem as the plan states we're going to seqscan then hash that
 relation, then seqscan t1 with a hash lookup on each of t1's rows. In this
 case how would the executor skip the scan on t1? I can see how this might
 work if it was t2 that we were removing, as we'd just skip the hash lookup
 part in the hash join node.

Hm, right. But that doesn't seem like a fatal problem to me. The planner
knows about t1/t2 and Seq(t1), Seq(t2), not just Hash(Seq(t2)). So it
can tell the HashJoin node that when the 'shortcut' qualifier is true,
it should source everything from Seq(t2). Since the sequence scan
doesn't care about the node on top of it, that doesn't seem overly
dramatic?
Obviously reality makes this a bit more complicated...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Kevin Grittner
Peter Geoghegan p...@heroku.com wrote:
 On Mon, Sep 29, 2014 at 3:08 PM, Kevin Grittner kgri...@ymail.com wrote:
 Well, unless we abandon transactional semantics for other MERGE
 statements, we should have a way that UPSERT logic continues to
 work if you don't match a suitable index; it will just be slower --
 potentially a lot slower, but that's what indexes are for.

 I want an implementation that doesn't have unique violations,
 unprincipled deadlocks, or serialization failures at READ COMMITTED. I
 want it because that's what the majority of users actually want. It
 requires no theoretical justification.

Sure.  I'm not suggesting otherwise.

 I don't think we need a separate statement type for the one we
 do well, because I don't think we should do the other one
 without proper transactional semantics.

 That seems like a very impractical attitude. I cannot simulate what
 I've been doing with unique indexes without taking an exclusive table
 lock. That is a major footgun, so it isn't going to happen.

There are certainly other ways to do it, although they require more
work.  As far as UPSERT goes, I agree that we should require such
an index, at least for the initial implementation and into the
foreseeable future.  What I'm saying is that if we implement it
using the standard MERGE syntax, then if the features of MERGE are
extended it will continue to work even in the absence of such an
index.  The index becomes a way of optimizing access rather than
defining what access is allowed.

At the risk of pushing people away from this POV, I'll point out
that this is somewhat similar to what we do for unlogged bulk loads
-- if all the conditions for doing it the fast way are present, we
do it the fast way; otherwise it still works, but slower.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pgcrypto: PGP armor headers

2014-09-30 Thread Heikki Linnakangas

On 09/29/2014 05:38 PM, Marko Tiikkaja wrote:

On 9/29/14 3:02 PM, Heikki Linnakangas wrote:

Are there any real-life examples or tools out there that generate armor
with duplicate header keys? RFC 4880 says:


 Note that some transport methods are sensitive to line length.  While
 there is a limit of 76 characters for the Radix-64 data (Section
 6.3), there is no limit to the length of Armor Headers.  Care should
 be taken that the Armor Headers are short enough to survive
 transport.  One way to do this is to repeat an Armor Header key
 multiple times with different values for each so that no one line is
 overly long.


Does anyone do that in practice? Is there any precedent for
concatenating the values in other tools that read armor headers?


Maybe I just suck at $SEARCH_ENGINE, but extracting armor headers
programmatically doesn't seem to be very popular.  I could only find one
example, which returned the last instance of the key.  But that seemed
to be more an accident than anything else; it wasn't documented and the
source code didn't say anything about it.  I also think that's the worst
behaviour.  If we can't agree on concatenation, I'd rather see an error.


May I ask you why you wrote this patch? What are you doing with the headers?

- Heikki





Re: [HACKERS] pgcrypto: PGP armor headers

2014-09-30 Thread Marko Tiikkaja

On 9/30/14 4:37 PM, Heikki Linnakangas wrote:

On 09/29/2014 05:38 PM, Marko Tiikkaja wrote:

Maybe I just suck at $SEARCH_ENGINE, but extracting armor headers
programmatically doesn't seem to be very popular.  I could only find one
example, which returned the last instance of the key.  But that seemed
to be more an accident than anything else; it wasn't documented and the
source code didn't say anything about it.  I also think that's the worst
behaviour.  If we can't agree on concatenation, I'd rather see an error.


May I ask you why you wrote this patch? What are you doing with the headers?


We're sending arbitrary messages between systems over HTTP(S), and a 
special header is used to tell the recipient system what type of message 
it is.  The message types are specific to the domain, but you can think 
of them as roughly equivalent to MIME types.


If what you're trying to get a sense of is why I'd prefer to see 
concatenation, I can't really help you.  For our use case (and perhaps 
for everyone else as well) it would actually make more sense to throw an 
error if pgp_armor_header() is used on a key which appears more than 
once.  The concatenation behaviour was an attempt at a one-size-fits-all
interface, but now that we're going to also have a
pgp_armor_headers() function for users to implement the behaviour they 
want themselves, there's no real reason to try and guess what everyone 
wants.  I think I'd prefer to see an ERROR in this case now.
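As a sketch of that error-on-duplicate behaviour (a hypothetical helper, not pgcrypto's actual implementation): armor headers sit between the BEGIN line and the first blank line, so extraction plus a duplicate check is short:

```python
def armor_headers_strict(armored):
    """Extract "Key: Value" armor headers from an ASCII-armored block,
    raising on a duplicated key instead of concatenating."""
    headers = {}
    lines = iter(armored.splitlines())
    for line in lines:
        if line.startswith('-----BEGIN PGP'):
            break
    for line in lines:
        if not line.strip():
            break  # a blank line ends the armor headers
        key, _, value = line.partition(': ')
        if key in headers:
            raise ValueError('duplicate armor header key: %s' % key)
        headers[key] = value
    return headers
```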



.marko




Re: [HACKERS] Collations and Replication; Next Steps

2014-09-30 Thread Bruce Momjian
On Wed, Sep 17, 2014 at 01:07:56PM +, Matthew Kelly wrote:
  * Unless you keep _all_ of your clusters on the same OS, machines
  from your database spare pool probably won't be the right OS when you
  add them to the cluster because a member failed.

There has been discussion about having master/streaming slaves use the
same OS version, but a simple OS update of an existing master can break
indexes too.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB                           http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] pgcrypto: PGP armor headers

2014-09-30 Thread Heikki Linnakangas

On 09/30/2014 05:45 PM, Marko Tiikkaja wrote:

On 9/30/14 4:37 PM, Heikki Linnakangas wrote:

On 09/29/2014 05:38 PM, Marko Tiikkaja wrote:

Maybe I just suck at $SEARCH_ENGINE, but extracting armor headers
programmatically doesn't seem to be very popular.  I could only find one
example, which returned the last instance of the key.  But that seemed
to be more an accident than anything else; it wasn't documented and the
source code didn't say anything about it.  I also think that's the worst
behaviour.  If we can't agree on concatenation, I'd rather see an error.


May I ask you why you wrote this patch? What are you doing with the headers?


We're sending arbitrary messages between systems over HTTP(S), and a
special header is used to tell the recipient system what type of message
it is.  The message types are specific to the domain, but you can think
of them to be roughly equivalent to MIME types.


Ok. How quaint. :-)


If what you're trying to get a sense of is why I'd prefer to see
concatenation, I can't really help you.  For our use case (and perhaps
for everyone else as well) it would actually make more sense to throw an
error if pgp_armor_header() is used on a key which appears more than
once.  The concatenation behaviour was an attempt at a "one size fits
all" interface, but now that we're going to also have a
pgp_armor_headers() function for users to implement the behaviour they
want themselves, there's no real reason to try and guess what everyone
wants.  I think I'd prefer to see an ERROR in this case now.


I'm actually now leaning towards providing just a single function, 
pgp_armor_headers(text, key OUT text, value OUT text), which returns all 
the keys and values. That gives maximum flexibility, and leaves it up to 
the user to decide what to do with duplicate keys. It's pretty easy to 
use that to extract just a single header, too:


postgres=# select * FROM pgp_armor_headers('
-BEGIN PGP MESSAGE-
foo: baar
foo: more foo
singlekey: fsdfsd

em9va2E=
=
-END PGP MESSAGE-
') where key = 'singlekey';
    key    | value
-----------+--------
 singlekey | fsdfsd
(1 row)

And if you want to concatenate possible duplicates:

postgres=# select string_agg(value, ' ') FROM pgp_armor_headers('
-BEGIN PGP MESSAGE-
foo: baar
foo: more foo
singlekey: fsdfsd

em9va2E=
=
-END PGP MESSAGE-
') where key = 'foo';
  string_agg
---------------
 baar more foo
(1 row)
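And if the desired behaviour is an error on duplicate keys, the
set-returning form makes that easy to build on top of as well. A sketch,
using the pgp_armor_headers() signature proposed in this thread and
relying on PL/pgSQL's SELECT ... INTO STRICT, which raises an error
unless exactly one row is returned:

```sql
CREATE FUNCTION pgp_armor_header_strict(msg text, k text)
RETURNS text LANGUAGE plpgsql AS $$
DECLARE
    v text;
BEGIN
    -- INTO STRICT raises NO_DATA_FOUND if the key is absent and
    -- TOO_MANY_ROWS if it appears more than once.
    SELECT value INTO STRICT v
      FROM pgp_armor_headers(msg)
     WHERE key = k;
    RETURN v;
END
$$;
```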

What do you think? Attached patch implements that, but the docs and 
regression tests now need adjustment.


- Heikki




Re: [HACKERS] pg_dump bug in 9.4beta2 and HEAD

2014-09-30 Thread Alvaro Herrera

I have pushed this fix, except that instead of parsing the OID from the
dropStmt as in your patch, I used te->catalogId.oid, which is much
simpler.

I tested this by pg_restoring to 8.4 (which doesn't have
pg_largeobject_metadata); there is no error raised:

LOG:  sentencia: SELECT CASE WHEN EXISTS(SELECT 1 FROM 
pg_catalog.pg_largeobject WHERE loid = '43748') THEN 
pg_catalog.lo_unlink('43748') END;

In 9.0 the command is the new style:

LOG:  sentencia: SELECT pg_catalog.lo_unlink(oid) FROM 
pg_catalog.pg_largeobject_metadata WHERE oid = '43748';

So it's all fine.  I guess it's fortunate that we already had the
DropBlobIfExists() function.

Now a further problem I notice is all the *other* commands for which we
inject the IF EXISTS clause; there is no support for those in older
servers, so they throw errors if --if-exists is given in the pg_restore
line.  I think we can just say that --if-exists is not supported for
older servers; as Heikki said, we don't promise that pg_dump is
compatible with older servers anyway.  In my test database, several
commands errored out when seeing the EXTENSION in CREATE EXTENSION IF EXISTS.
So we're okay now.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Simon Riggs
On 29 September 2014 18:59, Peter Geoghegan p...@heroku.com wrote:
 On Mon, Sep 29, 2014 at 7:21 AM, Simon Riggs si...@2ndquadrant.com wrote:
 If you were an ORM developer reading the PostgreSQL Release Notes for
 9.5, which URL would you visit to see a complete description of the
 new feature, including how it works concurrently, locking and other
 aspects. How would you check whether some strange behaviour was a bug,
 or intentional?

 We don't do that with UPDATE, so why would we do it with this?

Because this is new, harder and non-standard, so there is no other
place to look. If you want to persuade us that MERGE has poorly
defined concurrency, so you have implemented a new command, the new
command had better have very well defined behaviour.

And because a reviewer asked for it?

For example, this patch for UPSERT doesn't support updatable views.
But I can't see anyone that didn't read the patch would know that.


 All of these were added. There are two new sets of isolation tests,
 one per variant of the new clause (IGNORE/UPDATE).

 When you say added, what do you mean? You posted one new doc patch,
 with no tests in it.

 I mean that there was a commit (not included with the documentation,
 but with the original patchset) with many tests. I don't know why
 you're suggesting that I don't have concurrency tests. There are
 isolation tests in that commit. There are also many regression tests.

I see the tests in earlier patches; I was observing there are no new ones.

There are no tests for the use of CONFLICTING() syntax
No tests for interaction with triggers, with regard to before triggers
changing values prior to conflict detection.

My hope was that the complex behaviour of multiple unique indexes
might be explained there. Forgive me, I didn't see it.



 No explanation of why the CONFLICTING() syntax differs from OLD./NEW.
 syntax used in triggers

 Why should it be the same?

 Because it would be a principled approach to do that.

 That is just an assertion. The MERGE syntax doesn't use that either.

MERGE allows "AS row", which then allows you to refer to row.x for
column x of the input.

Other people have independently commented the same thing.


 If we aren't going to use MERGE syntax, it would make sense to at
 least use the same terminology.

 e.g.
 INSERT 
 WHEN MATCHED
 UPDATE

 The concept of matched is identical between MERGE and UPSERT and it
 will be confusing to have two words for the same thing.

 I don't care if we change the spelling to WHEN MATCHED
 UPDATE/IGNORE. That seems fine. But MERGE is talking about a join,
 not the presence of a would-be duplicate violation.

I don't understand that comment.

 There seems to be a good reason not to use the MySQL syntax of ON
 DUPLICATE KEY UPDATE, which doesn't allow you to specify UPDATE
 operations other than a replace, so no deltas, e.g. SET a = a + x

 That isn't true, actually. It clearly does.

It does. Rather amusingly I misread the very unclear MySQL docs.
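For reference, the MySQL form does allow deltas; something along these
lines is accepted there (a sketch; table and column names are
illustrative):

```sql
-- MySQL: the UPDATE part can reference the existing row's columns and,
-- via VALUES(col), the value that would have been inserted.
INSERT INTO counters (id, hits) VALUES (1, 1)
    ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits);
```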


 Having said that, it would be much nicer to have a mode that allows
 you to just say the word UPDATE and have it copy the data into the
 correct columns, like MySQL does. That is very intuitive, even if it
 isn't very flexible.

 Multi-assignment updates (with or without CONFLICTING()) are supported, FWIW.

If I want the incoming row to overwrite the old row, it would be good
to have syntax to support that easily.

Why doesn't
INSERT INTO UNIQUE_TBL VALUES (1, 'a'), (2, 'b'), (2, 'b')  ON
CONFLICT UPDATE SET t = 'fails';
end up with this in the table?

1   a
2   fails

What happens with this?

BEGIN;
INSERT INTO UNIQUE_TBL VALUES (2, 'b')  ON CONFLICT UPDATE SET t = 'fails';
INSERT INTO UNIQUE_TBL VALUES (2, 'b')  ON CONFLICT UPDATE SET t = 'fails';
COMMIT;

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_dump bug in 9.4beta2 and HEAD

2014-09-30 Thread Pavel Stehule
2014-09-30 17:18 GMT+02:00 Alvaro Herrera alvhe...@2ndquadrant.com:


 I have pushed this fix, except that instead of parsing the OID from the
 dropStmt as in your patch, I used te->catalogId.oid, which is much
 simpler.


yes, it is much better



 I tested this by pg_restoring to 8.4 (which doesn't have
 pg_largeobject_metadata); there is no error raised:

 LOG:  sentencia: SELECT CASE WHEN EXISTS(SELECT 1 FROM
 pg_catalog.pg_largeobject WHERE loid = '43748') THEN
 pg_catalog.lo_unlink('43748') END;

 In 9.0 the command is the new style:

 LOG:  sentencia: SELECT pg_catalog.lo_unlink(oid) FROM
 pg_catalog.pg_largeobject_metadata WHERE oid = '43748';

 So it's all fine.  I guess it's fortunate that we already had the
 DropBlobIfExists() function.

 Now a further problem I notice is all the *other* commands for which we
 inject the IF EXISTS clause; there is no support for those in older
 servers, so they throw errors if --if-exists is given in the pg_restore
 line.  I think we can just say that --if-exists is not supported for
 older servers; as Heikki said, we don't promise that pg_dump is
 compatible with older servers anyway.  In my test database, several
 commands errored out when seeing the EXTENSION in CREATE EXTENSION IF
 EXISTS.
 So we're okay now.


great,

Thank you very much

Pavel




 --
 Álvaro Herrera                http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] open items for 9.4

2014-09-30 Thread Josh Berkus
On 09/30/2014 04:56 AM, Heikki Linnakangas wrote:
 There seems to be no decisive consensus here. I'm going to put my foot
 on the ground and go remove it, as I'm leaning towards that option, and
 we need to get the release out. But if someone objects loudly enough to
 actually write the documentation and commit it that way, I'm happy with
 that too.

I'm also in favor of removing it.  We really don't need more GUCs which
nobody has any clear idea how to change or why.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Peter Geoghegan
On Tue, Sep 30, 2014 at 8:30 AM, Simon Riggs si...@2ndquadrant.com wrote:
 No explanation of why the CONFLICTING() syntax differs from OLD./NEW.
 syntax used in triggers

 Why should it be the same?

 Because it would be a principled approach to do that.

 That is just an assertion. The MERGE syntax doesn't use that either.

 MERGE allows "AS row", which then allows you to refer to row.x for
 column x of the input.

It does, but that isn't what you suggested. You talked about the
OLD.*/NEW.* syntax.

 I don't care if we change the spelling to WHEN MATCHED
 UPDATE/IGNORE. That seems fine. But MERGE is talking about a join,
 not the presence of a would-be duplicate violation.

 I don't understand that comment.

I just mean that if you want to replace ON CONFLICT UPDATE with WHEN
MATCHED UPDATE - that little part of the grammar - that seems okay.

 Multi-assignment updates (with or without CONFLICTING()) are supported, FWIW.

 If I want the incoming row to overwrite the old row, it would be good
 to have syntax to support that easily.

Well, maybe I'll get around to that when things settle down. That's
clearly in the realm of nice to have, though.

 Why doesn't
 INSERT INTO UNIQUE_TBL VALUES (1, 'a'), (2, 'b'), (2, 'b')  ON
 CONFLICT UPDATE SET t = 'fails';
 end up with this in the table?

 1   a
 2   fails

A cardinality violation - just like MERGE. As with MERGE, the final
value of every row needs to be deterministic (within the command).

 What happens with this?

 BEGIN;
 INSERT INTO UNIQUE_TBL VALUES (2, 'b')  ON CONFLICT UPDATE SET t = 'fails';
 INSERT INTO UNIQUE_TBL VALUES (2, 'b')  ON CONFLICT UPDATE SET t = 'fails';
 COMMIT;

It works fine. No cardinality violation with two separate commands.
See the new ExecLockUpdateTuple() function within nodeModifyTable.c
for extensive discussion on how this is handled.

-- 
Peter Geoghegan




[HACKERS] Re: Valgrind warnings in master branch (Invalid read of size 8) originating within CreatePolicy()

2014-09-30 Thread Stephen Frost
Peter,

* Peter Geoghegan (p...@heroku.com) wrote:
 I see the following Valgrind warnings in a recent build of the master branch:
[...]
 This appears a few times, but always seems to occur with the same call stack.

Many thanks, I've worked out the issue (rsecpolname needs to be filled
in with the results of calling DirectFunctionCall1(namein,
CStringGetDatum()) instead, or we can end up with garbage past the \0
in that fixed-width NameData field).

Will fix, thanks again!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] open items for 9.4

2014-09-30 Thread Gregory Smith

On 9/29/14, 2:30 PM, Andres Freund wrote:

 Can we explain those reasons in the form of documentation?
Yes. Try and benchmark it. It'll be hardware and workload dependent.


I missed this whole thing, and eventually I have to circle back to it.  
I could do it this week.  Could you (or someone else familiar with the 
useful benchmarks) give me more detail on how to replicate one case 
where things should improve?  I can do that, and I have a lab full of 
hardware if it's easier to spot on one type of server. That exercise 
will probably lead to a useful opinion on the feature in its final form, 
any associated GUC, tunables, and necessary level of associated 
documentation in a day or two.


This is a pretty low bar, right?  Examples and documentation sufficient 
for just me to catch up is not asking for very much.  If it isn't 
possible to do that in a very short period of time, it would not bode 
well for broader consumption.


--
Greg Smith greg.sm...@crunchydatasolutions.com
Chief PostgreSQL Evangelist - http://crunchydatasolutions.com/




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Peter Geoghegan
On Tue, Sep 30, 2014 at 8:30 AM, Simon Riggs si...@2ndquadrant.com wrote:
 On 29 September 2014 18:59, Peter Geoghegan p...@heroku.com wrote:
 On Mon, Sep 29, 2014 at 7:21 AM, Simon Riggs si...@2ndquadrant.com wrote:
 If you were an ORM developer reading the PostgreSQL Release Notes for
 9.5, which URL would you visit to see a complete description of the
 new feature, including how it works concurrently, locking and other
 aspects. How would you check whether some strange behaviour was a bug,
 or intentional?

 We don't do that with UPDATE, so why would we do it with this?

 Because this is new, harder and non-standard, so there is no other
 place to look. If you want to persuade us that MERGE has poorly
 defined concurrency, so you have implemented a new command, the new
 command had better have very well defined behaviour.

I'm making a point about the structure of the docs here. The behavior
*is* documented, just not in the INSERT documentation, a situation
I've compared with how EvalPlanQual() isn't discussed in the
UPDATE/DELETE/SELECT FOR UPDATE docs. And EvalPlanQual() has some
pretty surprising corner-case behaviors.

That having been said, maybe I could have gone into more detail on the
consensus among unique indexes thing in another part of the
documentation, since that isn't separately covered (only the aspects
of when the predicate is evaluated in READ COMMITTED mode and other
things like that were covered).

 For example, this patch for UPSERT doesn't support updatable views.
 But I can't see anyone that didn't read the patch would know that.

By reading the CREATE VIEW docs. Maybe there could stand to be a
compatibility note in the main INSERT command, but I didn't want to do
that as long as things were up in the air. It might be the case that
we figure out good behavior for updatable views.

-- 
Peter Geoghegan




Re: [HACKERS] WITH CHECK and Column-Level Privileges

2014-09-30 Thread Dean Rasheed
On 30 September 2014 16:52, Stephen Frost sfr...@snowman.net wrote:
 * Dean Rasheed (dean.a.rash...@gmail.com) wrote:
 On 29 September 2014 16:46, Stephen Frost sfr...@snowman.net wrote:
  * Robert Haas (robertmh...@gmail.com) wrote:
  Well, I think that's an acceptable approach from the point of view of
  fixing the security exposure, but it's far from ideal.  Good error
  messages are important for usability.  I can live with this as a
  short-term fix, but in the long run I strongly believe we should try
  to do better.

 Yes agreed, error messages are important, and longer term it would be
 better to report on the specific columns the user has access to (for
 all constraints), rather than the all-or-nothing approach of the
 current patch.

 If the user only has column-level privileges on the table then I'm not
 really sure how useful the detail will be.


One of the main things that detail is useful for is identifying the
failing row in a multi-row update. In most real-world cases, I would
expect the column-level privileges to include the table's PK, so the
detail would meet that requirement. In fact the column-level
privileges would pretty much have to include sufficient columns to
usefully identify rows, otherwise updates wouldn't be practical.
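For example, a column-level grant that supports practical updates would
typically look something like this (table, column, and role names are
illustrative):

```sql
-- The PK column (id) must be selectable for the client to target rows,
-- even though only the payload column is updatable.
GRANT SELECT (id, status), UPDATE (status) ON orders TO app_user;

-- app_user can then run targeted updates, and any error detail naming
-- the id column would identify the failing row:
UPDATE orders SET status = 'shipped' WHERE id = 42;
```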

 However, for WCOs, I don't think the executor has the
 information it needs to work out how to do that because it doesn't
 even know which view the user updated, because it's not necessarily
 the same one as failed the WCO.

 That's certainly an issue also.

  It certainly wouldn't be hard to add the same check around the WITH
  OPTION case that's around my proposed solution for the other issues-
  just check for SELECT rights on the underlying table.

 I guess that would work well for RLS, since the user would typically
 have SELECT rights on the table they're updating, but I'm not
 convinced it would do much good for auto-updatable views.

 I've been focusing on the 9.4 and back-branches concern, but as for RLS,
 if we're going to try and include the detail in this case then I'd
 suggest we do so only if the user has SELECT rights and RLS is disabled
 on the relation.

Huh? If RLS is disabled, isn't the check option also disabled?

  Otherwise, we'd have to check that the row being
 returned actually passes the SELECT policy.  I'm already not really
 thrilled that we are changing error message results based on user
 permissions, and that seems like it'd be worse.


That's one of the things I never liked about allowing different RLS
policies for different commands --- the idea that the user can UPDATE
a row that they can't even SELECT just doesn't make sense to me.

  Another question
  is if we could/should limit this to the UPDATE case.  With the INSERT
  case, any columns not provided by the user would be filled out by
  defaults, which can likely be seen in the catalog, or the functions in
  the catalog for the defaults or for any triggers might be able to be
  run by the user executing the INSERT anyway to see what would have been
  used in the resulting row.  I'm not completely convinced there's no risk
  there though..
 

 I think it's conceivable that a trigger could populate a column hidden
 to the user with some confidential information, possibly pulled from
 another table that the user doesn't have access to, so the fix has to
 apply to INSERTs as well as UPDATEs.

 What do you think about returning just what the user provided in the
 first place in both of these cases..?  I'm not quite sure what that
 would look like in the UPDATE case but for INSERT (and COPY) it would be
 the subset of columns being inserted and the values originally provided.
 That may not be what the actual conflict is due to, as there could be
 before triggers changing things in the middle, or the conflict could be
 on default values, but at least the input row could be identified and
 there wouldn't be this information leak risk.  Not sure how difficult
 that would be to implement though.


I could see that working for INSERTs, but for UPDATEs I don't think
that would be very useful in practice, because the columns most likely
to be useful for identifying the failing row (e.g., key columns) are
also the columns least likely to have been updated.

Regards,
Dean




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Josh Berkus
On 09/30/2014 11:20 AM, Peter Geoghegan wrote:
  For example, this patch for UPSERT doesn't support updatable views.
  But I can't see anyone that didn't read the patch would know that.
 By reading the CREATE VIEW docs. Maybe there could stand to be a
 compatibility note in the main INSERT command, but I didn't want to do
 that as long as things were up in the air. It might be the case that
 we figure out good behavior for updatable views.

All of these things sound like good ideas for documentation
improvements, but hardly anything which should block the patch.  It has
documentation, more than we'd require for a lot of other patches, and
it's not like the 9.5 release is next month.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[HACKERS] Full_page_write is off in backup mode

2014-09-30 Thread searcher s
Hi,
I am using postgres 9.2.2.
The PostgreSQL documentation says full_page_writes is forcibly on after
executing the function pg_start_backup(). But on my server full_page_writes
is still off (not changed).
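Note that the forcing happens internally (a forcePageWrites flag in
shared memory), not by changing the GUC, so SHOW keeps reporting the
configured value. A sketch of what one would observe:

```sql
-- With full_page_writes = off in postgresql.conf:
SELECT pg_start_backup('test', true);

SHOW full_page_writes;
-- Still reports "off"; full-page images are nevertheless written to WAL
-- while the backup is in progress.
```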


[HACKERS] Allow format 0000-0000-0000 in postgresql MAC parser

2014-09-30 Thread Herwin Weststrate
Hello,

Some devices send the MAC address in RADIUS requests in the format
--. I've seen this with a 3com switch, but there may be
others. Currently, postgresql doesn't understand this format.

This patch adds an extra line to the macaddr parsing in postgres to
support this format as well. A unit test has been added.
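With the patch applied, the new grouping would be accepted alongside the
existing input formats, e.g. (a sketch):

```sql
SELECT '0800-2b01-0203'::macaddr;
-- Accepted with this patch; output uses the canonical form
-- 08:00:2b:01:02:03, same as the other supported spellings.
```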

Kind regards,

-- 
Herwin Weststrate
Quarantainenet BV
www.quarantainenet.nl
diff --git a/src/backend/utils/adt/mac.c b/src/backend/utils/adt/mac.c
index aa9993f..509315a 100644
--- a/src/backend/utils/adt/mac.c
+++ b/src/backend/utils/adt/mac.c
@@ -57,6 +57,9 @@ macaddr_in(PG_FUNCTION_ARGS)
 		count = sscanf(str, "%2x%2x.%2x%2x.%2x%2x%1s",
 			   &a, &b, &c, &d, &e, &f, junk);
 	if (count != 6)
+		count = sscanf(str, "%2x%2x-%2x%2x-%2x%2x%1s",
+			   &a, &b, &c, &d, &e, &f, junk);
+	if (count != 6)
 		count = sscanf(str, "%2x%2x%2x%2x%2x%2x%1s",
 			   &a, &b, &c, &d, &e, &f, junk);
 	if (count != 6)
diff --git a/src/test/regress/expected/macaddr.out b/src/test/regress/expected/macaddr.out
index 91edc5a..90e9b34 100644
--- a/src/test/regress/expected/macaddr.out
+++ b/src/test/regress/expected/macaddr.out
@@ -7,14 +7,15 @@ INSERT INTO macaddr_data VALUES (2, '08-00-2b-01-02-03');
 INSERT INTO macaddr_data VALUES (3, '08002b:010203');
 INSERT INTO macaddr_data VALUES (4, '08002b-010203');
 INSERT INTO macaddr_data VALUES (5, '0800.2b01.0203');
-INSERT INTO macaddr_data VALUES (6, '08002b010203');
-INSERT INTO macaddr_data VALUES (7, '0800:2b01:0203'); -- invalid
+INSERT INTO macaddr_data VALUES (6, '0800-2b01-0203');
+INSERT INTO macaddr_data VALUES (7, '08002b010203');
+INSERT INTO macaddr_data VALUES (8, '0800:2b01:0203'); -- invalid
ERROR:  invalid input syntax for type macaddr: "0800:2b01:0203"
-LINE 1: INSERT INTO macaddr_data VALUES (7, '0800:2b01:0203');
+LINE 1: INSERT INTO macaddr_data VALUES (8, '0800:2b01:0203');
 ^
-INSERT INTO macaddr_data VALUES (8, 'not even close'); -- invalid
+INSERT INTO macaddr_data VALUES (9, 'not even close'); -- invalid
ERROR:  invalid input syntax for type macaddr: "not even close"
-LINE 1: INSERT INTO macaddr_data VALUES (8, 'not even close');
+LINE 1: INSERT INTO macaddr_data VALUES (9, 'not even close');
 ^
 INSERT INTO macaddr_data VALUES (10, '08:00:2b:01:02:04');
 INSERT INTO macaddr_data VALUES (11, '08:00:2b:01:02:02');
@@ -30,12 +31,13 @@ SELECT * FROM macaddr_data;
   4 | 08:00:2b:01:02:03
   5 | 08:00:2b:01:02:03
   6 | 08:00:2b:01:02:03
+  7 | 08:00:2b:01:02:03
  10 | 08:00:2b:01:02:04
  11 | 08:00:2b:01:02:02
  12 | 08:00:2a:01:02:03
  13 | 08:00:2c:01:02:03
  14 | 08:00:2a:01:02:04
-(11 rows)
+(12 rows)
 
 CREATE INDEX macaddr_data_btree ON macaddr_data USING btree (b);
 CREATE INDEX macaddr_data_hash ON macaddr_data USING hash (b);
@@ -52,9 +54,10 @@ SELECT a, b, trunc(b) FROM macaddr_data ORDER BY 2, 1;
   4 | 08:00:2b:01:02:03 | 08:00:2b:00:00:00
   5 | 08:00:2b:01:02:03 | 08:00:2b:00:00:00
   6 | 08:00:2b:01:02:03 | 08:00:2b:00:00:00
+  7 | 08:00:2b:01:02:03 | 08:00:2b:00:00:00
  10 | 08:00:2b:01:02:04 | 08:00:2b:00:00:00
  13 | 08:00:2c:01:02:03 | 08:00:2c:00:00:00
-(11 rows)
+(12 rows)
 
SELECT b <  '08:00:2b:01:02:04' FROM macaddr_data WHERE a = 1; -- true
  ?column? 
@@ -113,12 +116,13 @@ SELECT ~b   FROM macaddr_data;
  f7:ff:d4:fe:fd:fc
  f7:ff:d4:fe:fd:fc
  f7:ff:d4:fe:fd:fc
+ f7:ff:d4:fe:fd:fc
  f7:ff:d4:fe:fd:fb
  f7:ff:d4:fe:fd:fd
  f7:ff:d5:fe:fd:fc
  f7:ff:d3:fe:fd:fc
  f7:ff:d5:fe:fd:fb
-(11 rows)
+(12 rows)
 
SELECT  b & '00:00:00:ff:ff:ff' FROM macaddr_data;
  ?column?  
@@ -129,12 +133,13 @@ SELECT  b & '00:00:00:ff:ff:ff' FROM macaddr_data;
  00:00:00:01:02:03
  00:00:00:01:02:03
  00:00:00:01:02:03
+ 00:00:00:01:02:03
  00:00:00:01:02:04
  00:00:00:01:02:02
  00:00:00:01:02:03
  00:00:00:01:02:03
  00:00:00:01:02:04
-(11 rows)
+(12 rows)
 
 SELECT  b | '01:02:03:04:05:06' FROM macaddr_data;
  ?column?  
@@ -145,11 +150,12 @@ SELECT  b | '01:02:03:04:05:06' FROM macaddr_data;
  09:02:2b:05:07:07
  09:02:2b:05:07:07
  09:02:2b:05:07:07
+ 09:02:2b:05:07:07
  09:02:2b:05:07:06
  09:02:2b:05:07:06
  09:02:2b:05:07:07
  09:02:2f:05:07:07
  09:02:2b:05:07:06
-(11 rows)
+(12 rows)
 
 DROP TABLE macaddr_data;
diff --git a/src/test/regress/sql/macaddr.sql b/src/test/regress/sql/macaddr.sql
index 1ccf501..7bad8f5 100644
--- a/src/test/regress/sql/macaddr.sql
+++ b/src/test/regress/sql/macaddr.sql
@@ -9,9 +9,10 @@ INSERT INTO macaddr_data VALUES (2, '08-00-2b-01-02-03');
 INSERT INTO macaddr_data VALUES (3, '08002b:010203');
 INSERT INTO macaddr_data VALUES (4, '08002b-010203');
 INSERT INTO macaddr_data VALUES (5, '0800.2b01.0203');
-INSERT INTO macaddr_data VALUES (6, '08002b010203');
-INSERT INTO macaddr_data VALUES (7, '0800:2b01:0203'); -- invalid
-INSERT INTO macaddr_data VALUES (8, 'not even close'); -- invalid
+INSERT INTO macaddr_data VALUES (6, '0800-2b01-0203');
+INSERT INTO macaddr_data VALUES (7, 

Re: [HACKERS] Time measurement format - more human readable

2014-09-30 Thread Bogdan Pilch
How about, the format of psql duration can be set via some ...
backslash command or commdn line switch? And the default of course
remains the current behavior?

bogdan
 Andres Freund and...@2ndquadrant.com writes:
  On 2014-09-28 20:32:30 -0400, Gregory Smith wrote:
  On 9/28/14, 7:49 AM, Bogdan Pilch wrote:
  I have created a small patch to postgres source (in particular the
  psql part of it) that modifies the way time spent executing the SQL
  commands is printed out.
 
  There are already a wide range of human readable time interval output
  formats available in the database; see the list at 
  http://www.postgresql.org/docs/current/static/datatype-datetime.html#INTERVAL-STYLE-OUTPUT-TABLE
 
  He's talking about psql's \timing...
 
 Indeed.  Still, it seems like this has more downside than upside.
 It seems likely to break some peoples' scripts, and where exactly
 is the groundswell of complaint that the existing format is
 unreadable?  TBH, I've not heard even one complaint about that
 before today.  On the other hand, the number of complaints we will
 get if we change the format is likely to be more than zero.
 
   regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Andres Freund
On 2014-09-30 11:49:21 -0700, Josh Berkus wrote:
 On 09/30/2014 11:20 AM, Peter Geoghegan wrote:
   For example, this patch for UPSERT doesn't support updatable views.
   But I can't see anyone that didn't read the patch would know that.
  By reading the CREATE VIEW docs. Maybe there could stand to be a
  compatibility note in the main INSERT command, but I didn't want to do
  that as long as things were up in the air. It might be the case that
  we figure out good behavior for updatable views.
 
 All of these things sound like good ideas for documentation
 improvements, but hardly anything which should block the patch.  It has
 documentation, more than we'd require for a lot of other patches, and
 it's not like the 9.5 release is next month.

What's blocking it is that (afaik) no committer agrees with the approach
taken to solve the concurrency problems. And several (Heikki, Robert,
me) have stated their dislike of the proposed approach.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Josh Berkus
On 09/30/2014 11:51 AM, Andres Freund wrote:
 All of these things sound like good ideas for documentation
  improvements, but hardly anything which should block the patch.  It has
  documentation, more than we'd require for a lot of other patches, and
  it's not like the 9.5 release is next month.
 What's blocking it is that (afaik) no committer agrees with the approach
 taken to solve the concurrency problems. And several (Heikki, Robert,
 me) have stated their dislike of the proposed approach.

If that's what's blocking it then fine.  But if we might change the
concurrency approach, then what's the point in quibbling about docs?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Josh Berkus
On 09/30/2014 07:15 AM, Kevin Grittner wrote:
 There are certainly other ways to do it, although they require more
 work.  As far as UPSERT goes, I agree that we should require such
 an index, at least for the initial implementation and into the
 foreseeable future.  What I'm saying is that if we implement it
 using the standard MERGE syntax, then if the features of MERGE are
 extended it will continue to work even in the absence of such an
 index.  The index becomes a way of optimizing access rather than
 defining what access is allowed.
 
 At the risk of pushing people away from this POV, I'll point out
 that this is somewhat similar to what we do for unlogged bulk loads
 -- if all the conditions for doing it the fast way are present, we
 do it the fast way; otherwise it still works, but slower.

Except that switching between fast/slow bulk loads affects *only* the
speed of loading, not the locking rules.  Having a statement silently
take a full table lock when we were expecting it to be concurrent
(because, for example, the index got rebuilt and someone forgot the
UNIQUE) violates POLA from my perspective.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Peter Geoghegan
On Tue, Sep 30, 2014 at 11:51 AM, Andres Freund and...@2ndquadrant.com wrote:
 What's blocking it is that (afaik) no committer agrees with the approach
 taken to solve the concurrency problems. And several (Heikki, Robert,
 me) have stated their dislike of the proposed approach.

Well, it depends on what you mean by approach to concurrency
problems. It's not as if a consensus has emerged in favor of another
approach, and if there is to be another approach, the details need to
be worked out ASAP. Even still, I would appreciate it if people could
review the patch on the assumption that those issues will be worked
out. After all, there are plenty of other parts to this that have
nothing to do with value locking - the entire top half, which has
significant subtleties (some involving concurrency) in its own right,
reasonably well encapsulated from value locking. A couple of weeks
ago, I felt good about the fact that it seemed time was on my side
9.5-wise, but maybe that isn't true. Working through the community
process for this patch is going to be very difficult.

I think everyone understands that there could be several ways of
implementing value locking. I really do think it's a well encapsulated
aspect of the patch, though, so even if you hate how I've implemented
value locking, please try and give feedback on everything else. Simon
wanted to start with the user-visible semantics, which makes sense,
but I see no reason to limit it to that.

-- 
Peter Geoghegan




Re: [HACKERS] WITH CHECK and Column-Level Privileges

2014-09-30 Thread Stephen Frost
* Dean Rasheed (dean.a.rash...@gmail.com) wrote:
 On 30 September 2014 16:52, Stephen Frost sfr...@snowman.net wrote:
  If the user only has column-level privileges on the table then I'm not
  really sure how useful the detail will be.
 
 One of the main things that detail is useful for is identifying the
 failing row in a multi-row update. In most real-world cases, I would
 expect the column-level privileges to include the table's PK, so the
 detail would meet that requirement. In fact the column-level
 privileges would pretty much have to include sufficient columns to
 usefully identify rows, otherwise updates wouldn't be practical.

That may or may not be true - the user needs sufficient information to
identify a row, but that doesn't mean they have access to all of the
columns that make up a unique constraint.  It's not uncommon to have a
surrogate key for identifying the rows and then an independent
uniqueness constraint on some natural key for the table, which the user
may not have access to.
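As an editorial illustration of the surrogate-key scenario Stephen describes, a minimal sketch (table, column, and role names here are hypothetical, not from the thread):

```sql
-- Hypothetical schema: surrogate PK plus a natural key the user can't read.
CREATE TABLE accounts (
    id      serial  PRIMARY KEY,                -- surrogate key
    ssn     text    NOT NULL UNIQUE,            -- natural key, restricted
    balance numeric NOT NULL DEFAULT 0 CHECK (balance >= 0)
);

-- app_user can identify rows by id and adjust balance,
-- but has no SELECT privilege on ssn.
GRANT SELECT (id, balance), UPDATE (balance) ON accounts TO app_user;

-- A constraint violation raised from app_user's multi-row update, e.g.
--   UPDATE accounts SET balance = balance - 100;
-- would, with the pre-patch behavior, report the whole failing row in
-- the error DETAIL, including the restricted ssn column -- the leak
-- under discussion.
```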

  I've been focusing on the 9.4 and back-branches concern, but as for RLS,
  if we're going to try and include the detail in this case then I'd
  suggest we do so only if the user has SELECT rights and RLS is disabled
  on the relation.
 
 Huh? If RLS is disabled, isn't the check option also disabled?

Not quite.  If RLS is disabled on the relation then the RLS policies
don't add to the with check option, but a view overtop of a RLS table
might have a with check option.  That's the situation which I was
getting at when it comes to the with-check option.  The other cases of
constraint violation which we're discussing here would need to be
handled also though, so it's not just the with-check case.

   Otherwise, we'd have to check that the row being
  returned actually passes the SELECT policy.  I'm already not really
  thrilled that we are changing error message results based on user
  permissions, and that seems like it'd be worse.
 
 That's one of the things I never liked about allowing different RLS
 policies for different commands --- the idea that the user can UPDATE
 a row that they can't even SELECT just doesn't make sense to me.

The reason for having the per-command RLS policies was that you might
want a different policy applied for different commands (ie: you can see
all rows, but can only update your row); the ability to define a policy
which allows you to UPDATE rows which are not visible to a normal SELECT
is also available through that but isn't really the reason for the
capability.  That said, I agree it isn't common to define policies that
way, but not unheard of either.

  What do you think about returning just what the user provided in the
  first place in both of these cases..?  I'm not quite sure what that
  would look like in the UPDATE case but for INSERT (and COPY) it would be
  the subset of columns being inserted and the values originally provided.
  That may not be what the actual conflict is due to, as there could be
  before triggers changing things in the middle, or the conflict could be
  on default values, but at least the input row could be identified and
  there wouldn't be this information leak risk.  Not sure how difficult
  that would be to implement though.
 
 I could see that working for INSERTs, but for UPDATEs I don't think
 that would be very useful in practice, because the columns most likely
 to be useful for identifying the failing row (e.g., key columns) are
 also the columns least likely to have been updated.

I'm not sure that I follow this- if you're not updating the key columns
then you're not likely to be having a conflict due to them...

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] WITH CHECK and Column-Level Privileges

2014-09-30 Thread Dean Rasheed
On 30 September 2014 20:17, Stephen Frost sfr...@snowman.net wrote:
 * Dean Rasheed (dean.a.rash...@gmail.com) wrote:
 On 30 September 2014 16:52, Stephen Frost sfr...@snowman.net wrote:
  If the user only has column-level privileges on the table then I'm not
  really sure how useful the detail will be.

 One of the main things that detail is useful for is identifying the
 failing row in a multi-row update. In most real-world cases, I would
 expect the column-level privileges to include the table's PK, so the
 detail would meet that requirement. In fact the column-level
 privileges would pretty much have to include sufficient columns to
 usefully identify rows, otherwise updates wouldn't be practical.

 That may or may not be true - the user needs sufficient information to
 identify a row, but that doesn't mean they have access to all of the
 columns that make up a unique constraint.  It's not uncommon to have a
 surrogate key for identifying the rows and then an independent
 uniqueness constraint on some natural key for the table, which the user
 may not have access to.


True, but then the surrogate key would be included in the error
details which would allow the failing row to be identified.


  What do you think about returning just what the user provided in the
  first place in both of these cases..?  I'm not quite sure what that
  would look like in the UPDATE case but for INSERT (and COPY) it would be
  the subset of columns being inserted and the values originally provided.
  That may not be what the actual conflict is due to, as there could be
  before triggers changing things in the middle, or the conflict could be
  on default values, but at least the input row could be identified and
  there wouldn't be this information leak risk.  Not sure how difficult
  that would be to implement though.

 I could see that working for INSERTs, but for UPDATEs I don't think
 that would be very useful in practice, because the columns most likely
 to be useful for identifying the failing row (e.g., key columns) are
 also the columns least likely to have been updated.

 I'm not sure that I follow this- if you're not updating the key columns
 then you're not likely to be having a conflict due to them...


The constraint violation could well be due to updating a non-key
column such as a column with a NOT NULL constraint on it, in which
case only including that column in the error detail wouldn't do much
good -- you'd want to include the key columns if possible.

Regards,
Dean




[HACKERS] libpq-dev: pg_config_manual.h redefines CACHE_LINE_SIZE

2014-09-30 Thread Christoph Berg
Hi,

there's a #define clash between pg_config_manual.h and FreeBSD's
/usr/include/machine-amd64/param.h which also defines CACHE_LINE_SIZE.

It's probably not really a PostgreSQL bug, but it seems easy enough to
rename that definition now as it's only present in 9.4+. It's only
used in one file, xlog.c: 375d8526f2900d0c377f44532f6d09ee06531f67

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/
---BeginMessage---
Package: libpq-dev
Version: CACHE_LINE_SIZE
Severity: normal

Hi,

pg_config_manual.h redefines CACHE_LINE_SIZE from sys/param.h on kfreebsd:

  /usr/include/postgresql/pg_config_manual.h:219:0: error: "CACHE_LINE_SIZE"
redefined [-Werror]
   #define CACHE_LINE_SIZE  128
   ^
  In file included from /usr/include/machine/param.h:8:0,
   from /usr/include/sys/kglue/sys/param.h:143,
   from /usr/include/sys/kern/param.h:1,
   from /usr/include/osreldate.h:1,
   from /usr/include/x86_64-kfreebsd-gnu/bits/param.h:36,
   from /usr/include/x86_64-kfreebsd-gnu/sys/param.h:31,
   from collectd.h:214,
   from postgresql.c:39:
  /usr/include/machine-amd64/param.h:97:0: note: this is the location of the 
previous definition
   #define CACHE_LINE_SIZE  (1 << CACHE_LINE_SHIFT)
   ^

This causes build-errors when using -Werror. I assume that this is a
rather unusual use-case, thus didn't mark this RC.

Currently, it causes FTBFSs for collectd on kfreebsd-*, see e.g.
https://buildd.debian.org/status/fetch.php?pkg=collectd&arch=kfreebsd-amd64&ver=5.4.1-3.1&stamp=1408492216
I hope, I'll be able to work around it using -Wp,-w there.

Cheers,
Sebastian

-- 
Sebastian tokkee Harl +++ GnuPG-ID: 0x8501C7FC +++ http://tokkee.org/

"Those who would give up Essential Liberty to purchase a little Temporary
Safety, deserve neither Liberty nor Safety." -- Benjamin Franklin



signature.asc
Description: Digital signature
---End Message---



Re: [HACKERS] Time measurement format - more human readable

2014-09-30 Thread Gavin Flower

Please don't top post, initial context is important, especially Tom's!  :-)
(see my reply below)

On 01/10/14 03:52, Bogdan Pilch wrote:

How about, the format of psql duration can be set via some ...
backslash command or command line switch? And the default of course
remains the current behavior?

bogdan

Andres Freund and...@2ndquadrant.com writes:

On 2014-09-28 20:32:30 -0400, Gregory Smith wrote:

On 9/28/14, 7:49 AM, Bogdan Pilch wrote:

I have created a small patch to postgres source (in particular the
psql part of it) that modifies the way time spent executing the SQL
commands is printed out.

There are already a wide range of human readable time interval output
formats available in the database; see the list at 
http://www.postgresql.org/docs/current/static/datatype-datetime.html#INTERVAL-STYLE-OUTPUT-TABLE

He's talking about psql's \timing...

Indeed.  Still, it seems like this has more downside than upside.
It seems likely to break some peoples' scripts, and where exactly
is the groundswell of complaint that the existing format is
unreadable?  TBH, I've not heard even one complaint about that
before today.  On the other hand, the number of complaints we will
get if we change the format is likely to be more than zero.

regards, tom lane



As a user, your suggestion is fine by me.


Cheers,
Gavin





Re: [HACKERS] libpq-dev: pg_config_manual.h redefines CACHE_LINE_SIZE

2014-09-30 Thread Peter Geoghegan
On Tue, Sep 30, 2014 at 12:57 PM, Christoph Berg m...@debian.org wrote:
 It's probably not really a PostgreSQL bug, but it seems easy enough to
 rename that definition now as it's only present in 9.4+. It's only
 used in one file, xlog.c: 375d8526f2900d0c377f44532f6d09ee06531f67


I agree. It should be renamed to PG_CACHE_LINE_SIZE.

-- 
Peter Geoghegan




Re: [HACKERS] libpq-dev: pg_config_manual.h redefines CACHE_LINE_SIZE

2014-09-30 Thread Andres Freund
On 2014-09-30 13:42:11 -0700, Peter Geoghegan wrote:
 On Tue, Sep 30, 2014 at 12:57 PM, Christoph Berg m...@debian.org wrote:
  It's probably not really a PostgreSQL bug, but it seems easy enough to
  rename that definition now as it's only present in 9.4+. It's only
  used in one file, xlog.c: 375d8526f2900d0c377f44532f6d09ee06531f67
 
 
 I agree. It should be renamed to PG_CACHE_LINE_SIZE.

Agreed. Unless somebody protests I'll do so tomorrow.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Andres Freund
On 2014-09-30 12:05:46 -0700, Peter Geoghegan wrote:
 On Tue, Sep 30, 2014 at 11:51 AM, Andres Freund and...@2ndquadrant.com 
 wrote:
  What's blocking it is that (afaik) no committer agrees with the approach
  taken to solve the concurrency problems. And several (Heikki, Robert,
  me) have stated their dislike of the proposed approach.
 
 Well, it depends on what you mean by approach to concurrency
 problems. It's not as if a consensus has emerged in favor of another
 approach, and if there is to be another approach, the details need to
 be worked out ASAP.

Well. People have given you outlines of approaches. And Heikki even gave
you a somewhat working prototype. I don't think you can fairly expect
more.

 Even still, I would appreciate it if people could
 review the patch on the assumption that those issues will be worked
 out.

Right now I don't really see the point. You've so far shown no
inclination to accept significant concerns about your approach. And
without an agreement about how to solve the concurrency issues the
feature is dead in the water. And thus time spent reviewing isn't well
spent.

I'm pretty sure I'm not the only one feeling that way at this point.

 A couple of weeks
 ago, I felt good about the fact that it seemed time was on my side
 9.5-wise, but maybe that isn't true. Working through the community
 process for this patch is going to be very difficult.

The community process involves accepting that your opinion isn't the
community's. Believe me, I learned that the hard way.

It's one thing to argue about the implementation of a feature for a week
or four. Or even insist that you're right in some implementation detail
local to your new code. But you've not moved one iota in the critical
parts that affect large parts of the system in half a year.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Fabrízio de Royes Mello
Hi all,

What's your thoughts about we implement IF NOT EXISTS for CREATE INDEX? As
it holds data (like sequences and tables) I think we can do that.

Regards,

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog: http://fabriziomello.github.io
 Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
 Github: http://github.com/fabriziomello


Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Kevin Grittner
Josh Berkus j...@agliodbs.com wrote:
 On 09/30/2014 07:15 AM, Kevin Grittner wrote:

 At the risk of pushing people away from this POV, I'll point out
 that this is somewhat similar to what we do for unlogged bulk loads
 -- if all the conditions for doing it the fast way are present, we
 do it the fast way; otherwise it still works, but slower.

 Except that switching between fast/slow bulk loads affects *only* the
 speed of loading, not the locking rules.  Having a statement silently
 take a full table lock when we were expecting it to be concurrent
 (because, for example, the index got rebuilt and someone forgot the
 UNIQUE) violates POLA from my perspective.

I would not think that an approach which took a full table lock to
implement the more general case would be accepted.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Tom Lane
Fabrízio de Royes Mello fabriziome...@gmail.com writes:
 What's your thoughts about we implement IF NOT EXISTS for CREATE INDEX?

It's got the same semantic problems as every other variant of CINE.

If there were a huge groundswell of demand for it, maybe we'd hold our
noses and do it anyway.  But I'm against doing it without that.

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Josh Berkus
On 09/30/2014 02:39 PM, Kevin Grittner wrote:
 Josh Berkus j...@agliodbs.com wrote:
 On 09/30/2014 07:15 AM, Kevin Grittner wrote:
 
 At the risk of pushing people away from this POV, I'll point out
 that this is somewhat similar to what we do for unlogged bulk loads
 -- if all the conditions for doing it the fast way are present, we
 do it the fast way; otherwise it still works, but slower.

 Except that switching between fast/slow bulk loads affects *only* the
 speed of loading, not the locking rules.  Having a statement silently
 take a full table lock when we were expecting it to be concurrent
 (because, for example, the index got rebuilt and someone forgot the
 UNIQUE) violates POLA from my perspective.
 
 I would not think that an approach which took a full table lock to
 implement the more general case would be accepted.

Why not?  There are certainly cases ... like bulk loading ... where
users would find it completely acceptable.  Imagine that you're merging
3 files into a single unlogged table before processing them into
finished data.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Kirk Roybal
 

Since PostgreSQL started down that road for so many other relations, I
think many people just expect this to happen as a logical extension. 

It certainly makes life a lot easier in combination with build
management systems. 

/kirk 

On 2014-09-30 16:43, Tom Lane wrote: 

 Fabrízio de Royes Mello fabriziome...@gmail.com writes:
 
 What's your thoughts about we implement IF NOT EXISTS for CREATE INDEX?
 
 It's got the same semantic problems as every other variant of CINE.
 
 If there were a huge groundswell of demand for it, maybe we'd hold our
 noses and do it anyway. But I'm against doing it without that.
 
 regards, tom lane

 

Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Kevin Grittner
Josh Berkus j...@agliodbs.com wrote:
 On 09/30/2014 02:39 PM, Kevin Grittner wrote:
 Josh Berkus j...@agliodbs.com wrote:
 On 09/30/2014 07:15 AM, Kevin Grittner wrote:

 At the risk of pushing people away from this POV, I'll point out
 that this is somewhat similar to what we do for unlogged bulk loads
 -- if all the conditions for doing it the fast way are present, we
 do it the fast way; otherwise it still works, but slower.

 Except that switching between fast/slow bulk loads affects *only* the
 speed of loading, not the locking rules.  Having a statement silently
 take a full table lock when we were expecting it to be concurrent
 (because, for example, the index got rebuilt and someone forgot the
 UNIQUE) violates POLA from my perspective.

 I would not think that an approach which took a full table lock to
 implement the more general case would be accepted.

 Why not?  There are certainly cases ... like bulk loading ... where
 users would find it completely acceptable.  Imagine that you're merging
 3 files into a single unlogged table before processing them into
 finished data.

So the expectation is that when we implement MERGE it will, by
default, take out an EXCLUSIVE lock for the entire target table for
the entire duration of the command?  I would have expected a bit
more finesse.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Josh Berkus
On 09/30/2014 02:51 PM, Kevin Grittner wrote:
 Josh Berkus j...@agliodbs.com wrote:
 On 09/30/2014 02:39 PM, Kevin Grittner wrote:
 Josh Berkus j...@agliodbs.com wrote:
 On 09/30/2014 07:15 AM, Kevin Grittner wrote:

 At the risk of pushing people away from this POV, I'll point out
 that this is somewhat similar to what we do for unlogged bulk loads
 -- if all the conditions for doing it the fast way are present, we
 do it the fast way; otherwise it still works, but slower.

 Except that switching between fast/slow bulk loads affects *only* the
 speed of loading, not the locking rules.  Having a statement silently
 take a full table lock when we were expecting it to be concurrent
 (because, for example, the index got rebuilt and someone forgot the
 UNIQUE) violates POLA from my perspective.

 I would not think that an approach which took a full table lock to
 implement the more general case would be accepted.

 Why not?  There are certainly cases ... like bulk loading ... where
 users would find it completely acceptable.  Imagine that you're merging
 3 files into a single unlogged table before processing them into
 finished data.
 
 So the expectation is that when we implement MERGE it will, by
 default, take out an EXCLUSIVE lock for the entire target table for
 the entire duration of the command?  I would have expected a bit
 more finesse.

I don't know that that is the *expectation*.  However, I personally
would find it *acceptable* if it meant that we could get efficient merge
semantics on other aspects of the syntax, since my primary use for MERGE
is bulk loading.

Regardless, I don't think there's any theoretical way to support UPSERT
without a unique constraint.  Therefore eventual support of this would
require a full table lock.  Therefore having it use the same command as
UPSERT with a unique constraint is a bit of a booby trap for users.
This is a lot like the "ADD COLUMN with a default rewrites the whole
table" booby trap which hundreds of our users complain about every
month.  We don't want to add more such unexpected consequences for users.
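For reference, a hedged sketch of the statement under discussion (per the thread subject; the exact syntax was still being debated at this point and table/column names here are illustrative):

```sql
-- The proposed UPSERT relies on a unique index to detect the conflict.
CREATE TABLE counters (
    key  text PRIMARY KEY,   -- the unique index that arbitrates conflicts
    hits bigint NOT NULL DEFAULT 0
);

INSERT INTO counters (key, hits)
VALUES ('home', 1)
ON CONFLICT UPDATE SET hits = counters.hits + 1;

-- Without a unique constraint on "key", no index can arbitrate the
-- conflict -- the case where a fallback would need a full-table lock
-- (or would have to accept uniqueness violations under concurrency).
```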

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Josh Berkus
On 09/30/2014 02:43 PM, Tom Lane wrote:
 Fabrízio de Royes Mello fabriziome...@gmail.com writes:
 What's your thoughts about we implement IF NOT EXISTS for CREATE INDEX?
 
 It's got the same semantic problems as every other variant of CINE.
 
 If there were a huge groundswell of demand for it, maybe we'd hold our
 noses and do it anyway.  But I'm against doing it without that.

This isn't the sort of thing there would ever be a clamor of support
for, because it's just not that visible of a feature.  It's more of a
regular annoyance for those who encounter it.  More importantly, adding
an IF NOT EXISTS to CREATE INDEX would allow complete idempotent "create
this bunch of tables" scripts, since now the create index statements
could be included.  This would be very nice for schema management tools.

I do think it should be name-based.  While an IF NOT EXISTS which
checked for a duplicate column declaration would be nice, there's a raft
of issues with implementing it that way.  Users I know are generally
just looking to avoid getting a transaction-halting error when they run
the same create index statement twice.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Andres Freund
On 2014-09-30 14:51:57 -0700, Kevin Grittner wrote:
 Josh Berkus j...@agliodbs.com wrote:
  On 09/30/2014 02:39 PM, Kevin Grittner wrote:
  Josh Berkus j...@agliodbs.com wrote:
  On 09/30/2014 07:15 AM, Kevin Grittner wrote:
 
  At the risk of pushing people away from this POV, I'll point out
  that this is somewhat similar to what we do for unlogged bulk loads
  -- if all the conditions for doing it the fast way are present, we
  do it the fast way; otherwise it still works, but slower.
 
  Except that switching between fast/slow bulk loads affects *only* the
  speed of loading, not the locking rules.  Having a statement silently
  take a full table lock when we were expecting it to be concurrent
  (because, for example, the index got rebuilt and someone forgot the
  UNIQUE) violates POLA from my perspective.
 
  I would not think that an approach which took a full table lock to
  implement the more general case would be accepted.
 
  Why not?  There are certainly cases ... like bulk loading ... where
  users would find it completely acceptable.  Imagine that you're merging
  3 files into a single unlogged table before processing them into
  finished data.
 
 So the expectation is that when we implement MERGE it will, by
 default, take out an EXCLUSIVE lock for the entire target table for
 the entire duration of the command?  I would have expected a bit
 more finesse.

I think it'd be acceptable. Alternatively we'll just accept that you can
get uniqueness violations under concurrency. In many cases that'll be
fine.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Peter Geoghegan
On Tue, Sep 30, 2014 at 3:01 PM, Andres Freund and...@2ndquadrant.com wrote:
 I think it'd be acceptable. Alternatively we'll just accept that you can
 get uniqueness violations under concurrency. In many cases that'll be
 fine.


I think living with unique violations is the right thing with MERGE, fwiw.

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Andres Freund
On 2014-09-30 14:57:43 -0700, Josh Berkus wrote:
 Regardless, I don't think there's any theoretical way to support UPSERT
 without a unique constraint.

You can do stuff like blocking predicate locking. But without indexes to
support it that gets awfully complicated and unfunny. I don't think we
want to go there. So essentially I agree with that statement.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Kirk Roybal
 

On 2014-09-30 17:01, Josh Berkus wrote: 

 On 09/30/2014 02:43 PM, Tom Lane wrote:
 Fabrízio de Royes Mello fabriziome...@gmail.com writes:
 What's your thoughts about we implement IF NOT EXISTS for CREATE INDEX?

 It's got the same semantic problems as every other variant of CINE.

 If there were a huge groundswell of demand for it, maybe we'd hold our
 noses and do it anyway. But I'm against doing it without that.

This isn't the sort of thing there would ever be a clamor of support
for, because it's just not that visible of a feature. It's more of a
regular annoyance for those who encounter it. More importantly, adding
an IF NOT EXISTS to CREATE INDEX would allow complete idempotent "create
this bunch of tables" scripts, since now the create index statements
could be included. This would be very nice for schema management tools.

I do think it should be name-based. While an IF NOT EXISTS which
checked for a duplicate column declaration would be nice, there's a raft
of issues with implementing it that way. Users I know are generally
just looking to avoid getting a transaction-halting error when they run
the same create index statement twice.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com [1]

I second this evaluation. Using build tools to manage schemas, there is
no expectation that the full index signature is checked. Any index of
the same name would be considered a collision for my purposes. 

It is much easier to show CINE to a developer than to explain how an
anonymous code block does the same thing. 
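The anonymous-block workaround Kirk alludes to looks roughly like this (index and table names are placeholders):

```sql
-- Today's substitute for CREATE INDEX IF NOT EXISTS: wrap a name-based
-- check in a DO block so a rerun doesn't abort the transaction.
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relname = 'my_table_col_idx'
          AND n.nspname = 'public'
    ) THEN
        CREATE INDEX my_table_col_idx ON my_table (col);
    END IF;
END
$$;
```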

/Kirk 

Links:
--
[1] http://pgexperts.com


Re: [HACKERS] Per table autovacuum vacuum cost limit behaviour strange

2014-09-30 Thread Alvaro Herrera
Robert Haas wrote:

 I favor option (a).   There's something to be said for your proposal
 in terms of logical consistency with what we have now, but to be
 honest I'm not sure it's the behavior anyone wants (I would welcome
 more feedback on what people actually want).  I think we should view
 an attempt to set a limit for a particular table as a way to control
 the rate at which that table is vacuumed - period.

After re-reading this whole thread one more time, I think I have come to
agree with you and Amit here, because not only is it simpler to
implement, but it is also simpler to document.  Per Greg Smith's opinion
elsewhere in the thread, it seems that for end users it doesn't make
sense to make the already complicated mechanism even more complicated.

So in essence what we're going to do is that the balance mechanism
considers only tables that don't have per-table configuration options;
for those that do, we will use the values configured there without any
changes.

I'll see about implementing this and making sure it finds its way to
9.4beta3.
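
The per-table configuration options in question are ordinary storage parameters. Under the behavior described above, a table configured like this (values are purely illustrative) would be excluded from the balancing mechanism and vacuumed at exactly the configured rate:

```sql
-- Illustrative settings; under the proposed behavior these values are
-- used as-is for this table, not rebalanced across autovacuum workers.
ALTER TABLE big_table SET (
    autovacuum_vacuum_cost_limit = 1000,
    autovacuum_vacuum_cost_delay = 10
);
```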

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] autovacuum scheduling starvation and frenzy

2014-09-30 Thread Alvaro Herrera
Alvaro Herrera wrote:

 The attached patch implements that.  I only tested it on HEAD, but
 AFAICS it applies cleanly to 9.4 and 9.3; fairly sure it won't apply to
 9.2.  Given the lack of complaints, I'm unsure about backpatching
 further back than 9.3 anyway.

FWIW my intention is to make sure this patch is in 9.4beta3.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:
 On 2014-09-30 14:57:43 -0700, Josh Berkus wrote:

 Regardless, I don't think there's any theoretical way to support
 UPSERT without a unique constraint.

 You can do stuff like blocking predicate locking. But without indexes to
 support it that gets awfully complicated and unfunny. I don't think we
 want to go there. So essentially I agree with that statement.

Well, as you seem to be saying, it's not too bad even with a
non-unique index if we wanted to do a little extra work; and there
are a lot of ways to potentially deal with it even without that.
Theoretically, the number of ways to do this is limited only by
time available to brainstorm.

That said, at no time have I advocated that we try to implement
UPSERT in this release with anything but a UNIQUE index.  The issue
I raised was whether a subset of the MERGE syntax should be used to
specify UPSERT rather than inventing our own syntax -- which
doesn't seem in any way incompatible with requiring a unique index to
match the expression.  Given subsequent discussion, perhaps we
could decorate it with something to indicate which manner of
concurrency handling is desired?  Techniques discussed so far are:

 - UPSERT style
 - Hold an EXCLUSIVE lock on the table
 - Allow native concurrency management

An alternative which seems to be on some people's minds is to use a
different command name for the first option (but why not keep the
rest of the standard syntax?) and to require an explicit LOCK TABLE
statement at the start of the transaction if you want the second
option.

My preference, after this discussion, would be to default to UPSERT
style if the appropriate conditions are met, and to default to the
third option otherwise.  If you want an exclusive lock, ask for it
with the LOCK TABLE statement.
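
The second option corresponds to the classic manual upsert pattern, which might look roughly like this (table and column names are invented; the exact lock mode is a choice, though it must at least conflict with itself):

```sql
-- Option 2 sketch: serialize concurrent upserts with an explicit lock.
-- SHARE ROW EXCLUSIVE conflicts with itself but still permits reads.
BEGIN;
LOCK TABLE counters IN SHARE ROW EXCLUSIVE MODE;
UPDATE counters SET n = n + 1 WHERE key = 'hits';
INSERT INTO counters (key, n)
SELECT 'hits', 1
WHERE NOT EXISTS (SELECT 1 FROM counters WHERE key = 'hits');
COMMIT;
```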

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Peter Geoghegan
On Tue, Sep 30, 2014 at 2:15 PM, Andres Freund and...@2ndquadrant.com wrote:
 Well. People have given you outlines of approaches. And Heikki even gave
 you a somewhat working prototype. I don't think you can fairly expect
 more.

I don't expect anything, really. I asked nicely - that's all. I don't
know why there is so much discussion of what I expect or don't expect.
Things don't work around here by everyone doing only strictly what
they're obligated to do. Everyone is strictly obligated to do nothing,
when you get right down to it.

 Even still, I would appreciate it if people could
 review the patch on the assumption that those issues will be worked
 out.

 Right now I don't really see the point. You've so far shown no
 inclination to accept significant concerns about your approach. And
 without an agreement about how to solve the concurrency issues the
 feature is dead in the water. And thus time spent reviewing isn't well
 spent.

 I'm pretty sure I'm not the only one feeling that way at this point.

I think that's *incredibly* unfair. There appears to be broad
acceptance of the problems around deadlocking as a result of my work
with Heikki. That was a major step forward. Now we all agree on the
parameters of the discussion around value locking, AFAICT. There is an
actual way forward, and not a total quagmire -- great. I had to dig my
heels in to win that much, and it wasn't easy. I accept that it
probably wasn't easy for other people either, and I am thankful for
the effort of other people, particularly Heikki, but also you.

 A couple of weeks
 ago, I felt good about the fact that it seemed time was on my side
 9.5-wise, but maybe that isn't true. Working through the community
 process for this patch is going to be very difficult.

 The community process involves accepting that your opinion isn't the
 community's. Believe me, I learned that the hard way.

The community doesn't have a worked-out opinion on this either way.
Arguably, what you and Simon want to do is closer to what I want to
do than what Heikki wants to do - you're still talking about adding
locks that are tied to AMs in a fairly fundamental way. But, FWIW, I'd
sooner take Heikki's approach than insert promise tuples into indexes
directly. I think that Heikki's approach is better.

In all honesty, I don't care who wins, as long as someone does and
we get the feature in shape. No one can win if all sides are not
realistic about the problems. The issues that I've called out about
what Heikki has suggested are quite significant issues. Can't we talk
about them? Or am I required to polish-up Heikki's approach, and
present it at a commitfest, only to have somebody point out the same
issues then? I am *not* nitpicking, and the issues are of fundamental
importance. Look at the issues I raise and you'll see that's the case.

My pointing out of these issues is not some artifice to win the
argument. I don't appreciate the insinuation that it is. I am
completely undeserving of that sort of mistrust. It's insulting. And
it's also a total misrepresentation to suggest it's me versus you,
Heikki, Robert, and Simon. Opinion is far more divided than you let
on, since what you and Simon suggest is far different to what Heikki
suggests. Let's figure out a way to reach agreement.

 It's one thing to argue about the implementation of a feature for a week
 or four. Or even insist that you're right in some implementation detail
 local to your new code. But you've not moved one iota in the critical
 parts that affect large parts of the system in half a year.

You're right. I haven't moved one bit on that. But, on the other hand,
I haven't doubled down on the approach either - I have done very
little on it, and have given it relatively little thought either way.
I preferred to focus my energies on the top half. Surely you'd agree
that that was the logical course of action to take over the last few
months. I don't know if you noticed, but I presented this whole new
revised version as "this is the thing that gives us the ability to
discuss the fundamental issue of value locking". So my suggestion was
that if you don't want to have that conversation, at least look at the
top half a bit.

-- 
Peter Geoghegan




Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Fabrízio de Royes Mello
On Tue, Sep 30, 2014 at 7:01 PM, Josh Berkus j...@agliodbs.com wrote:

 On 09/30/2014 02:43 PM, Tom Lane wrote:
  Fabrízio de Royes Mello fabriziome...@gmail.com writes:
  What's your thoughts about we implement IF NOT EXISTS for CREATE INDEX?
 
  It's got the same semantic problems as every other variant of CINE.
 
  If there were a huge groundswell of demand for it, maybe we'd hold our
  noses and do it anyway.  But I'm against doing it without that.

 This isn't the sort of thing there would ever be a clamor of support
 for, because it's just not that visible of a feature.  It's more of a
 regular annoyance for those who encounter it.  More importantly, adding
 an IF NOT EXISTS to CREATE INDEX would allow complete idempotent create
 this bunch of tables scripts, since now the create index statements
 could be included.  This would be very nice for schema management tools.

 I do think it should be name-based.  While an IF NOT EXISTS which
 checked for a duplicate column declartion would be nice, there's a raft
 of issues with implementing it that way.  Users I know are generally
 just looking to avoid getting a transaction-halting error when they run
 the same create index statement twice.


Here is the patch... it's name-based.

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog: http://fabriziomello.github.io
 Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
 Github: http://github.com/fabriziomello
diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml
index e469b17..7886729 100644
--- a/doc/src/sgml/ref/create_index.sgml
+++ b/doc/src/sgml/ref/create_index.sgml
@@ -21,7 +21,7 @@ PostgreSQL documentation
 
  refsynopsisdiv
 synopsis
-CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ <replaceable class="parameter">name</replaceable> ] ON <replaceable class="parameter">table_name</replaceable> [ USING <replaceable class="parameter">method</replaceable> ]
+CREATE [ IF NOT EXISTS ] [ UNIQUE ] INDEX [ CONCURRENTLY ] [ <replaceable class="parameter">name</replaceable> ] ON <replaceable class="parameter">table_name</replaceable> [ USING <replaceable class="parameter">method</replaceable> ]
     ( { <replaceable class="parameter">column_name</replaceable> | ( <replaceable class="parameter">expression</replaceable> ) } [ COLLATE <replaceable class="parameter">collation</replaceable> ] [ <replaceable class="parameter">opclass</replaceable> ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [, ...] )
     [ WITH ( <replaceable class="PARAMETER">storage_parameter</replaceable> = <replaceable class="PARAMETER">value</replaceable> [, ... ] ) ]
     [ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
@@ -99,6 +99,16 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ <replaceable class="parameter">name</
 
  <variablelist>
   <varlistentry>
+   <term><literal>IF NOT EXISTS</literal></term>
+   <listitem>
+    <para>
+     Do nothing (except issuing a notice) if an index with the same name
+     already exists.
+    </para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
    <term><literal>UNIQUE</literal></term>
    <listitem>
     <para>
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index ee10594..8905e30 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -697,7 +697,8 @@ index_create(Relation heapRelation,
 			 bool allow_system_table_mods,
 			 bool skip_build,
 			 bool concurrent,
-			 bool is_internal)
+			 bool is_internal,
+			 bool if_not_exists)
 {
 	Oid			heapRelationId = RelationGetRelid(heapRelation);
 	Relation	pg_class;
@@ -773,10 +774,22 @@ index_create(Relation heapRelation,
 		elog(ERROR, "shared relations must be placed in pg_global tablespace");
 
 	if (get_relname_relid(indexRelationName, namespaceId))
+	{
+		if (if_not_exists)
+		{
+			ereport(NOTICE,
+	(errcode(ERRCODE_DUPLICATE_TABLE),
+	 errmsg("relation \"%s\" already exists, skipping",
+			indexRelationName)));
+			heap_close(pg_class, RowExclusiveLock);
+			return InvalidOid;
+		}
+
 		ereport(ERROR,
 (errcode(ERRCODE_DUPLICATE_TABLE),
  errmsg("relation \"%s\" already exists",
 		indexRelationName)));
+	}
 
 	/*
 	 * construct tuple descriptor for index tuples
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index 160f006..5ef6dcc 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -342,7 +342,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
 rel->rd_rel->reltablespace,
  collationObjectId, classObjectId, coloptions, (Datum) 0,
  true, false, false, false,
- true, false, false, true);
+ true, false, false, true, false);
 
 	heap_close(toast_rel, NoLock);
 
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 8a1cb4b..a03773b 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -610,7 +610,14 @@ DefineIndex(Oid relationId,
 

Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 On 09/30/2014 02:43 PM, Tom Lane wrote:
 Fabrízio de Royes Mello fabriziome...@gmail.com writes:
 What's your thoughts about we implement IF NOT EXISTS for CREATE INDEX?

 It's got the same semantic problems as every other variant of CINE.

 I do think it should be name-based.

Name-based, eh?  Don't you recall that in modern practice, people
generally don't specify names for indexes at all?  They've usually
got system-generated names, which doesn't seem like a very cool thing
to have scripts depending on.

regards, tom lane
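
To illustrate the point about system-generated names: when no name is given, the server derives one from the table and column (or, for expression indexes, from the expression), e.g. (a sketch; the exact generated name also depends on collision handling):

```sql
CREATE INDEX ON tab1 (cola);
-- The server typically generates a name such as "tab1_cola_idx".

CREATE INDEX ON tab1 (lower(colb));
-- Expression indexes get generated names (e.g. "tab1_lower_idx")
-- that carry no trace of the full expression or opclasses, which is
-- why having scripts depend on them is fragile.
```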




Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Andres Freund
On 2014-09-30 18:47:24 -0400, Tom Lane wrote:
 Josh Berkus j...@agliodbs.com writes:
  On 09/30/2014 02:43 PM, Tom Lane wrote:
  Fabrízio de Royes Mello fabriziome...@gmail.com writes:
  What's your thoughts about we implement IF NOT EXISTS for CREATE INDEX?
 
  It's got the same semantic problems as every other variant of CINE.
 
  I do think it should be name-based.
 
 Name-based, eh?  Don't you recall that in modern practice, people
 generally don't specify names for indexes at all?  They've usually
 got system-generated names, which doesn't seem like a very cool thing
 to have scripts depending on.

Good point. I think it's fair enough to only allow CINE on named
indexes.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Josh Berkus
On 09/30/2014 03:53 PM, Andres Freund wrote:
 On 2014-09-30 18:47:24 -0400, Tom Lane wrote:
 Josh Berkus j...@agliodbs.com writes:
 On 09/30/2014 02:43 PM, Tom Lane wrote:
   Fabrízio de Royes Mello fabriziome...@gmail.com writes:
 What's your thoughts about we implement IF NOT EXISTS for CREATE INDEX?

 It's got the same semantic problems as every other variant of CINE.

 I do think it should be name-based.

 Name-based, eh?  Don't you recall that in modern practice, people
 generally don't specify names for indexes at all?  They've usually
 got system-generated names, which doesn't seem like a very cool thing
 to have scripts depending on.
 
 Good point. I think it's fair enough to only allow CINE on named
 indexes.

On the other hand, the way we form system-generated names is predictable,
so I think it would be perfectly OK to include them.  Desirable, in fact.

For example, if I did this:

CREATE INDEX ON tab1 (cola, colb);

CREATE INDEX IF NOT EXISTS ON tab1 (cola, colb);

I would expect to not end up with two indexes on those two particular
columns, and if we don't omit system-generated names, I won't.

Not that I'm a fan of omitting the name ...

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Andres Freund
On 2014-09-30 16:03:01 -0700, Josh Berkus wrote:
 On 09/30/2014 03:53 PM, Andres Freund wrote:
  On 2014-09-30 18:47:24 -0400, Tom Lane wrote:
  Josh Berkus j...@agliodbs.com writes:
  On 09/30/2014 02:43 PM, Tom Lane wrote:
   Fabrízio de Royes Mello fabriziome...@gmail.com writes:
  What's your thoughts about we implement IF NOT EXISTS for CREATE INDEX?
 
  It's got the same semantic problems as every other variant of CINE.
 
  I do think it should be name-based.
 
  Name-based, eh?  Don't you recall that in modern practice, people
  generally don't specify names for indexes at all?  They've usually
  got system-generated names, which doesn't seem like a very cool thing
  to have scripts depending on.
  
  Good point. I think it's fair enough to only allow CINE on named
  indexes.
 
 On the other hand, the way we form system-generated names is predicable,
 so I think it would be perfectly OK to include them.  Desirable, in fact.

It's not really that predictable. Think about expression indexes. They
also don't contain information about opclasses et al.

Seems like a pit of hairy semantics.

 Not that I'm a fan of omitting the name ...

Me neither.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Allow format 0000-0000-0000 in postgresql MAC parser

2014-09-30 Thread Michael Paquier
On Mon, Sep 29, 2014 at 6:30 PM, Herwin Weststrate
her...@quarantainenet.nl wrote:
 Some devices send the MAC address in RADIUS requests in the format
 --. I've seen this with a 3com switch, but there may be
 others. Currently, postgresql doesn't understand this format.

 This patch adds an extra line to the macaddr parsing in postgres to
 support this format as well. A unit test has been added.
You should register this patch in the next commit fest where it will
be fully reviewed and hopefully committed for 9.5:
https://commitfest.postgresql.org/action/commitfest_view?id=24

Looking at your patch, you should update the documentation as well,
the list of authorized outputs being clearly listed:
http://www.postgresql.org/docs/devel/static/datatype-net-types.html#DATATYPE-MACADDR
This simply consists of adding one line to doc/src/sgml/datatype.sgml.
Regards,
-- 
Michael
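
For context, these are the input formats the macaddr type already accepts per that documentation page, alongside the format the patch proposes to add:

```sql
-- Formats macaddr already accepts:
SELECT '08:00:2b:01:02:03'::macaddr;  -- colon-separated bytes
SELECT '08-00-2b-01-02-03'::macaddr;  -- hyphen-separated bytes
SELECT '08002b:010203'::macaddr;      -- two groups of six digits
SELECT '0800.2b01.0203'::macaddr;     -- Cisco-style dotted groups of four

-- The proposed patch would additionally accept hyphen-separated
-- groups of four, as sent by some RADIUS clients:
-- SELECT '0800-2b01-0203'::macaddr;
```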




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Simon Riggs
On 30 September 2014 19:49, Josh Berkus j...@agliodbs.com wrote:
 On 09/30/2014 11:20 AM, Peter Geoghegan wrote:
  For example, this patch for UPSERT doesn't support updatable views.
 But I can't see how anyone who didn't read the patch would know that.
 By reading the CREATE VIEW docs. Maybe there could stand to be a
 compatibility note in the main INSERT command, but I didn't want to do
 that as long as things were up in the air. It might be the case that
 we figure out good behavior for updatable views.

 All of these things sound like good ideas for documentation
 improvements, but hardly anything which should block the patch.  It has
 documentation, more than we'd require for a lot of other patches, and
 it's not like the 9.5 release is next month.

We won't get consensus simply by saying "Would you like a fast upsert
feature?", because everyone says "Yes" to that.

A clear description of the feature being added is necessary to agree
on its acceptance. When we implement a SQL Standard feature, we can just
look in the standard to see how it should work and compare. When we go
off-piste, we need more info to make sure we know what we are getting
as well as why we are not getting something from the Standard.

I have not suggested I would block the patch because it doesn't have
docs. I have pointed out that the lack of consensus about the patch is
because nobody knows what it contains, which others agreed with. My
request was, and is, a proposed mechanism to *unblock* a very
obviously stalled patch.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-30 Thread Peter Geoghegan
On Tue, Sep 30, 2014 at 4:28 PM, Simon Riggs si...@2ndquadrant.com wrote:
 A clear description of the feature being added is necessary to agree
 its acceptance. When we implement a SQL Standard feature, we can just
 look in the standard to see how it should work and compare. When we go
 off-piste, we need more info to make sure we know what we are getting
 as well as why we are not getting something from the Standard.

I think that's fair.

 I have not suggested I would block the patch because it doesn't have
 docs. I have pointed out that the lack of consensus about the patch is
 because nobody knows what it contains, which others agreed with. My
 request was, and is, a proposed mechanism to *unblock* a very
 obviously stalled patch.

Please keep asking questions - it isn't necessarily obvious to me
*what* isn't clear, because of my lack of perspective. That's a useful
role. It occurs to me now that I ought to have found a place to
document cardinality violations [1], but I didn't, for example.

[1] http://tracker.firebirdsql.org/browse/CORE-2274
-- 
Peter Geoghegan




Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Josh Berkus
On 09/30/2014 04:16 PM, Andres Freund wrote:
 On 2014-09-30 16:03:01 -0700, Josh Berkus wrote:
 On 09/30/2014 03:53 PM, Andres Freund wrote:
 Good point. I think it's fair enough to only allow CINE on named
 indexes.

 On the other hand, the way we form system-generated names is predicable,
 so I think it would be perfectly OK to include them.  Desirable, in fact.
 
 It's not really that predicable. Think about expression indexes. They
 also don't contain information about opclasses et all.
 
 Seems like pit of hairy semantics.
 
 Not that I'm a fan of omitting the name ...
 
 Me neither.

I'd be OK with a CINE which required you to name the index.

How does your patch work at present, Fabrizio?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Fabrízio de Royes Mello
On Tue, Sep 30, 2014 at 8:47 PM, Josh Berkus j...@agliodbs.com wrote:

 On 09/30/2014 04:16 PM, Andres Freund wrote:
  On 2014-09-30 16:03:01 -0700, Josh Berkus wrote:
  On 09/30/2014 03:53 PM, Andres Freund wrote:
  Good point. I think it's fair enough to only allow CINE on named
  indexes.
 
  On the other hand, the way we form system-generated names is
predicable,
  so I think it would be perfectly OK to include them.  Desirable, in
fact.
 
  It's not really that predicable. Think about expression indexes. They
  also don't contain information about opclasses et all.
 
  Seems like pit of hairy semantics.
 
  Not that I'm a fan of omitting the name ...
 
  Me neither.

 I'd be OK with a CINE which required you to name the index.

 How does your patch work at present, Fabrizio?


My patch only works if you name the index, because Postgres generates an
index name that doesn't exist.

I don't check for a name when IF NOT EXISTS is used, but I can add that check.

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog: http://fabriziomello.github.io
 Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
 Github: http://github.com/fabriziomello


Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Josh Berkus
On 09/30/2014 04:58 PM, Fabrízio de Royes Mello wrote:
 On Tue, Sep 30, 2014 at 8:47 PM, Josh Berkus j...@agliodbs.com wrote:

 On 09/30/2014 04:16 PM, Andres Freund wrote:
 On 2014-09-30 16:03:01 -0700, Josh Berkus wrote:
 On 09/30/2014 03:53 PM, Andres Freund wrote:
 Good point. I think it's fair enough to only allow CINE on named
 indexes.

 On the other hand, the way we form system-generated names is
 predicable,
 so I think it would be perfectly OK to include them.  Desirable, in
 fact.

 It's not really that predicable. Think about expression indexes. They
 also don't contain information about opclasses et all.

 Seems like pit of hairy semantics.

 Not that I'm a fan of omitting the name ...

 Me neither.

 I'd be OK with a CINE which required you to name the index.

 How does your patch work at present, Fabrizio?

 
  My patch only works if you name the index, because Postgres generates an
  index name that doesn't exist.
 
  I don't check for a name when IF NOT EXISTS is used, but I can add that check.

The consensus is that we don't want IF NOT EXISTS to work for
automatically generated index names.  For that reason, we'd want it to
error out if someone does this:

CREATE INDEX IF NOT EXISTS ON table(col);

My suggestion for the error message:

IF NOT EXISTS requires that you name the index.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[HACKERS] crash from pfree and brin index

2014-09-30 Thread Mark Wong
Hi Álvaro,

I have a stack trace from a crash to share, when querying a table with
a brin index.  Let me know what else you need in addition to the stack trace:

#0  0x0077b0eb in pfree (pointer=0x1dcf1a8) at mcxt.c:754
#1  0x0045fe4a in bringetbitmap (fcinfo=<optimized out>) at brin.c:398
#2  0x00761bb2 in FunctionCall2Coll (flinfo=<optimized out>, collation=<optimized out>, arg1=<optimized out>, arg2=<optimized out>) at fmgr.c:1324
#3  0x0049a558 in index_getbitmap (scan=0x1dc9420, bitmap=0x1dcba58) at indexam.c:651
#4  0x005aaf55 in MultiExecBitmapIndexScan (node=0x1dc9590) at nodeBitmapIndexscan.c:89
#5  0x005aa999 in BitmapHeapNext (node=0x1dc6a30) at nodeBitmapHeapscan.c:104
#6  0x005a35de in ExecScanFetch (recheckMtd=0x5aa500 <BitmapHeapRecheck>, accessMtd=0x5aa550 <BitmapHeapNext>, node=0x1dc6a30) at execScan.c:82
#7  ExecScan (node=0x1dc6a30, accessMtd=0x5aa550 <BitmapHeapNext>, recheckMtd=0x5aa500 <BitmapHeapRecheck>) at execScan.c:132
#8  0x0059c6d8 in ExecProcNode (node=0x1dc6a30) at execProcnode.c:414
#9  0x005b3651 in ExecNestLoop (node=0x1dc67c0) at nodeNestloop.c:123
#10 0x0059c658 in ExecProcNode (node=0x1dc67c0) at execProcnode.c:449
#11 0x005a86d2 in agg_fill_hash_table (aggstate=0x1dc6070) at nodeAgg.c:1349
#12 ExecAgg (node=0x1dc6070) at nodeAgg.c:
#13 0x0059c5f8 in ExecProcNode (node=0x1dc6070) at execProcnode.c:476
#14 0x005b56a9 in ExecSort (node=0x1dc5dd0) at nodeSort.c:103
#15 0x0059c618 in ExecProcNode (node=0x1dc5dd0) at execProcnode.c:468
#16 0x00599b22 in ExecutePlan (dest=0xb5b700, direction=<optimized out>, numberTuples=0, sendTuples=1 '\001', operation=CMD_SELECT, planstate=0x1dc5dd0, estate=0x1dc5c90) at execMain.c:1475
#17 standard_ExecutorRun (queryDesc=0x1dc4be8, direction=<optimized out>, count=0) at execMain.c:308
#18 0x005552a1 in ExplainOnePlan (plannedstmt=<optimized out>, into=0x0, es=0x7fff2a0aaa30, queryString=<optimized out>, params=0x0, planduration=0x7fff2a0aa9e0) at explain.c:481
#19 0x0061 in ExplainOneQuery (params=<optimized out>, queryString=<optimized out>, es=0x7fff2a0aaa30, into=0x0, query=<optimized out>) at explain.c:336
#20 ExplainOneQuery (query=<optimized out>, into=0x0, es=0x7fff2a0aaa30, queryString=<optimized out>, params=<optimized out>) at explain.c:308
#21 0x00555958 in ExplainQuery (stmt=0x1d243e0, queryString=0x1d5a660 "explain (analyze, buffers)\nselect o_orderpriority, count(*) as order_count\nfrom o2\nwhere o_orderdate >= date '1995-01-01'\nand o_orderdate < cast(date '1995-01-01' + interval '3 month' as date)\nand exi"..., params=0x0, dest=0x1daecf0) at explain.c:227
#22 0x006890c1 in standard_ProcessUtility (parsetree=0x1d243e0, queryString=0x1d5a660 "explain (analyze, buffers)\nselect o_orderpriority, count(*) as order_count\nfrom o2\nwhere o_orderdate >= date '1995-01-01'\nand o_orderdate < cast(date '1995-01-01' + interval '3 month' as date)\nand exi"..., context=<optimized out>, params=0x0, dest=<optimized out>, completionTag=<optimized out>) at utility.c:646
#23 0x006868b7 in PortalRunUtility (portal=0x1dacb80, utilityStmt=0x1d243e0, isTopLevel=1 '\001', dest=0x1daecf0, completionTag=0x7fff2a0aab50 "") at pquery.c:1187
#24 0x0068772d in FillPortalStore (portal=0x1dacb80, isTopLevel=1 '\001') at pquery.c:1061
#25 0x006880a0 in PortalRun (portal=0x1dacb80, count=9223372036854775807, isTopLevel=1 '\001', dest=0x1d24900, altdest=0x1d24900, completionTag=0x7fff2a0aaf20 "") at pquery.c:785
#26 0x0068461e in exec_simple_query (query_string=0x1d5a660 "explain (analyze, buffers)\nselect o_orderpriority, count(*) as order_count\nfrom o2\nwhere o_orderdate >= date '1995-01-01'\nand o_orderdate < cast(date '1995-01-01' + interval '3 month' as date)\nand exi"...) at postgres.c:1045
#27 PostgresMain (argc=<optimized out>, argv=<optimized out>, dbname=0x1d01880 "dbt3", username=<optimized out>) at postgres.c:4010
#28 0x006349ad in BackendRun (port=0x1d211e0) at postmaster.c:4112
#29 BackendStartup (port=0x1d211e0) at postmaster.c:3787
#30 ServerLoop () at postmaster.c:1566
#31 PostmasterMain (argc=<optimized out>, argv=<optimized out>) at postmaster.c:1219
#32 0x0045f98a in main (argc=3, argv=0x1d00aa0) at main.c:219

Regards,
Mark
-- 
Mark Wong
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Fabrízio de Royes Mello
On Tue, Sep 30, 2014 at 9:12 PM, Josh Berkus j...@agliodbs.com wrote:

 On 09/30/2014 04:58 PM, Fabrízio de Royes Mello wrote:
  On Tue, Sep 30, 2014 at 8:47 PM, Josh Berkus j...@agliodbs.com wrote:
 
  On 09/30/2014 04:16 PM, Andres Freund wrote:
  On 2014-09-30 16:03:01 -0700, Josh Berkus wrote:
  On 09/30/2014 03:53 PM, Andres Freund wrote:
  Good point. I think it's fair enough to only allow CINE on named
  indexes.
 
  On the other hand, the way we form system-generated names is
  predicable,
  so I think it would be perfectly OK to include them.  Desirable, in
  fact.
 
  It's not really that predicable. Think about expression indexes. They
  also don't contain information about opclasses et all.
 
  Seems like pit of hairy semantics.
 
  Not that I'm a fan of omitting the name ...
 
  Me neither.
 
  I'd be OK with a CINE which required you to name the index.
 
  How does your patch work at present, Fabrizio?
 
 
  My patch will work just if you name the index, because postgres
generates a
  index name that doesn't exists.
 
  I don't check to a name if we use IF NOT EXISTS, but I can add this
check.

 The consensus is that we don't want IF NOT EXISTS to work for
 automatically generated index names.  For that reason, we'd want it to
 error out if someone does this:

 CREATE INDEX IF NOT EXISTS ON table(col);

 My suggestion for the error message:

 IF NOT EXISTS requires that you name the index.


Done.

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog: http://fabriziomello.github.io
 Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
 Github: http://github.com/fabriziomello
diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml
index e469b17..7886729 100644
--- a/doc/src/sgml/ref/create_index.sgml
+++ b/doc/src/sgml/ref/create_index.sgml
@@ -21,7 +21,7 @@ PostgreSQL documentation
 
  <refsynopsisdiv>
 <synopsis>
-CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ <replaceable class="parameter">name</replaceable> ] ON <replaceable class="parameter">table_name</replaceable> [ USING <replaceable class="parameter">method</replaceable> ]
+CREATE [ IF NOT EXISTS ] [ UNIQUE ] INDEX [ CONCURRENTLY ] [ <replaceable class="parameter">name</replaceable> ] ON <replaceable class="parameter">table_name</replaceable> [ USING <replaceable class="parameter">method</replaceable> ]
     ( { <replaceable class="parameter">column_name</replaceable> | ( <replaceable class="parameter">expression</replaceable> ) } [ COLLATE <replaceable class="parameter">collation</replaceable> ] [ <replaceable class="parameter">opclass</replaceable> ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [, ...] )
     [ WITH ( <replaceable class="PARAMETER">storage_parameter</replaceable> = <replaceable class="PARAMETER">value</replaceable> [, ... ] ) ]
     [ TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> ]
@@ -99,6 +99,16 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ <replaceable class="parameter">name</
 
  <variablelist>
   <varlistentry>
+  <term><literal>IF NOT EXISTS</literal></term>
+  <listitem>
+   <para>
+    Do nothing (except issuing a notice) if an index with the same name
+    already exists.
+   </para>
+  </listitem>
+ </varlistentry>
+
+ <varlistentry>
   <term><literal>UNIQUE</literal></term>
   <listitem>
    <para>
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index ee10594..8905e30 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -697,7 +697,8 @@ index_create(Relation heapRelation,
 			 bool allow_system_table_mods,
 			 bool skip_build,
 			 bool concurrent,
-			 bool is_internal)
+			 bool is_internal,
+			 bool if_not_exists)
 {
 	Oid			heapRelationId = RelationGetRelid(heapRelation);
 	Relation	pg_class;
@@ -773,10 +774,22 @@ index_create(Relation heapRelation,
 		elog(ERROR, "shared relations must be placed in pg_global tablespace");
 
 	if (get_relname_relid(indexRelationName, namespaceId))
+	{
+		if (if_not_exists)
+		{
+			ereport(NOTICE,
+					(errcode(ERRCODE_DUPLICATE_TABLE),
+					 errmsg("relation \"%s\" already exists, skipping",
+							indexRelationName)));
+			heap_close(pg_class, RowExclusiveLock);
+			return InvalidOid;
+		}
+
 		ereport(ERROR,
 				(errcode(ERRCODE_DUPLICATE_TABLE),
 				 errmsg("relation \"%s\" already exists",
 						indexRelationName)));
+	}
 
 	/*
 	 * construct tuple descriptor for index tuples
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index 160f006..5ef6dcc 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -342,7 +342,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
  rel->rd_rel->reltablespace,
  collationObjectId, classObjectId, coloptions, (Datum) 0,
  true, false, false, false,
- true, false, false, true);
+ true, false, false, true, false);
 
 	heap_close(toast_rel, NoLock);
 
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
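
[Editorial sketch, not part of the patch: the intended behavior, with a hypothetical table t and index name t_col_idx, would look like this on a server with the patch applied.]

```sql
-- Hypothetical table and index names, for illustration only.
CREATE TABLE t (col integer);

CREATE INDEX t_col_idx ON t (col);

-- A second attempt with IF NOT EXISTS issues a NOTICE and does nothing,
-- per the new branch in index_create():
CREATE INDEX IF NOT EXISTS t_col_idx ON t (col);
-- NOTICE:  relation "t_col_idx" already exists, skipping

-- The unnamed form is rejected, per the consensus upthread:
CREATE INDEX IF NOT EXISTS ON t (col);
-- ERROR:  IF NOT EXISTS requires that you name the index.
```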

Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Michael Paquier
On Wed, Oct 1, 2014 at 10:03 AM, Fabrízio de Royes Mello 
fabriziome...@gmail.com wrote:

 Done.

You should consider adding that to the next commit fest.
-- 
Michael


Re: [HACKERS] CREATE IF NOT EXISTS INDEX

2014-09-30 Thread Fabrízio de Royes Mello
On Tue, Sep 30, 2014 at 10:22 PM, Michael Paquier michael.paqu...@gmail.com
wrote:

 On Wed, Oct 1, 2014 at 10:03 AM, Fabrízio de Royes Mello 
fabriziome...@gmail.com wrote:

 Done.

 You should consider adding that to the next commit fest.


Sure. Added [1]

Regards,

[1] https://commitfest.postgresql.org/action/patch_view?id=1584

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog: http://fabriziomello.github.io
 Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
 Github: http://github.com/fabriziomello