Re: [HACKERS] proposal: searching in array function - array_position

2015-03-19 Thread Pavel Stehule
2015-03-18 20:03 GMT+01:00 Alvaro Herrera alvhe...@2ndquadrant.com:

 Pavel Stehule wrote:
  2015-03-18 12:41 GMT+01:00 Marko Tiikkaja ma...@joh.to:

   I think this behavior is correct (there is no other possibility), but
   it is only a corner case for this functionality - and if you think it
   is better to disallow it, I am not against that. My main focus is the
   1D array case.
  
   A nonsensical answer for multi-dimensional arrays is worse than no
   answer at all.  I think raising an exception is better.
  
 
  I have no problem with it.

 Pushed after adding error checks there and fixing the docs to match.
 Please verify.


It looks good, thank you.

One small issue - it is not documented that multidimensional arrays are not
supported.

Regards

Pavel



 --
 Álvaro Herrera                http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] pg_dump quietly ignore missing tables - is it bug?

2015-03-19 Thread Pavel Stehule
2015-03-15 16:09 GMT+01:00 Tom Lane t...@sss.pgh.pa.us:

 Pavel Stehule pavel.steh...@gmail.com writes:
  Here is another variant, which I hope is better than the previous one. We
  can introduce a new long option --strict. With this option active, every
  pattern specified by the -t option has to identify exactly one table. It
  can be used for any other patterns that should exist - schemas, for
  example. An initial implementation is attached.

 I think this design is seriously broken.  If I have '-t foo*' the code
 should not prevent that from matching multiple tables.  What would the use
 case for such a restriction be?


The behavior is the same - only one real identifier is allowed.



 What would make sense to me is one or both of these ideas:

 * require a match for a wildcard-free -t switch

 * require at least one (not exactly one) match for a wildcarded -t
   switch.

 Neither of those is what you wrote, though.

 If we implemented the second one of these, it would have to be controlled
 by a new switch, because there are plausible use cases for wildcards that
 sometimes don't match anything (not to mention backwards compatibility).
 There might be a reasonable argument for the first one being the
 default behavior, though; I'm not sure if we could get away with that
 from a compatibility perspective.


Both of your variants make sense to me. We can implement these points
separately, and I see the first point as much more important than the
second, because there is a significant risk of a silently broken backup.

We could also keep a long option with the same functionality as now, for
somebody who needs to back up some explicitly specified tables only if they
exist. Maybe --table-if-exists??
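
To illustrate the two proposals from the command line (a sketch only; the
behavior described in the comments is what is being proposed here, not
anything pg_dump currently does):

    # wildcard-free -t switch: under the first idea this would fail
    # instead of silently producing a dump without the table
    pg_dump -t public.missing_table mydb

    # wildcarded -t switch: under the second idea (behind a new switch)
    # this would fail only if the pattern matches nothing
    pg_dump -t 'log_2015_*' mydb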

Is it acceptable for you?

Regards

Pavel









 regards, tom lane



[HACKERS] [PATCH] two-arg current_setting() with fallback

2015-03-19 Thread David Christensen
Apologies if this is a double-post.

Enclosed is a patch that creates a two-arg current_setting() function.  From 
the commit message:

The two-arg form of the current_setting() function will allow a
fallback value to be returned instead of throwing an error when an
unknown GUC is provided.  This would come in most useful when using
custom GUCs; e.g.:

  -- errors out if the 'foo.bar' setting is unset
  SELECT current_setting('foo.bar');

  -- returns current setting of foo.bar, or 'default' if not set
  SELECT current_setting('foo.bar', 'default')

This would save you having to wrap the use of the function in an
exception block just to catch and utilize a default setting value
within a function.
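
For context, a rough sketch of the exception-block workaround this is meant
to replace (the wrapper function name is purely illustrative and not part of
the patch; it assumes the unset-GUC error is reported as SQLSTATE 42704,
undefined_object):

    -- hypothetical helper: trap the "unrecognized configuration parameter"
    -- error and return a caller-supplied fallback instead
    CREATE OR REPLACE FUNCTION current_setting_or(setting_name text, fallback text)
    RETURNS text LANGUAGE plpgsql STABLE AS $$
    BEGIN
        RETURN current_setting(setting_name);
    EXCEPTION
        WHEN undefined_object THEN
            RETURN fallback;
    END;
    $$;

    -- returns 'default' when foo.bar has never been set
    SELECT current_setting_or('foo.bar', 'default');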




0001-Add-two-arg-for-of-current_setting-NAME-FALLBACK.patch
Description: Binary data

--
David Christensen
End Point Corporation
da...@endpoint.com
785-727-1171




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] cancelling statement due to user request error occurs but the transaction has committed.

2015-03-19 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 On Thu, Mar 19, 2015 at 04:36:38PM -0400, Robert Haas wrote:
 So, either way, what happens if the query cancel shows up just an
 instant after you clear the flag?

 Oh, good point.  This version handles that case addressing only the
 log_duration* block.

This is just moving the failure cases around, and not by very much either.

The core issue here, I think, is that xact.c only holds off interrupts
during what it considers to be the commit critical section --- which is
okay from the standpoint of transactional consistency.  But the complaint
has to do with what the client perceives to have happened if a SIGINT
arrives somewhere between where xact.c has committed and where postgres.c
has reported the commit to the client.  If we want to address that, I
think postgres.c needs to hold off interrupts for the entire duration from
just before CommitTransactionCommand() to just after ReadyForQuery().
That's likely to be rather messy, because there are so many code paths
there, especially when you consider error cases.

A possible way to do this without incurring unacceptably high risk of
breakage (in particular, ending up with interrupts still held off when
they shouldn't be any longer) is to assume that there should never be a
case where we reach ReadCommand() with interrupts still held off.  Then
we could invent an additional interrupt primitive RESET_INTERRUPTS()
that forces InterruptHoldoffCount to zero (and, presumably, then does
a CHECK_FOR_INTERRUPTS()); then putting a HOLD_INTERRUPTS() before calling
CommitTransactionCommand() and a RESET_INTERRUPTS() before waiting for
client input would presumably be pretty safe.  On the other hand, that
approach could easily mask interrupt holdoff mismatch bugs elsewhere in
the code base.
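
A minimal sketch of that primitive (illustrative only, not committed code;
it assumes the existing InterruptHoldoffCount counter and the
CHECK_FOR_INTERRUPTS() macro from miscadmin.h):

    /* force any outstanding holdoffs to be released, then service a
     * pending interrupt if one has arrived in the meantime */
    #define RESET_INTERRUPTS() \
        do { \
            InterruptHoldoffCount = 0; \
            CHECK_FOR_INTERRUPTS(); \
        } while (0)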

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch : Allow toast tables to be moved to a different tablespace

2015-03-19 Thread Andreas Karlsson

On 03/19/2015 04:55 PM, Julien Tachoires wrote:

On 18/03/2015 19:54, Andreas Karlsson wrote:

Looks good, but I think one minor improvement could be to set the tablespace
of the TOAST entries to the same as the tablespace of the table, to reduce
the number of SET default_tablespace commands. What do you think?


Yes, you're right, some useless SET default_tablespace commands were added
for each ALTER TABLE SET TOAST TABLESPACE statement. It's now fixed with
this new patch. Thanks.


I am confused by your fix. Wouldn't a cleaner fix be to use
tbinfo->reltablespace rather than tbinfo->reltoasttablespace when
calling ArchiveEntry()?


I tried the attached patch and it seemed to work just fine.

--
Andreas Karlsson
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c589372..de6b359 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -8157,7 +8157,7 @@ dumpTOASTTablespace(Archive *fout, DumpOptions *dopt, TableInfo *tbinfo,
 		ArchiveEntry(fout, nilCatalogId, createDumpId(),
 					 fmtId(tbinfo->dobj.name),
 					 tbinfo->dobj.namespace->dobj.name,
-					 tbinfo->reltoasttablespace, tbinfo->rolname,
+					 tbinfo->reltablespace, tbinfo->rolname,
 					 false, "TOAST TABLESPACE", SECTION_NONE,
 					 query->data, "", NULL,
 					 &(tbinfo->dobj.dumpId), 1,

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] cancelling statement due to user request error occurs but the transaction has committed.

2015-03-19 Thread Bruce Momjian
On Thu, Mar 19, 2015 at 08:43:52PM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  On Thu, Mar 19, 2015 at 06:59:20PM -0300, Alvaro Herrera wrote:
 I don't understand why interrupts aren't held until after the commit is
 done -- including across the mentioned ereports.
 
  Uh, I think Robert was thinking of pre-commit triggers at the top of
  CommitTransaction() that might take a long time and we might want to
  cancel.
 
 Yeah, that's a good point.  So really the only way to make this work as
 requested is to have some cooperation between xact.c and postgres.c,
 so that the hold taken midway through CommitTransaction is kept until
 we reach the idle point.
 
 The attached is only very lightly tested but shows what we probably
 would need for this.  It's a bit messy in that the API for
 CommitTransactionCommand leaves it unspecified whether interrupts are
 held at exit; I'm not sure if it's useful or feasible to be more precise.

Oh, I see what you are saying, and why a global variable will not work.
I thought all paths reset the cancel state when they returned from a
commit, but it seems there are many places that don't do the reset, so you
have to pass a flag down into CommitTransaction() to control that ---
makes sense.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: doc: simplify examples of dynamic SQL

2015-03-19 Thread David G. Johnston
On Thu, Mar 19, 2015 at 5:18 PM, Bruce Momjian br...@momjian.us wrote:

 On Thu, Mar 19, 2015 at 04:01:32PM -0700, David G. Johnston wrote:

 Prefacing it with:  "You may also see the following syntax in the wild
 since format was only recently introduced."

  may solve your lack of reason for inclusion.

 Uh, the problem with that is we are not going to revisit this when
 format isn't recently introduced.  I think script writers naturally
 think of query construction using string concatenation first, so showing
 it first seems fine.


​+1​

There are other places later in the docs where we explain all the quote*
 functions and show examples of query construction using string
 concatenation, but I am not sure how we can remove those.


​Can you be more specific?

On a related note:

If you are dealing with values that might be null, you should usually use
quote_nullable in place of quote_literal.

It's unclear why, aside from semantic uncleanliness, someone would use
quote_literal given its identical behavior for non-null values and inferior
behavior when passed NULL.  The function table for the two could maybe be
clearer, since quote_nullable(NULL) returns a string representation of NULL
without any quotes while quote_literal(NULL) returns an actual NULL that
ultimately poisons the string concatenation that these functions are used
with.
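
A quick illustration of that poisoning effect (both behaviors can be checked
directly in psql):

    SELECT 'WHERE key = ' || quote_literal(NULL);    -- entire result is NULL
    SELECT 'WHERE key = ' || quote_nullable(NULL);   -- 'WHERE key = NULL'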

reads some more

The difference between an actual null and the string NULL is strictly one of
capitalization - which is not consistent even within the table.  concat_ws
states "NULL arguments are ignored" and so represents an actual null with
all-caps, which is the string NULL in the quote_* descriptions.  Having read
40.5.4 and example 40-1 the difference is clear and obvious, so maybe what
is in the table is sufficient for this topic.

I would suggest adding a comment to quote_ident and quote_nullable that the
corresponding format codes are %I and %L.  Obviously there is no quote_
function to correspond with %s.  There is likewise no corresponding format
code for quote_literal since quote_nullable is superior in every way (as far
as I can tell).

David J.


Re: [HACKERS] cancelling statement due to user request error occurs but the transaction has committed.

2015-03-19 Thread Bruce Momjian
On Thu, Mar 19, 2015 at 04:36:38PM -0400, Robert Haas wrote:
 On Thu, Mar 19, 2015 at 10:23 AM, Bruce Momjian br...@momjian.us wrote:
  First attached patch is more surgical and clears a possible cancel
  request before we report the query duration in the logs --- this doesn't
  affect any after triggers that might include CHECK_FOR_INTERRUPTS()
  calls we want to honor.
 
  Another approach would be to have CommitTransaction() clear any pending
  cancel before it calls RESUME_INTERRUPTS().  The second attached patch
  takes that approach, and also works.
 
 So, either way, what happens if the query cancel shows up just an
 instant after you clear the flag?

Oh, good point.  This version handles that case addressing only the
log_duration* block.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
new file mode 100644
index 33720e8..4374fb4
*** a/src/backend/tcop/postgres.c
--- b/src/backend/tcop/postgres.c
*** exec_simple_query(const char *query_stri
*** 1165,1184 
  	/*
  	 * Emit duration logging if appropriate.
  	 */
- 	switch (check_log_duration(msec_str, was_logged))
  	{
! 		case 1:
! 			ereport(LOG,
! 					(errmsg("duration: %s ms", msec_str),
! 					 errhidestmt(true)));
! 			break;
! 		case 2:
! 			ereport(LOG,
! 					(errmsg("duration: %s ms  statement: %s",
! 							msec_str, query_string),
! 					 errhidestmt(true),
! 					 errdetail_execute(parsetree_list)));
! 			break;
  	}
  
  	if (save_log_statement_stats)
--- 1165,1193 
  	/*
  	 * Emit duration logging if appropriate.
  	 */
  	{
! 		int output_duration_level = check_log_duration(msec_str, was_logged);
! 
! 		if (output_duration_level != 0)
! 		{
! 			/* hold client cancel as we have already committed */
! 			HOLD_CANCEL_INTERRUPTS();
! 
! 			if (output_duration_level == 1)
! 				ereport(LOG,
! 						(errmsg("duration: %s ms", msec_str),
! 						 errhidestmt(true)));
! 			else if (output_duration_level == 2)
! 				ereport(LOG,
! 						(errmsg("duration: %s ms  statement: %s",
! 								msec_str, query_string),
! 						 errhidestmt(true),
! 						 errdetail_execute(parsetree_list)));
! 
! 			/* clear client cancel */
! 			QueryCancelPending = false;
! 			RESUME_CANCEL_INTERRUPTS();
! 		}
  	}
  
  	if (save_log_statement_stats)

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Using 128-bit integers for sum, avg and statistics aggregates

2015-03-19 Thread Andreas Karlsson

On 03/19/2015 07:08 PM, Andres Freund wrote:

Working on committing this:


Nice fixes. Sorry about forgetting numericvar_to_int*.

As for the reviewers those lists look pretty much correct. David Rowley 
should probably be added to the second patch for his early review and 
benchmarking.


--
Andreas Karlsson


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: doc: simplify examples of dynamic SQL

2015-03-19 Thread Bruce Momjian
On Thu, Oct  2, 2014 at 09:06:54PM -0700, David G Johnston wrote:
 Jim Nasby-5 wrote
  On 10/2/14, 6:51 AM, Pavel Stehule wrote:
  EXECUTE format('UPDATE tbl SET %I = newvalue WHERE key = %L',
  colname, keyvalue)
  or
  -1, because of quoting issues
  EXECUTE format('UPDATE tbl SET %I = newvalue WHERE key = $1',
  colname)
USING keyvalue;
  Better, but I think it should really be quote_ident( colname )
 
 http://www.postgresql.org/docs/9.4/static/plpgsql-statements.html#PLPGSQL-QUOTE-LITERAL-EXAMPLE
 
 The use of %I and %L solve all quoting issues when using format(); they
 likely call the relevant quote_ function on the user's behalf.

Doing some research on EXECUTE, I found that for constants, USING is
best because it _conditionally_ quotes based on the data type, and for
identifiers, format(%I) is best.

  The old examples are very instructive, but a little bit less readable and
  maybe too complex for beginners.
 
  Opinions? 
  Honestly, I'm not too fond of either. format() is a heck of a lot nicer
  than a forest of ||'s, but I think it still falls short of what we'd
  really want here which is some kind of variable substitution or even a
  templating language. IE:
  
  EXECUTE 'UDPATE tbl SET $colname = newvalue WHERE key = $keyvalue';
 
 Putting that example into the docs isn't a good idea...it isn't valid in
 PostgreSQL ;)
 
 
 My complaint with the topic is that it is not specific enough.  There are
 quite a few locations with dynamic queries.  My take is that the
 concatenation form be shown only in possible ways to accomplish this type
 sections but that all actual examples or recommendations make use of the
 format function. 

I have done this with the attached PL/pgSQL doc patch.

 The link above (40.5.4 in 9.4) is one such section where both forms need to
 be showed but I would suggest reversing the order so that we first introduce
 - prominently - the format function and then show the old-school way.  That
 said there is some merit to emphasizing the wrong and hard way so as to help
 the reader conclude that the less painful format function really is their
 best friend...but that would be my fallback position here.

I tried showing format() first, but then it was awkward to explain why we
then show ||.  I ended up showing || first, then showing format() and saying
it is better.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
new file mode 100644
index 158d9d2..52b4daa
*** a/doc/src/sgml/plpgsql.sgml
--- b/doc/src/sgml/plpgsql.sgml
*** EXECUTE 'SELECT count(*) FROM '
*** 1222,1227 
--- 1222,1234 
 INTO c
 USING checked_user, checked_date;
  </programlisting>
+  A cleaner approach is to use <function>format()</>'s <literal>%I</>
+  specification for table or column names:
+ <programlisting>
+ EXECUTE format('SELECT count(*) FROM %I WHERE inserted_by = $1 AND inserted &lt;= $2', tabname)
+INTO c
+USING checked_user, checked_date;
+ </programlisting>
   Another restriction on parameter symbols is that they only work in
   <command>SELECT</>, <command>INSERT</>, <command>UPDATE</>, and
   <command>DELETE</> commands.  In other statement
*** EXECUTE 'SELECT count(*) FROM '
*** 1297,1307 
  </para>
  
  <para>
!  Dynamic values that are to be inserted into the constructed
!  query require careful handling since they might themselves contain
   quote characters.
!  An example (this assumes that you are using dollar quoting for the
!  function as a whole, so the quote marks need not be doubled):
 <programlisting>
  EXECUTE 'UPDATE tbl SET '
  || quote_ident(colname)
--- 1304,1317 
  </para>
  
  <para>
!  Dynamic values require careful handling since they might contain
   quote characters.
!  An example using <function>format()</> (this assumes that you are
!  dollar quoting the function body so quote marks need not be doubled):
! <programlisting>
! EXECUTE format('UPDATE tbl SET %I = $1 WHERE key = $2', colname) USING newvalue, keyvalue;
! </programlisting>
!  It is also possible to call the quoting functions directly:
 <programlisting>
  EXECUTE 'UPDATE tbl SET '
  || quote_ident(colname)
*** EXECUTE format('UPDATE tbl SET %I = %L W
*** 1399,1407 
  EXECUTE format('UPDATE tbl SET %I = $1 WHERE key = $2', colname)
 USING newvalue, keyvalue;
  </programlisting>
!  This form is more efficient, because the parameters
!  <literal>newvalue</literal> and <literal>keyvalue</literal> are not
!  converted to text.
  </para>
 </example>
  
--- 1409,1417 
  EXECUTE format('UPDATE tbl SET %I = $1 WHERE key = $2', colname)
 USING newvalue, keyvalue;
  </programlisting>
!  This form is better because the variables are <emphasis>optionally</>
!  quoted based on their 

Re: [HACKERS] proposal: doc: simplify examples of dynamic SQL

2015-03-19 Thread David G. Johnston
On Thu, Mar 19, 2015 at 3:38 PM, Bruce Momjian br...@momjian.us wrote:

 On Thu, Oct  2, 2014 at 09:06:54PM -0700, David G Johnston wrote:
  Jim Nasby-5 wrote
   On 10/2/14, 6:51 AM, Pavel Stehule wrote:
   EXECUTE format('UPDATE tbl SET %I = newvalue WHERE key = %L',
   colname, keyvalue)
   or
   -1, because of quoting issues
   EXECUTE format('UPDATE tbl SET %I = newvalue WHERE key = $1',
   colname)
 USING keyvalue;
   Better, but I think it should really be quote_ident( colname )
 
 
 http://www.postgresql.org/docs/9.4/static/plpgsql-statements.html#PLPGSQL-QUOTE-LITERAL-EXAMPLE
 
  The use of %I and %L solve all quoting issues when using format(); they
  likely call the relevant quote_ function on the user's behalf.

 Doing some research on EXECUTE, I found that for constants, USING is
 best because it _conditionally_ quotes based on the data type, and for
 identifiers, format(%I) is best.


​
On a nit-pick note, ISTM that EXECUTE 'SELECT $1' USING ('1')
is not really optionally quoted based on their data types but rather
processed in such a way as to not require quoting at all.  Doesn't execute
effectively bypass converting the USING values to text in much the same way
as PREPARE/EXECUTE does in SQL?  i.e., It uses the extended query protocol
with a separate BIND instead of interpolating the arguments and then using
a simple query protocol.

Not that the reader likely cares - they just need to know to never place
%I, %L or $# within quotes.  I would say the same goes for %S always
unless forced to do otherwise.


   The old examples are very instructive, but a little bit less readable and
   maybe too complex for beginners.
  
   Opinions?
   Honestly, I'm not too fond of either. format() is a heck of a lot nicer
   than a forest of ||'s, but I think it still falls short of what we'd
   really want here which is some kind of variable substitution or even a
   templating language. IE:
  
   EXECUTE 'UDPATE tbl SET $colname = newvalue WHERE key = $keyvalue';
 
  Putting that example into the docs isn't a good idea...it isn't valid in
  PostgreSQL ;)
 
 
  My complaint with the topic is that it is not specific enough.  There are
  quite a few locations with dynamic queries.  My take is that the
  concatenation form be shown only in possible ways to accomplish this
 type
  sections but that all actual examples or recommendations make use of the
  format function.

 I have done this with the attached PL/pgSQL doc patch.


​Thank You!
​



  The link above (40.5.4 in 9.4) is one such section where both forms need
 to
  be showed but I would suggest reversing the order so that we first
 introduce
  - prominently - the format function and then show the old-school way.
 That
  said there is some merit to emphasizing the wrong and hard way so as to
 help
  the reader conclude that the less painful format function really is their
  best friend...but that would be my fallback position here.

 I tried showing format() first, but then it was odd about why to then
 show ||.  I ended up showing || first, then showing format() and saying
 it is better.


Prefacing it with:  "You may also see the following syntax in the wild
since format was only recently introduced."

may solve your lack of reason for inclusion.

Neither item requires attention but some food for thought.

David J.


Re: [HACKERS] pg_xlogdump MSVC build script oddities

2015-03-19 Thread Michael Paquier
On Thu, Mar 19, 2015 at 11:25 AM, Heikki Linnakangas hlinn...@iki.fi wrote:
 I'm guessing that the current state of affairs is just an oversight. I tried
 changing it so that xlogreader.c is built into pg_xlogdump without copying,
 and it seemed to work. But I'm using a very recent version of MSVC - perhaps
 it won't work on pre-VS2011 versions.

 Unless someone has some insights on this, I'm going to commit the attached,
 and see what the buildfarm thinks of it.

This looks good to me. I just tested your patch on MS 2010, and it
worked, but that's perhaps not old enough to trigger the problem. Now
currawong and mastodon would be the real tests..
Regards,
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] configure can't detect proper pthread flags

2015-03-19 Thread Max Filippov
Hi,

when PostgreSQL is cross-compiled in the Buildroot with uClibc toolchain
it may not correctly detect compiler/linker flags for threading. [1]
The reason is that config/acx_pthread.m4:146 uses compiler and linker
stdout and stderr to make decision if acx_pthread_ok should be yes or no:

  if test "`(eval $ac_link 2>&1 1>&5)`" = "" && test "`(eval
$ac_compile 2>&1 1>&5)`" = ""; then

and the toolchain emits the following warning at linking step:

  libcrypto.so: warning: gethostbyname is obsolescent, use
getnameinfo() instead.

git log doesn't tell much why it is done that way. Does anybody know?
Can that test be rewritten as

  if eval $ac_link 2>&1 1>&5 && eval $ac_compile 2>&1 1>&5; then

?

[1] http://comments.gmane.org/gmane.comp.lib.uclibc.buildroot/110204

-- 
Thanks.
-- Max


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: doc: simplify examples of dynamic SQL

2015-03-19 Thread Bruce Momjian
On Thu, Mar 19, 2015 at 04:01:32PM -0700, David G. Johnston wrote:
 Doing some research on EXECUTE, I found that for constants, USING is
 best because it _conditionally_ quotes based on the data type, and for
 identifiers, format(%I) is best.
 
 
 
  On a nit-pick note, ISTM that EXECUTE 'SELECT $1' USING ('1')
  is not really optionally quoted based on their data types but rather
 processed in such a way as to not require quoting at all.  Doesn't execute
 effectively bypass converting the USING values to text in much the same way as
 PREPARE/EXECUTE does in SQL?  i.e., It uses the extended query protocol with a
 separate BIND instead of interpolating the arguments and then using a simple
 query protocol.
 
 Not that the reader likely cares - they just need to know to never place %I,
 %L or $# within quotes.  I would say the same goes for %S always unless 
 forced
 to do otherwise.

You are correct.  I have modified that paragraph in the attached
version.  Not only is %L inefficient, but converting to text can cause
errors, e.g. adding two strings throws an error:

test=> do $$ declare x text; begin execute format('select %L + ''2''',
1) into x; raise '%', x; end;$$;
ERROR:  operator is not unique: unknown + unknown
LINE 1: select '1' + '2'
   ^
HINT:  Could not choose a best candidate operator. You might need to 
add explicit type casts.
QUERY:  select '1' + '2'
CONTEXT:  PL/pgSQL function inline_code_block line 1 at EXECUTE 
statement

while adding an integer to a string works:

test=> do $$ declare x text; begin execute format('select $1 + ''2''')
using 1 into x; raise '%', x; end;$$;
ERROR:  3

  The link above (40.5.4 in 9.4) is one such section where both forms need
 to
  be showed but I would suggest reversing the order so that we first
 introduce
  - prominently - the format function and then show the old-school way. 
 That
  said there is some merit to emphasizing the wrong and hard way so as to
 help
  the reader conclude that the less painful format function really is 
 their
  best friend...but that would be my fallback position here.
 
 I tried showing format() first, but then it was odd about why to then
 show ||.  I ended up showing || first, then showing format() and saying
 it is better.
 
 
  Prefacing it with:  "You may also see the following syntax in the wild since
  format was only recently introduced."

  may solve your lack of reason for inclusion.

Uh, the problem with that is we are not going to revisit this when
format isn't recently introduced.  I think script writers naturally
think of query construction using string concatenation first, so showing
it first seems fine.

There are other places later in the docs where we explain all the quote*
functions and show examples of query construction using string
concatenation, but I am not sure how we can remove those.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
new file mode 100644
index 158d9d2..eb80169
*** a/doc/src/sgml/plpgsql.sgml
--- b/doc/src/sgml/plpgsql.sgml
*** EXECUTE 'SELECT count(*) FROM '
*** 1222,1227 
--- 1222,1234 
 INTO c
 USING checked_user, checked_date;
  </programlisting>
+  A cleaner approach is to use <function>format()</>'s <literal>%I</>
+  specification for table or column names:
+ <programlisting>
+ EXECUTE format('SELECT count(*) FROM %I WHERE inserted_by = $1 AND inserted &lt;= $2', tabname)
+INTO c
+USING checked_user, checked_date;
+ </programlisting>
   Another restriction on parameter symbols is that they only work in
   <command>SELECT</>, <command>INSERT</>, <command>UPDATE</>, and
   <command>DELETE</> commands.  In other statement
*** EXECUTE 'SELECT count(*) FROM '
*** 1297,1307 
  </para>
  
  <para>
!  Dynamic values that are to be inserted into the constructed
!  query require careful handling since they might themselves contain
   quote characters.
!  An example (this assumes that you are using dollar quoting for the
!  function as a whole, so the quote marks need not be doubled):
 <programlisting>
  EXECUTE 'UPDATE tbl SET '
  || quote_ident(colname)
--- 1304,1317 
  </para>
  
  <para>
!  Dynamic values require careful handling since they might contain
   quote characters.
!  An example using <function>format()</> (this assumes that you are
!  dollar quoting the function body so quote marks need not be doubled):
! <programlisting>
! EXECUTE format('UPDATE tbl SET %I = $1 WHERE key = $2', colname) USING newvalue, keyvalue;
! </programlisting>
!  It is also possible to call the quoting functions directly:
 <programlisting>
  EXECUTE 'UPDATE tbl SET '
  

Re: [HACKERS] cancelling statement due to user request error occurs but the transaction has committed.

2015-03-19 Thread Bruce Momjian
On Thu, Mar 19, 2015 at 07:19:02PM -0400, Tom Lane wrote:
 The core issue here, I think, is that xact.c only holds off interrupts
 during what it considers to be the commit critical section --- which is
 okay from the standpoint of transactional consistency.  But the complaint
 has to do with what the client perceives to have happened if a SIGINT
 arrives somewhere between where xact.c has committed and where postgres.c
 has reported the commit to the client.  If we want to address that, I
 think postgres.c needs to hold off interrupts for the entire duration from
 just before CommitTransactionCommand() to just after ReadyForQuery().
 That's likely to be rather messy, because there are so many code paths
 there, especially when you consider error cases.
 
 A possible way to do this without incurring unacceptably high risk of
 breakage (in particular, ending up with interrupts still held off when
 they shouldn't be any longer) is to assume that there should never be a
 case where we reach ReadCommand() with interrupts still held off.  Then
 we could invent an additional interrupt primitive RESET_INTERRUPTS()
 that forces InterruptHoldoffCount to zero (and, presumably, then does
 a CHECK_FOR_INTERRUPTS()); then putting a HOLD_INTERRUPTS() before calling
 CommitTransactionCommand() and a RESET_INTERRUPTS() before waiting for
 client input would presumably be pretty safe.  On the other hand, that
 approach could easily mask interrupt holdoff mismatch bugs elsewhere in
 the code base.

Well, right now we allow interrupts for as long as possible, i.e. to the
middle of CommitTransaction().  Your approach would block interrupts for
a larger span, which might be worse than the bug we are fixing.  It also
feels like it would be unmodular as functions would change the blocking
state when they are called.

Tom is right that my cancel5.diff version has an area between the first
cancel erase and the second cancel erase where a cancel might arrive.  I
assumed there were no checks in that area, but I might be wrong, and
there could be checks there someday.

The fundamental problem is that the place we need to block cancels
starts several levels down in a function, and the place we need to clear
it is higher.  Maybe the entire HOLD/RESUME block API we have for this
is wrong and we just need a global variable to ignore client cancels to
be read inside the signal handler, and we just set and clear it.  I can
work on a patch if people like that idea.
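
A bare-bones sketch of that idea (illustrative only, not actual PostgreSQL
code; the names are made up for the example):

    #include <signal.h>
    #include <stdbool.h>

    static volatile bool QueryCancelPending = false;
    static volatile sig_atomic_t IgnoreClientCancel = 0;

    static void
    statement_cancel_handler(int signo)
    {
        /* while the flag is set, a client cancel arriving here is dropped */
        if (!IgnoreClientCancel)
            QueryCancelPending = true;
    }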

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] configure can't detect proper pthread flags

2015-03-19 Thread Tom Lane
Max Filippov jcmvb...@gmail.com writes:
 when PostgreSQL is cross-compiled in the Buildroot with uClibc toolchain
 it may not correctly detect compiler/linker flags for threading. [1]
 The reason is that config/acx_pthread.m4:146 uses compiler and linker
 stdout and stderr to make decision if acx_pthread_ok should be yes or no:

    if test "`(eval $ac_link 2>&1 1>&5)`" = "" && test "`(eval
  $ac_compile 2>&1 1>&5)`" = ""; then

 and the toolchain emits the following warning at linking step:

   libcrypto.so: warning: gethostbyname is obsolescent, use
 getnameinfo() instead.

 git log doesn't tell much why it is done that way. Does anybody know?

The short answer is that the linker you're using is written by pedantic
idiots.  Notice that the gethostbyname call it's complaining about is
somewhere inside libcrypto; it's *not* in Postgres, much less the test
program being built here.  As such, we have no options whatever for
suppressing the complaint, which means that the complaint is being
delivered inappropriately, and it shouldn't be there.  The linker is a
silly choice for a place to impose this sort of value judgment anyway;
it has absolutely no way to know whether the use of gethostbyname in
a particular program is unsafe.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] cancelling statement due to user request error occurs but the transaction has committed.

2015-03-19 Thread Bruce Momjian
On Thu, Mar 19, 2015 at 06:59:20PM -0300, Alvaro Herrera wrote:
 Robert Haas wrote:
  On Thu, Mar 19, 2015 at 10:23 AM, Bruce Momjian br...@momjian.us wrote:
 
   The issue with CommitTransaction() is that it only _holds_ the signal
   --- it doesn't clear it.  Now, since there are very few
   CHECK_FOR_INTERRUPTS() calls in the typical commit process flow, the
   signal is normally erased.  However, if log_duration or
   log_min_duration_statement are set, they call ereport, which calls
   errfinish(), which has a call to CHECK_FOR_INTERRUPTS().
  
   First attached patch is more surgical and clears a possible cancel
   request before we report the query duration in the logs --- this doesn't
   affect any after triggers that might include CHECK_FOR_INTERRUPTS()
   calls we want to honor.
  
   Another approach would be to have CommitTransaction() clear any pending
   cancel before it calls RESUME_INTERRUPTS().  The second attached patch
   takes that approach, and also works.
  
  So, either way, what happens if the query cancel shows up just an
  instant after you clear the flag?
 
 I don't understand why interrupts aren't held until after the commit is
 done -- including across the mentioned ereports.

Uh, I think Robert was thinking of pre-commit triggers at the top of
CommitTransaction() that might take a long time and we might want to
cancel.  In fact, he is right that mid-way into CommitTransaction(),
after those pre-commit triggers, we do HOLD_INTERRUPTS(), then set our
clog bit and continue to the bottom of that function.  What is happening
is that we don't _clear_ the cancel bit and log_duration is finding the
cancel.

In an ideal world, we would clear the client cancel in
CommitTransaction() and when we do log_duration*, and the attached patch
now does that.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
new file mode 100644
index 1495bb4..853671f
*** a/src/backend/access/transam/xact.c
--- b/src/backend/access/transam/xact.c
*** CommitTransaction(void)
*** 1958,1963 
--- 1958,1966 
  	 */
  	s->state = TRANS_DEFAULT;
  
+ 	/* We have committed so clear any client cancel. */
+ 	QueryCancelPending = false;
+ 
  	RESUME_INTERRUPTS();
  }
  
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
new file mode 100644
index 33720e8..b797cad
*** a/src/backend/tcop/postgres.c
--- b/src/backend/tcop/postgres.c
*** exec_simple_query(const char *query_stri
*** 1165,1184 
  	/*
  	 * Emit duration logging if appropriate.
  	 */
- 	switch (check_log_duration(msec_str, was_logged))
  	{
! 		case 1:
! 			ereport(LOG,
! 					(errmsg("duration: %s ms", msec_str),
! 					 errhidestmt(true)));
! 			break;
! 		case 2:
! 			ereport(LOG,
! 					(errmsg("duration: %s ms  statement: %s",
! 							msec_str, query_string),
! 					 errhidestmt(true),
! 					 errdetail_execute(parsetree_list)));
! 			break;
  	}
  
  	if (save_log_statement_stats)
--- 1165,1194 
  	/*
  	 * Emit duration logging if appropriate.
  	 */
  	{
! 		int output_duration_level = check_log_duration(msec_str, was_logged);
! 
! 		if (output_duration_level != 0)
! 		{
! 			/* hold client cancel as we have already committed */
! 			HOLD_CANCEL_INTERRUPTS();
! 
! 			if (output_duration_level == 1)
! 				ereport(LOG,
! 						(errmsg("duration: %s ms", msec_str),
! 						 errhidestmt(true)));
! 			else if (output_duration_level == 2)
! 				ereport(LOG,
! 						(errmsg("duration: %s ms  statement: %s",
! 								msec_str, query_string),
! 						 errhidestmt(true),
! 						 errdetail_execute(parsetree_list)));
! 
! 			/* clear client cancel */
! 			QueryCancelPending = false;
! 
! 			RESUME_CANCEL_INTERRUPTS();
! 		}
  	}
  
  	if (save_log_statement_stats)

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] two-arg current_setting() with fallback

2015-03-19 Thread Tom Lane
David Christensen da...@endpoint.com writes:
 The two-arg form of the current_setting() function will allow a
 fallback value to be returned instead of throwing an error when an
 unknown GUC is provided.  This would come in most useful when using
 custom GUCs; e.g.:

   -- errors out if the 'foo.bar' setting is unset
   SELECT current_setting('foo.bar');

   -- returns current setting of foo.bar, or 'default' if not set
   SELECT current_setting('foo.bar', 'default')

 This would save you having to wrap the use of the function in an
 exception block just to catch and utilize a default setting value
 within a function.

That seems kind of ugly, not least because it assumes that you know
a value that couldn't be mistaken for a valid value of the GUC.
(I realize that you are thinking of cases where you want to pretend
that the GUC has some valid value, but that's not the only use case.)

ISTM, since we don't allow GUCs to have null values, it'd be better to
define the variant function as returning NULL for no-such-GUC.  Then the
behavior you want could be achieved by wrapping that in a COALESCE, but
the behavior of probing whether the GUC is set at all would be achieved
with an IS NULL test.
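
In other words, something along these lines (current_setting_or_null is just
a placeholder name for whatever the NULL-returning variant would be called):

    -- fallback behavior via COALESCE
    SELECT COALESCE(current_setting_or_null('foo.bar'), 'default');

    -- probing whether the GUC is set at all
    SELECT current_setting_or_null('foo.bar') IS NULL;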

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Using 128-bit integers for sum, avg and statistics aggregates

2015-03-19 Thread Peter Geoghegan
On Thu, Mar 19, 2015 at 4:49 PM, Andreas Karlsson andr...@proxel.se wrote:
 Nice fixes. Sorry about forgetting numericvar_to_int*.

 As for the reviewers those lists look pretty much correct. David Rowley
 should probably be added to the second patch for his early review and
 benchmarking.

This also seems fine to me.


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: searching in array function - array_position

2015-03-19 Thread Alvaro Herrera
Pavel Stehule wrote:
 2015-03-18 20:03 GMT+01:00 Alvaro Herrera alvhe...@2ndquadrant.com:

  Pushed after adding error checks there and fixing the docs to match.
  Please verify.
 
 
 it is looking well, thank you.

Thanks for checking.

 one small issue - it is not documented that multidimensional arrays are
 not supported.

I added a parenthesized comment in the table, "(array must be
one-dimensional)".  I copied this from the entry for the array_remove
function IIRC; it seemed enough ...
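
For reference, a small sketch of the committed behavior (error wording
paraphrased):

    SELECT array_position(ARRAY['a','b','c'], 'b');   -- returns 2
    SELECT array_position(ARRAY[[1,2],[3,4]], 3);     -- errors: multidimensional
                                                      -- arrays are not supported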

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] cancelling statement due to user request error occurs but the transaction has committed.

2015-03-19 Thread Robert Haas
On Wed, Mar 18, 2015 at 10:56 PM, Bruce Momjian br...@momjian.us wrote:
 I have researched this issue originally reported in June of 2014 and
 implemented a patch to ignore cancel while we are completing a commit.
 I am not clear if this is the proper place for this code, though a
 disable_timeout() call on the line above suggests I am close.  :-)

This would also disable cancel interrupts while running AFTER
triggers, which seems almost certain to be wrong.  TBH, I'm not sure
why the existing HOLD_INTERRUPTS() in CommitTransaction() isn't
already preventing this problem.  Did you investigate that at all?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variable referencing itself in example of pgbench.sgml

2015-03-19 Thread Robert Haas
On Thu, Mar 19, 2015 at 7:20 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
 While looking at some recent commit related to pgbench I noticed this
 example in the docs:
 \set aid (1021 * :aid) % (10 * :scale) + 1
 This actually would fail because aid references itself.

There's no special prohibition against that.  It works fine if aid is
already set.
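
For example, a script along these lines works (a sketch; \setrandom is simply
one way aid can already have a value before the expression runs):

    \setrandom aid 1 100000
    \set aid (1021 * :aid) % (10 * :scale) + 1
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;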

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ERRCODE_T_R_DEADLOCK_DETECTED

2015-03-19 Thread Kevin Grittner
Tatsuo Ishii is...@postgresql.org wrote:

 ereport(ERROR,
 (errcode(ERRCODE_T_R_DEADLOCK_DETECTED),
 errmsg("canceling statement due to conflict with recovery"),
   errdetail("User transaction caused buffer deadlock with recovery.")));

 ereport(ERROR,
 (errcode(ERRCODE_T_R_DEADLOCK_DETECTED),
 errmsg("deadlock detected"),
 errdetail_internal("%s", clientbuf.data),
 errdetail_log("%s", logbuf.data),
 errhint("See server log for query details.")));

 The latter is a normal deadlock and can be observed by stats
 because pgstat_report_deadlock() is called.

 The former is using the same error code but the meaning is pretty
 different and users might be confused IMO.

 I am not sure sharing the same error code is the best idea here.

That SQLSTATE value is intended to be used where the transaction
has failed because it was run concurrently with some other
transaction, rather than before or after it; and it is intended to
suggest that the transaction may succeed if run after the competing
transaction has completed.  If those apply, it seems like the right
SQLSTATE.  A user can certainly distinguish between the conditions
by looking at the error messages.

For me the big question is whether software written to retry a
transaction from the beginning when it gets this SQLSTATE would be
doing something dumb to retry transactions (perhaps after a brief
delay) for the conflict with recovery.  If using the same automated
recovery techniques is sane, then IMO it makes sense to use the
same SQLSTATE.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] How about to have relnamespace and relrole?

2015-03-19 Thread Kyotaro HORIGUCHI
Hello, Thank you for reviewing, and sorry to have overlooked
this.

# I hope the CF app will add the author as a receiver when issuing
# a mail.

regards,

At Thu, 12 Mar 2015 11:06:29 +, Jeevan Chalke jeevan.cha...@gmail.com 
wrote in 20150312110629.2540.70807.p...@coridan.postgresql.org
 The following review has been posted through the commitfest application:
 make installcheck-world:  tested, passed
 Implements feature:   tested, passed
 Spec compliant:   tested, passed
 Documentation:tested, passed
 
 Looks good. Passing it to committer.
 
 The new status of this patch is: Ready for Committer

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Variable referencing itself in example of pgbench.sgml

2015-03-19 Thread Michael Paquier
Hi all,

While looking at some recent commit related to pgbench I noticed this
example in the docs:
\set aid (1021 * :aid) % (10 * :scale) + 1
This actually would fail because aid references itself.

Attached is a patch to change this example as follows:
-\set aid (1021 * :aid) % (10 * :scale) + 1
+\set aid (1021 * :ntellers) % (10 * :scale) + 1
Regards,
-- 
Michael
diff --git a/doc/src/sgml/pgbench.sgml b/doc/src/sgml/pgbench.sgml
index ed12e27..22ac371 100644
--- a/doc/src/sgml/pgbench.sgml
+++ b/doc/src/sgml/pgbench.sgml
@@ -769,7 +769,7 @@ pgbench <optional> <replaceable>options</> </optional> <replaceable>dbname</>
   Examples:
 programlisting
 \set ntellers 10 * :scale
-\set aid (1021 * :aid) % (10 * :scale) + 1
+\set aid (1021 * :ntellers) % (10 * :scale) + 1
 /programlisting/para
 /listitem
/varlistentry

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ERRCODE_T_R_DEADLOCK_DETECTED

2015-03-19 Thread Tatsuo Ishii
 I think the solution for this is assigning a unique id to each
 message.  This is already done in some commercial databases. They are
 pretty useful for tech support.
 
 We already have file and line number recorded.

The problem with that is that the line number (and sometimes the file name)
changes from version to version.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: doc: simplify examples of dynamic SQL

2015-03-19 Thread Bruce Momjian
On Thu, Mar 19, 2015 at 06:05:52PM -0700, David G. Johnston wrote:
 On Thu, Mar 19, 2015 at 5:18 PM, Bruce Momjian br...@momjian.us wrote:
 There are other places later in the docs where we explain all the quote*
 functions and show examples of query construction using string
 concatenation, but I am not sure how we can remove those.
 
 
 
 ​Can you be more specific?

Yes.  You can see the output of the attached patch here:


http://momjian.us/tmp/pgsql/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN

Notice:

EXECUTE 'UPDATE tbl SET '
|| quote_ident(colname)
|| ' = '
|| quote_nullable(newvalue)
|| ' WHERE key = '
|| quote_nullable(keyvalue);

and 

EXECUTE 'UPDATE tbl SET '
|| quote_ident(colname)
|| ' = $$'
|| newvalue
|| '$$ WHERE key = '
|| quote_literal(keyvalue);

It is making a point about nulls and stuff.  There are later queries
that use format().
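
For comparison, the format()-based form that the later examples use for the
same statement (%I quotes the identifier and %L quotes the value, handling
NULL safely):

    EXECUTE format('UPDATE tbl SET %I = %L WHERE key = %L',
                   colname, newvalue, keyvalue);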

 On a related note:
 
 If you are dealing with values that might be null, you should usually use
 quote_nullable in place of quote_literal.
 
 It's unclear why, aside from semantic uncleanliness, someone would use
 quote_literal given its identical behavior for non-null values and inferior
 behavior when passed NULL.  The function table for the two could maybe be
 more
 clear since quote_nullable(NULL) returns a string representation of NULL
 without any quotes while quote_literal(NULL) returns an actual NULL that
 ultimately poisons the string concatenation that these functions are used 
 with.
 
 reads some more
 
 The differences between the actual null and the string NULL are strictly in
 capitalization - which is not consistent even within the table.  concat_ws
 states NULL arguments are ignored and so represents actual null with 
 all-caps
 which is string NULL in the quote_* descriptions.  Having read 40.5.4 and
 example 40-1 the difference is clear and obvious so maybe what is in the table
 is sufficient for this topic.
 
 I would suggest adding a comment to quote_ident and quote_nullable that
 corresponding format codes are %I and %L.  Obviously there is no quote_
 function to correspond with %s.  There is likewise no corresponding format
 code for quote_literal since quote_nullable is superior in every way (that I
 can tell at least).

OK, I have added that tip --- good suggestion.   Patch attached.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
new file mode 100644
index 158d9d2..aee8264
*** a/doc/src/sgml/plpgsql.sgml
--- b/doc/src/sgml/plpgsql.sgml
*** EXECUTE 'SELECT count(*) FROM '
*** 1222,1227 
--- 1222,1234 
 INTO c
 USING checked_user, checked_date;
  </programlisting>
+  A cleaner approach is to use <function>format()</>'s <literal>%I</>
+  specification for table or column names:
+ <programlisting>
+ EXECUTE format('SELECT count(*) FROM %I WHERE inserted_by = $1 AND inserted &lt;= $2', tabname)
+INTO c
+USING checked_user, checked_date;
+ </programlisting>
   Another restriction on parameter symbols is that they only work in
   <command>SELECT</>, <command>INSERT</>, <command>UPDATE</>, and
   <command>DELETE</> commands.  In other statement
*** EXECUTE 'SELECT count(*) FROM '
*** 1297,1307 
  </para>
  
  <para>
!  Dynamic values that are to be inserted into the constructed
!  query require careful handling since they might themselves contain
   quote characters.
!  An example (this assumes that you are using dollar quoting for the
!  function as a whole, so the quote marks need not be doubled):
 <programlisting>
  EXECUTE 'UPDATE tbl SET '
  || quote_ident(colname)
--- 1304,1317 
  </para>
  
  <para>
!  Dynamic values require careful handling since they might contain
   quote characters.
!  An example using <function>format()</> (this assumes that you are
!  dollar quoting the function body so quote marks need not be doubled):
! <programlisting>
! EXECUTE format('UPDATE tbl SET %I = $1 WHERE key = $2', colname) USING newvalue, keyvalue;
! </programlisting>
!  It is also possible to call the quoting functions directly:
 <programlisting>
  EXECUTE 'UPDATE tbl SET '
  || quote_ident(colname)
*** EXECUTE 'UPDATE tbl SET '
*** 1393,1407 
 <programlisting>
  EXECUTE format('UPDATE tbl SET %I = %L WHERE key = %L', colname, newvalue, keyvalue);
  </programlisting>
   The <function>format</function> function can be used in conjunction with
   the <literal>USING</literal> clause:
 <programlisting>
  EXECUTE format('UPDATE tbl SET %I = $1 WHERE key = $2', colname)
 USING newvalue, keyvalue;
  </programlisting>
!  This form is more 

Re: [HACKERS] configure can't detect proper pthread flags

2015-03-19 Thread Bruce Momjian
On Fri, Mar 20, 2015 at 04:51:55AM +0300, Max Filippov wrote:
 xtensa-linux-gcc -o conftest -Wall -Wmissing-prototypes
 -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels
 -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
 -fwrapv -fexcess-precision=standard -D_LARGEFILE_SOURCE
 -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -mlongcalls
 -mtext-section-literals -Os -pthread -D_LARGEFILE_SOURCE
 -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE conftest.c
 -lssl -lcrypto -lz -lreadline -lrt -lcrypt -ldl -lm
 
 and if I drop irrelevant libraries from that command its stdout+stderr
 will probably be empty.
 
 But I was curious why this test is written *that* way.

Threading compiles are different for every platform so the code just
tries everything --- we didn't anticipate that adding a useless library
would actually cause a failure.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Shapes on the regression test for polygon

2015-03-19 Thread Bruce Momjian
On Tue, Oct 14, 2014 at 11:00:47AM -0400, Bruce Momjian wrote:
 On Tue, Oct 14, 2014 at 05:40:06PM +0300, Emre Hasegeli wrote:
   I extracted Emre's diagram adjustments from the patch and applied it,
   and no tabs now.  Emre, I assume your regression changes did not affect
   the diagram contents.
  
  Thank you for looking at it.  I wanted to make the tests consistent
  with the diagrams.  Now they look better but they still don't make
  sense with the tests.  I looked at it some more, and come to the
  conclusion that removing them is better than changing the tests.
 
 OK, unless I hear more feedback I will remove the diagrams.

Removed.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] configure can't detect proper pthread flags

2015-03-19 Thread Max Filippov
On Fri, Mar 20, 2015 at 5:09 AM, Bruce Momjian br...@momjian.us wrote:
 On Fri, Mar 20, 2015 at 04:51:55AM +0300, Max Filippov wrote:
 xtensa-linux-gcc -o conftest -Wall -Wmissing-prototypes
 -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels
 -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
 -fwrapv -fexcess-precision=standard -D_LARGEFILE_SOURCE
 -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -mlongcalls
 -mtext-section-literals -Os -pthread -D_LARGEFILE_SOURCE
 -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE conftest.c
 -lssl -lcrypto -lz -lreadline -lrt -lcrypt -ldl -lm

 and if I drop irrelevant libraries from that command its stdout+stderr
 will probably be empty.

 But I was curious why this test is written *that* way.

 Threading compiles are different for every platform so the code just
 tries everything --- we didn't anticipate that adding a useless library
 would actually cause a failure.

Sorry, I must not have been clear enough: why check the compiler/linker
output instead of checking their exit code or the presence of the produced
object/executable files?

-- 
Thanks.
-- Max


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] btree_gin and ranges

2015-03-19 Thread Bruce Momjian
On Fri, Dec 26, 2014 at 01:33:26PM +0300, Teodor Sigaev wrote:
 Teodor's patch could use some more comments. The 
 STOP_SCAN/MATCH_SCAN/CONT_SCAN
 macros are a good idea, but they probably should go into
 src/include/access/gin.h so that they can be used in all compare_partial
 implementations.
 
 STOP_SCAN/MATCH_SCAN/CONT_SCAN macros are moved to gin's header, and
 comments are improved.
 
 Split patch to two: gin and module

Where are we on this patch?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] configure can't detect proper pthread flags

2015-03-19 Thread Bruce Momjian
On Fri, Mar 20, 2015 at 05:15:51AM +0300, Max Filippov wrote:
 On Fri, Mar 20, 2015 at 5:09 AM, Bruce Momjian br...@momjian.us wrote:
  On Fri, Mar 20, 2015 at 04:51:55AM +0300, Max Filippov wrote:
  xtensa-linux-gcc -o conftest -Wall -Wmissing-prototypes
  -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels
  -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
  -fwrapv -fexcess-precision=standard -D_LARGEFILE_SOURCE
  -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -mlongcalls
  -mtext-section-literals -Os -pthread -D_LARGEFILE_SOURCE
  -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE conftest.c
  -lssl -lcrypto -lz -lreadline -lrt -lcrypt -ldl -lm
 
  and if I drop irrelevant libraries from that command its stdout+stderr
  will probably be empty.
 
  But I was curious why this test is written *that* way.
 
  Threading compiles are different for every platform so the code just
  tries everything --- we didn't anticipate that adding a useless library
  would actually cause a failure.
 
 Sorry, I must be not clear enough: why checking compiler/linker output
 instead of checking their exit code or presence of produced object/
 executable files?

Oh, uh, I don't rember the answer to that one.  I think the code is in
config/acx_pthread.m4 and I assume we are just checking using the
configure macros that are provided.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] ERRCODE_T_R_DEADLOCK_DETECTED

2015-03-19 Thread Tatsuo Ishii
 It is sad for users the only way to distinguish the conditions is by
 looking at the error messages. They want to know the root of the
 problem.
 
 Sure. It's always a balance. If you go to the extreme of your argument
 every possible error gets one individual error code. But then error
 handling gets too complex.

I think the solution for this is assigning a unique id to each
message.  This is already done in some commercial databases. They are
pretty useful for tech support.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




Re: [HACKERS] configure can't detect proper pthread flags

2015-03-19 Thread Andrew Gierth
 Max == Max Filippov jcmvb...@gmail.com writes:

 Max Sorry, I must be not clear enough: why checking compiler/linker
 Max output instead of checking their exit code or presence of produced
 Max object/ executable files?

Going by the comment some lines above, my guess would be because some
compilers accept some option like -pthreads and issue a warning message
saying that it is ignored, and pg wants to not treat such options as
valid
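
To illustrate the failure mode (a made-up sketch, not the actual configure
code or any particular compiler's real messages): a flag that is merely
ignored with a warning still yields exit status 0, so an exit-code check
would accept it, while an empty-output check does not.

    # sketch only: compile the test program with the candidate flag
    if cc $PTHREAD_CFLAGS -o conftest conftest.c 2>stderr.log; then
        echo "exit-status check: flag looks acceptable"   # true even if it warned
    fi
    if test ! -s stderr.log; then
        echo "empty-output check: flag produced no chatter, really acceptable"
    fi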

-- 
Andrew (irc:RhodiumToad)




Re: [HACKERS] ERRCODE_T_R_DEADLOCK_DETECTED

2015-03-19 Thread Tom Lane
Tatsuo Ishii is...@postgresql.org writes:
 Sure. It's always a balance. If you go to the extreme of your argument
 every possible error gets one individual error code. But then error
 handling gets too complex.

 I think the solution for this is assigning a unique id to each
 message.  This is already done in some commercial databases. They are
 pretty usefull for tech supports.

We already have file and line number recorded.
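
(For instance, with libpq those are already retrievable from the error
fields of a failed query's result -- a minimal illustration, assuming res
is the PGresult in hand:

    const char *file = PQresultErrorField(res, PG_DIAG_SOURCE_FILE);
    const char *line = PQresultErrorField(res, PG_DIAG_SOURCE_LINE);
    const char *func = PQresultErrorField(res, PG_DIAG_SOURCE_FUNCTION);
)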

regards, tom lane




Re: [HACKERS] configure can't detect proper pthread flags

2015-03-19 Thread Tom Lane
Andrew Gierth and...@tao11.riddles.org.uk writes:
 Max == Max Filippov jcmvb...@gmail.com writes:
  Max Sorry, I must be not clear enough: why checking compiler/linker
  Max output instead of checking their exit code or presence of produced
  Max object/ executable files?

 Going by the comment some lines above, my guess would be because some
 compilers accept some option like -pthreads and issue a warning message
 saying that it is ignored, and pg wants to not treat such options as
 valid

Precisely.  We don't want every link step producing a useless warning.
Ideally, make -s would print nothing whatsoever; to the extent that
tools produce unsuppressable routine chatter, that's evil because it
makes it harder to notice actually-useful warnings.

(My current ambition in this area is to shut up clang from complaining
like so:
clang: warning: argument unused during compilation: '-pthread'
clang: warning: argument unused during compilation: '-pthread'
clang: warning: argument unused during compilation: '-pthread'
clang: warning: argument unused during compilation: '-pthread'
clang: warning: argument unused during compilation: '-pthread'
which is another bit of entirely useless pedantry, but rather hard to work
around because we assume that CFLAGS should be included when linking.)

It's tempting to consider avoiding Max's problem by doing the ACX_PTHREAD
test before picking up any other libraries.  But I'm worried that that
would cause more problems than it solves.  It's worth noting that the
Autoconf documentation specifically recommends testing for libraries
before testing for compiler characteristics.

regards, tom lane




Re: [HACKERS] Add regression tests for autocommit-off mode for psql and fix some omissions

2015-03-19 Thread Bruce Momjian
On Mon, Oct  6, 2014 at 03:49:37PM +0200, Feike Steenbergen wrote:
 On 6 October 2014 14:09, Michael Paquier michael.paqu...@gmail.com wrote:
  That's a good catch and it should be a separate patch. This could even be
  considered for a back-patch down to 9.2. Thoughts?
 
 If I isolate DROP INDEX concurrently, this patch would do the trick.

Patch applied for 9.5.  Thanks.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] configure can't detect proper pthread flags

2015-03-19 Thread Andrew Gierth

  if test "`(eval $ac_link 2>&1 1>&5)`" = "" &&
     test "`(eval $ac_compile 2>&1 1>&5)`" = ""; then

FWIW, I happened to run into this recently on IRC with someone having
compile problems on FreeBSD (10.1); they were using some nonstandard
compile flags, and configure's pthread test was breaking as a result
(they did not report what the actual warning was).

While investigating that, I also noticed that this code prevents any
attempt at running configure with -x in effect from working properly,
making it a bit hard to debug.

-- 
Andrew (irc:RhodiumToad)




Re: [HACKERS] configure can't detect proper pthread flags

2015-03-19 Thread Max Filippov
Hi Tom,

On Fri, Mar 20, 2015 at 3:50 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Max Filippov jcmvb...@gmail.com writes:
 when PostgreSQL is cross-compiled in the Buildroot with uClibc toolchain
 it may not correctly detect compiler/linker flags for threading. [1]
 The reason is that config/acx_pthread.m4:146 uses compiler and linker
 stdout and stderr to make decision if acx_pthread_ok should be yes or no:

   if test "`(eval $ac_link 2>&1 1>&5)`" = "" &&
      test "`(eval $ac_compile 2>&1 1>&5)`" = ""; then

 and the toolchain emits the following warning at linking step:

   libcrypto.so: warning: gethostbyname is obsolescent, use
 getnameinfo() instead.

 git log doesn't tell much why it is done that way. Does anybody know?

 The short answer is that the linker you're using is written by pedantic
 idiots.

Well... That doesn't answer my question.

  Notice that the gethostbyname call it's complaining about is
 somewhere inside libcrypto; it's *not* in Postgres, much less the test
 program being built here.

Actually it *is* in the program being built here, because it's being
linked with libcrypto. The full command line produced by the first eval
is this:

xtensa-linux-gcc -o conftest -Wall -Wmissing-prototypes
-Wpointer-arith -Wdeclaration-after-statement -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
-fwrapv -fexcess-precision=standard -D_LARGEFILE_SOURCE
-D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -mlongcalls
-mtext-section-literals -Os -pthread -D_LARGEFILE_SOURCE
-D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE conftest.c
-lssl -lcrypto -lz -lreadline -lrt -lcrypt -ldl -lm

and if I drop irrelevant libraries from that command its stdout+stderr
will probably be empty.

But I was curious why this test is written *that* way.

-- 
Thanks.
-- Max




Re: [HACKERS] Lets delete src/test/performance

2015-03-19 Thread Bruce Momjian
On Tue, Oct  7, 2014 at 02:03:04PM +0200, Andres Freund wrote:
 Hi,
 
 The code in there doesn't look very interesting - and very unlikely to
 run these days. Notably it relies on a binary called 'postmaster' et
 al...
 
 The last real changes are from a long time ago:
 
 commit 142d42f9386ed81a4f0779ec8a0cad1254173b5e
 Author: Vadim B. Mikheev vadi...@yahoo.com
 Date:   Fri Sep 26 14:57:36 1997 +
 
 Scripts to run queries and data.
 
 commit dbde5caeed4c9bdaf1292e52eafed80bbf01e9e9
 Author: Vadim B. Mikheev vadi...@yahoo.com
 Date:   Fri Sep 26 14:55:44 1997 +
 
 Some results.
 
 commit cf76759f34a172d424301cfa3723baee37f4a7ce
 Author: Vadim B. Mikheev vadi...@yahoo.com
 Date:   Fri Sep 26 14:55:21 1997 +
 
 Start with performance suite.

Any objection if I remove the src/test/performance directory and all its
files?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Superuser connect during smart shutdown

2015-03-19 Thread Bruce Momjian
On Mon, Oct 20, 2014 at 03:10:50PM -0400, Tom Lane wrote:
 Robert Haas robertmh...@gmail.com writes:
  On Sun, Oct 19, 2014 at 12:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  I've certainly objected to it in the past, but I don't believe
  I was the only one objecting.
 
  What's your feeling now?
 
 I'm prepared to yield on the point.

OK, are we up for changing the default pg_ctl shutdown method for 9.5,
(smart to fast), or should we wait for 9.6?
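
(For reference, fast mode currently has to be requested explicitly, e.g.

    pg_ctl stop -m fast -D $PGDATA

so the question is whether plain "pg_ctl stop" should start doing that by
default.)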

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Performance improvement for joins where outer side is unique

2015-03-19 Thread Kyotaro HORIGUCHI
Hello,

 Hello, The attached is non-search version of unique join.

This is quite a bucket-brigade approach, so it makes many functions
change to take a new parameter only to convey the new bool
information. I know it's very bad.

After some consideration, I removed almost all such parameters
from the path creation functions and confined the main work within
add_paths_to_joinrel. After that the patch shrank and looks
better. In addition, somehow, some additional regtests were found
to be inner-unique.

I think this satisfies your wish and is implemented in a
non-exhaustive-search-in-jointree manner. It still doesn't have
regression tests of its own, but I don't see much significance in adding
them...

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
From e1d593a6dd3154d7299b4995c4affa50476df15f Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi horiguchi.kyot...@lab.ntt.co.jp
Date: Tue, 17 Mar 2015 15:37:47 +0900
Subject: [PATCH] Boost inner-unique joins.

Inner-unique joins are accelerated by not fetching a second inner row
during execution.  This patch allows the executor to do so by detecting
inner-unique joins at join path creation time.
---
 src/backend/commands/explain.c   |  7 
 src/backend/executor/nodeHashjoin.c  | 12 ---
 src/backend/executor/nodeMergejoin.c | 13 +---
 src/backend/executor/nodeNestloop.c  | 12 ---
 src/backend/optimizer/path/joinpath.c| 48 +++-
 src/backend/optimizer/plan/createplan.c  | 28 
 src/include/nodes/execnodes.h|  1 +
 src/include/nodes/plannodes.h|  1 +
 src/include/nodes/relation.h |  1 +
 src/test/regress/expected/equivclass.out |  6 ++--
 src/test/regress/expected/join.out   |  4 +--
 src/test/regress/expected/rowsecurity.out|  2 +-
 src/test/regress/expected/select_views_1.out |  8 ++---
 13 files changed, 107 insertions(+), 36 deletions(-)

diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index a951c55..b8a68b5 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -1151,9 +1151,16 @@ ExplainNode(PlanState *planstate, List *ancestors,
 		appendStringInfo(es-str,  %s Join, jointype);
 	else if (!IsA(plan, NestLoop))
 		appendStringInfoString(es-str,  Join);
+	if (((Join *)plan)-inner_unique)
+		appendStringInfoString(es-str, (inner unique));
+
 }
 else
+{
 	ExplainPropertyText(Join Type, jointype, es);
+	ExplainPropertyText(Inner unique, 
+			((Join *)plan)-inner_unique?true:false, es);
+}
 			}
 			break;
 		case T_SetOp:
diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c
index 1d78cdf..d3b14e5 100644
--- a/src/backend/executor/nodeHashjoin.c
+++ b/src/backend/executor/nodeHashjoin.c
@@ -306,10 +306,11 @@ ExecHashJoin(HashJoinState *node)
 	}
 
 	/*
-	 * In a semijoin, we'll consider returning the first
-	 * match, but after that we're done with this outer tuple.
+	 * We'll consider returning the first match if the inner
+	 * is unique, but after that we're done with this outer
+	 * tuple.
 	 */
-	if (node-js.jointype == JOIN_SEMI)
+	if (node-js.inner_unique)
 		node-hj_JoinState = HJ_NEED_NEW_OUTER;
 
 	if (otherqual == NIL ||
@@ -451,6 +452,7 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags)
 	hjstate = makeNode(HashJoinState);
 	hjstate-js.ps.plan = (Plan *) node;
 	hjstate-js.ps.state = estate;
+	hjstate-js.inner_unique = node-join.inner_unique;
 
 	/*
 	 * Miscellaneous initialization
@@ -498,8 +500,10 @@ ExecInitHashJoin(HashJoin *node, EState *estate, int eflags)
 	/* set up null tuples for outer joins, if needed */
 	switch (node-join.jointype)
 	{
-		case JOIN_INNER:
 		case JOIN_SEMI:
+			hjstate-js.inner_unique = true;
+			/* fall through */
+		case JOIN_INNER:
 			break;
 		case JOIN_LEFT:
 		case JOIN_ANTI:
diff --git a/src/backend/executor/nodeMergejoin.c b/src/backend/executor/nodeMergejoin.c
index 15742c5..3c21ffe 100644
--- a/src/backend/executor/nodeMergejoin.c
+++ b/src/backend/executor/nodeMergejoin.c
@@ -840,10 +840,11 @@ ExecMergeJoin(MergeJoinState *node)
 	}
 
 	/*
-	 * In a semijoin, we'll consider returning the first
-	 * match, but after that we're done with this outer tuple.
+	 * We'll consider returning the first match if the inner
+	 * is unique, but after that we're done with this outer
+	 * tuple.
 	 */
-	if (node-js.jointype == JOIN_SEMI)
+	if (node-js.inner_unique)
 		node-mj_JoinState = EXEC_MJ_NEXTOUTER;
 
 	qualResult = (otherqual == NIL ||
@@ -1486,6 +1487,8 @@ ExecInitMergeJoin(MergeJoin *node, EState *estate, int eflags)
 	mergestate-js.ps.plan = (Plan *) node;
 	mergestate-js.ps.state = estate;
 
+	mergestate-js.inner_unique = node-join.inner_unique;
+
 	/*
 	 * Miscellaneous initialization
 	 *
@@ -1553,8 +1556,10 @@ 

Re: [HACKERS] get_actual_variable_range vs idx_scan/idx_tup_fetch

2015-03-19 Thread Bruce Momjian
On Sat, Oct 18, 2014 at 02:20:45PM -0400, Bruce Momjian wrote:
 On Sat, Oct 18, 2014 at 06:15:03PM +0200, Marko Tiikkaja wrote:
  On 10/18/14, 5:46 PM, Tom Lane wrote:
  Marko Tiikkaja ma...@joh.to writes:
  Yes, exactly; if I had had the option to disable the index from the
  optimizer's point of view, I'd have seen that it's not used for looking
  up any data by any queries, and thus I would have known that I can
  safely drop it without slowing down queries.  Which was the only thing I
  cared about, and where the stats we provide failed me.
  
  This argument is *utterly* wrongheaded, because it assumes that the
  planner's use of the index provided no benefit to your queries.  If the
  planner was touching the index at all then it was planning queries in
  which knowledge of the extremal value was relevant to accurate selectivity
  estimation.  So it's quite likely that without the index you'd have gotten
  different and inferior plans, whether or not those plans actually chose to
  use the index.
  
  Maybe.  But at the same time that's a big problem: there's no way of
  knowing whether the index is actually useful or not when it's used
  only by the query planner.
 
 That is a good point.  Without an index, the executor is going to do a
 sequential scan, while a missing index to the optimizer just means worse
 statistics.

I have applied the attached patch to document that the optimizer can
increase the index usage statistics.
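
A rough way to see the effect being documented (illustrative only; whether
and when the counter moves also depends on the statistics and on the stats
collector reporting back):

    CREATE TABLE t (id int);
    CREATE INDEX t_id_idx ON t (id);
    INSERT INTO t SELECT generate_series(1, 100000);
    ANALYZE t;

    SELECT idx_scan FROM pg_stat_user_indexes WHERE indexrelname = 't_id_idx';
    -- a constant beyond the recorded column range makes the planner probe the
    -- index for the actual extremal value, even if the final plan is a seqscan
    EXPLAIN SELECT * FROM t WHERE id > 200000;
    SELECT idx_scan FROM pg_stat_user_indexes WHERE indexrelname = 't_id_idx';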

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
new file mode 100644
index afcfb89..71d06ce
*** a/doc/src/sgml/monitoring.sgml
--- b/doc/src/sgml/monitoring.sgml
*** postgres   27093  0.0  0.0  30096  2752
*** 1382,1389 
/para
  
para
!Indexes can be used via either simple index scans or quotebitmap/
!index scans.  In a bitmap scan
 the output of several indexes can be combined via AND or OR rules,
 so it is difficult to associate individual heap row fetches
 with specific indexes when a bitmap scan is used.  Therefore, a bitmap
--- 1382,1389 
/para
  
para
!Indexes can be used by simple index scans, quotebitmap/ index scans,
!and the optimizer.  In a bitmap scan
 the output of several indexes can be combined via AND or OR rules,
 so it is difficult to associate individual heap row fetches
 with specific indexes when a bitmap scan is used.  Therefore, a bitmap
*** postgres   27093  0.0  0.0  30096  2752
*** 1393,1398 
--- 1393,1401 
 structnamepg_stat_all_tables/.structfieldidx_tup_fetch/
 count for the table, but it does not affect
 structnamepg_stat_all_indexes/.structfieldidx_tup_fetch/.
+The optimizer also accesses indexes to check for supplied constants
+whose values are outside the recorded range of the optimizer statistics
+because the optimizer statistics might be stale.
/para
  
note



Re: [HACKERS] Patch: Add launchd Support

2015-03-19 Thread Bruce Momjian
On Mon, Oct 20, 2014 at 03:59:01PM -0700, David E. Wheeler wrote:
 Hackers,

 In Mac OS X 10.10 “Yosemite,” Apple removed SystemStarter, upon
 which our OS X start script has relied since 2007. So here is a patch
 that adds support for its replacement, launchd. It includes 7 day log
 rotation like the old script did. The install script still prefers
 the SystemStarter approach for older versions of the OS, for the sake
 of easier backward compatibility. We could change that if we wanted,
 since launchd has been part of the OS for around a decade.

Where are we on this?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] compress method for spgist

2015-03-19 Thread Bruce Momjian
On Wed, Oct 22, 2014 at 05:41:19PM +0400, Teodor Sigaev wrote:
 When we developed SP-GiST we missed an analogue of GiST's compress
 method. There were two reasons for that: lack of imagination to
 imagine a case with different types of indexed value and column, and
 we didn't want to call some method while the index fits in one page. Some
 discussion on that:
 http://www.postgresql.org/message-id/27542.1323534...@sss.pgh.pa.us
 http://www.postgresql.org/message-id/4ee77ea1.6030...@sigaev.ru
 
 But we were wrong: the PostGIS guys found an example: polygon indexing
 while storing just a bounding box. Actually, I don't know an index
 structure for boxes suitable for SP-GiST, but I'm not a geometer.
 They are.
 
 Attached patch provides support of optional compress method for SP-GiST.

Where are we on this?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] GSoC - Idea Discussion

2015-03-19 Thread Kouhei Kaigai
 I think you reference the old branch in my personal repository.
 Could you confirm the repository URL? Below is the latest.
   https://github.com/pg-strom/devel

Sorry, it is not a problem of pg-strom repository.

Please use the custom_join branch of the tree below:
  https://github.com/kaigai/sepgsql/tree/custom_join

Thanks,
--
NEC OSS Promotion Center / PG-Strom Project
KaiGai Kohei kai...@ak.jp.nec.com




Re: [HACKERS] flags argument for dsm_create

2015-03-19 Thread Robert Haas
On Thu, Mar 19, 2015 at 12:21 PM, Stephen Frost sfr...@snowman.net wrote:
 * Tom Lane (t...@sss.pgh.pa.us) wrote:
 Robert Haas robertmh...@gmail.com writes:
  I'm slightly confused.  Does that mean just change it or does that
  mean add dsm_create_extended instead?

 FWIW, I vote for just change it.  We change C-level APIs all the time,
 and this function has surely not got either the longevity nor the wide
 usage that might argue for keeping its API sancrosanct.

 Agreed.

OK, committed that way after, uh, actually testing it and fixing the bugs.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] GSoC 2015: Extra Jsonb functionality

2015-03-19 Thread Thom Brown
On 19 March 2015 at 14:35, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
 Thom Brown wrote:
 On 19 March 2015 at 14:12, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
  Dmitry Dolgov wrote:
 
  * jsonb_slice - extract a subset of an jsonb
Example of usage:
 
  =# jsonb_slice('{a: 1, b: {c: 2}, d: {f: 3}}'::jsonb,
  ARRAY['b', 'f', 'x']);
 
 jsonb_slice
  ---
{b: {c: 2}, f: 3}
 
  This is a bit strange.  Why did f get flattened out of d?  Is the
  resulting document still valid for the purposes of an application using
  it?  I think I'd expect the result to be {b: {c: 2}, d: {f: 3}}

 Why would d be output when it wasn't in the requested slice?

 Because it contains f.

Okay, so it pulls in all parents?  So I guess you'd get this too:

SELECT jsonb_slice('{a: 1, b: {c: 2}, d: {f: 3}, f:
4}'::jsonb, ARRAY['b', 'f', 'x']);

  jsonb_slice

 {a: 1, b: {c: 2}, d: {f: 3}, f: 4}

 Although I'm still a bit confused about f being produced.

 I guess you could say that the second argument is an array of element
 paths, not key names.  So to get the result I suggest, you would have to
 use ARRAY['{b}', '{d,f}', '{x}'].  (Hm, this is a non-rectangular
 array actually... I guess I'd go for ARRAY['b', 'd//f', 'x'] instead, or
 whatever the convention is to specify a json path).

I think that's where jsquery would come in handy.
-- 
Thom




Re: [HACKERS] ERRCODE_T_R_DEADLOCK_DETECTED

2015-03-19 Thread Andres Freund
On 2015-03-19 12:50:09 +, Kevin Grittner wrote:
 For me the big question is whether software written to retry a
 transaction from the beginning when it gets this SQLSTATE would be
 doing something dumb to retry transactions (perhaps after a brief
 delay) for the conflict with recovery.  If using the same automated
 recovery techniques is sane, then IMO it makes sense to use the
 same SQLSTATE.

Yes, it imo makes sense to use the same techniques. In both cases you
need to keep enough state to give up at some point; the combination of
running transactions might make the likelihood of succeeding too low.
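
For the record, a sketch of that client-side technique with libpq (the
function name and backoff policy here are made up; both SQLSTATEs are
handled the same way, with a cap so the client eventually gives up):

    #include <string.h>
    #include <libpq-fe.h>

    static PGresult *
    exec_with_retry(PGconn *conn, const char *sql, int max_attempts)
    {
        int         attempt = 1;

        for (;;)
        {
            PGresult   *res = PQexec(conn, sql);
            const char *sqlstate;

            if (PQresultStatus(res) != PGRES_FATAL_ERROR || attempt >= max_attempts)
                return res;     /* success, other error, or out of tries */

            sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
            if (sqlstate == NULL ||
                (strcmp(sqlstate, "40001") != 0 &&  /* serialization_failure */
                 strcmp(sqlstate, "40P01") != 0))   /* deadlock_detected */
                return res;     /* not a retryable condition */

            PQclear(res);
            attempt++;
            /* sleep/back off here, then re-run the whole transaction */
        }
    }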

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] assessing parallel-safety

2015-03-19 Thread Amit Kapila
On Wed, Feb 18, 2015 at 10:53 PM, Robert Haas robertmh...@gmail.com wrote:

 On Sun, Feb 15, 2015 at 6:24 PM, Noah Misch n...@leadboat.com wrote:
  It's important for parallelism to work under the extended query
protocol, and
  that's nontrivial.  exec_parse_message() sets cursorOptions.
  exec_bind_message() runs the planner.  We don't know if a parallel plan
is
  okay before seeing max_rows in exec_execute_message().

 Yes, that's a problem.  One could require users of the extended query
 protocol to indicate their willingness to accept a parallel query plan
 when sending the bind message by setting the appropriate cursor
 option; and one could, when that option is specified, refuse execute
 messages with max_rows != 0.  However, that has the disadvantage of
 forcing all clients to be updated for the new world order.

Another way could be when max_rows != 0, then inform executor to
just execute the plan locally, which means run the parallel seq scan
node in master only.  We already need to do the same when no workers
are available at the time of execution.

I think we can inform executor by setting this information in Estate
during ExecutorRun.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] GSoC 2015: Extra Jsonb functionality

2015-03-19 Thread Alvaro Herrera
Dmitry Dolgov wrote:

 * jsonb_slice - extract a subset of an jsonb
   Example of usage:
 
 =# jsonb_slice('{a: 1, b: {c: 2}, d: {f: 3}}'::jsonb,
 ARRAY['b', 'f', 'x']);
 
jsonb_slice
 ---
   {b: {c: 2}, f: 3}

This is a bit strange.  Why did f get flattened out of d?  Is the
resulting document still valid for the purposes of an application using
it?  I think I'd expect the result to be {b: {c: 2}, d: {f: 3}}

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] How about to have relnamespace and relrole?

2015-03-19 Thread Alvaro Herrera
Kyotaro HORIGUCHI wrote:

 # I hope the CF app will add the author as a receiver when issuing
 # a mail.

Moreover, it should add everyone who was in To, From, CC in the email
that the commitfest app is replying to; if the patch authors are not
listed, add them as well.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] GSoC 2015: Extra Jsonb functionality

2015-03-19 Thread Dmitry Dolgov
Hi, everyone

I'm Dmitry Dolgov, a PhD student at KemSU, Russia. I would like to
submit a proposal to the GSoC about additional jsonb functionality, and I
want to get any feedback and thoughts about this.


Re: [HACKERS] GSoC 2015: Extra Jsonb functionality

2015-03-19 Thread Dmitry Dolgov
Synopsis: Although jsonb was introduced in PostgreSQL 9.4, there are
several functions that are still missing. Part of this missing functionality
was implemented in this extension [1] and the corresponding patch [2]. The
purpose of this work is to implement the rest of the functions according to
importance.

Benefits: New functionality that can make the usage of jsonb more
convenient.

Deliverables: Implementation of the following functions (in the form of an
extension):
* jsonb_delete_jsonb - delete key/value pairs based on the other jsonb.
  Example of usage:

=# jsonb_delete_jsonb('{a: 1, b: {c: 2, d: 3}, f: [4,
5]}'::jsonb, '{a: 4, f: [4, 5], c: 2}'::jsonb);

 jsonb_delete_jsonb
---
 {a: 1, b: {c: 2, d: 3}}

* jsonb_slice - extract a subset of an jsonb
  Example of usage:

=# jsonb_slice('{a: 1, b: {c: 2}, d: {f: 3}}'::jsonb,
ARRAY['b', 'f', 'x']);

   jsonb_slice
---
  {b: {c: 2}, f: 3}

* jsonb_to_array - get jsonb keys and values as an array
  Example of usage:

=# jsonb_to_array('{a: 1, b: {c: 2}, d: [3, 4]}'::jsonb);

jsonb_to_array
--
   {a, 1, b, c, 2, d, 3, 4}

* jsonb_keys - get jsonb keys as an array
  Example of usage:

=# jsonb_keys('{a: 1, b: {c: 2}}'::jsonb);

jsonb_keys
-
{a, b, c}

* jsonb_vals - get jsonb values as an array
  Example of usage:

=# jsonb_vals('{a: 1, b: {c: 2}, d: [3, 4]}'::jsonb);

jsonb_vals
--
   {1, 2, 3, 4}

* jsonb_add_to_path - append a new element to jsonb value at the
specific path
  Example of usage:

   =# jsonb_add_to_path('{a: 1, b: {c: [d, f]}}'::jsonb, {b,
c}::text[], '[g]'::jsonb);

   jsonb_add_to_path
---
   {a: 1, b: {c: [d, f, g]}}

* jsonb_intersection - extract intersecting key/value pairs
  Example of usage:

=# jsonb_intersection('{a: 1, b: 2, d: {f: 3}, g: [4,
5]}'::jsonb, '{b: 2, c: 3, f: 3, g: [4, 5]}'::jsonb);

 jsonb_intersection

{b: 2, g: [4, 5]}
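
(For contrast, only the top-level part of some of this is reachable with the
existing 9.4 functions; a rough sketch with a hypothetical helper name, which
also shows what the recursive jsonb_keys above would add:

    CREATE FUNCTION jsonb_top_level_keys(j jsonb) RETURNS text[] AS $$
        SELECT array_agg(k) FROM jsonb_object_keys(j) AS t(k);
    $$ LANGUAGE sql IMMUTABLE;

    SELECT jsonb_top_level_keys('{"a": 1, "b": {"c": 2}}'::jsonb);
    -- {a,b}     (the nested key "c" is not reported)
)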

Schedule: I suppose this can take 2-3 months for me. First of all I'll
implement the jsonb_delete_jsonb, jsonb_slice, jsonb_to_array, jsonb_keys,
and jsonb_vals functions (just because it is almost clear how to implement
them). Each function will require tests, and certainly some time will be
spent at the end on improvements to the extension as a whole.

Unfortunately, this proposal isn't submitted to the GSoC system yet (I'm
planning to do this next Tuesday).

[1]: https://github.com/erthalion/jsonbx
[2]: https://commitfest.postgresql.org/4/154/

On 19 March 2015 at 20:16, Dmitry Dolgov 9erthali...@gmail.com wrote:

 Hi, everyone

 I'm Dmitry Dolgov, a phd student at the KemSU, Russia. I would like to
 submit a proposal to the GSoC about additional jsonb functionality, and I
 want to get any feedback and thougths about this.




Re: [HACKERS] GSoC 2015: Extra Jsonb functionality

2015-03-19 Thread Thom Brown
On 19 March 2015 at 13:23, Dmitry Dolgov 9erthali...@gmail.com wrote:
 Synopsis: Althrough Jsonb was introduced in PostgreSQL 9.4, there are
 several functions, that still missing. Partially this missing functionality
 was implemented in this extension [1] and the corresponding patch [2]. The
 purpose of this work is to implement the rest of functions accordingly to
 importance.

 Benefits: New functionality, than can made the usage of the jsonb more
 convenient.

 Deliverables: Implementation of the following functions (in the form of an
 extension
 * jsonb_delete_jsonb - delete key/value pairs based on the other jsonb.
   Example of usage:

 =# jsonb_delete_jsonb('{a: 1, b: {c: 2, d: 3}, f: [4,
 5]}'::jsonb, '{a: 4, f: [4, 5], c: 2}'::jsonb);

  jsonb_delete_jsonb
 ---
  {a: 1, b: {c: 2, d: 3}}

Perhaps it's my misunderstanding, but this would seem to be more of an
intersection operation on keys rather than a delete.

 * jsonb_slice - extract a subset of an jsonb
   Example of usage:

 =# jsonb_slice('{a: 1, b: {c: 2}, d: {f: 3}}'::jsonb,
 ARRAY['b', 'f', 'x']);

jsonb_slice
 ---
   {b: {c: 2}, f: 3}

 * jsonb_to_array - get jsonb keys and values as an array
   Example of usage:

 =# jsonb_to_array('{a: 1, b: {c: 2}, d: [3, 4]}'::jsonb);

 jsonb_to_array
 --
{a, 1, b, c, 2, d, 3, 4}

Is there a use-case for the example you've given above, where you take
JSON containing objects and arrays, and flatten them out into a
one-dimensional array?


 * jsonb_keys - get jsonb keys as an array
   Example of usage:

 =# jsonb_keys('{a: 1, b: {c: 2}}'::jsonb);

 jsonb_keys
 -
 {a, b, c}

 * jsonb_vals - get jsonb values as an array
   Example of usage:

 =# jsonb_vals('{a: 1, b: {c: 2}, d: [3, 4]}'::jsonb);

 jsonb_vals
 --
{1, 2, 3, 4}

 * jsonb_add_to_path - append a new element to jsonb value at the
 specific path
   Example of usage:

=# jsonb_add_to_path('{a: 1, b: {c: [d, f]}}'::jsonb, {b,
 c}::text[], '[g]'::jsonb);

jsonb_add_to_path
 ---
{a: 1, b: {c: [d, f, g]}}

What should happen if g or {g} were used instead?

 * jsonb_intersection - extract intersecting key/value pairs
   Example of usage:

 =# jsonb_intersection('{a: 1, b: 2, d: {f: 3}, g: [4,
 5]}'::jsonb, '{b: 2, c: 3, f: 3, g: [4, 5]}'::jsonb);

  jsonb_intersection
 
 {b: 2, g: [4, 5]}

Could there be a corresponding jsonb_except function which does the
opposite (i.e. returns everything on the left side except where it
matches with the right)?

Thanks.

-- 
Thom




Re: [HACKERS] Collation-aware comparisons in GIN opclasses

2015-03-19 Thread Bruce Momjian
On Sun, Sep 28, 2014 at 10:33:33PM -0400, Bruce Momjian wrote:
 On Mon, Sep 15, 2014 at 03:42:20PM -0700, Peter Geoghegan wrote:
  On Mon, Sep 15, 2014 at 12:45 PM, Tom Lane t...@sss.pgh.pa.us wrote:
   No.  And we don't know how to change the default opclass without
   breaking things, either.
  
  Is there a page on the Wiki along the lines of things that we would
  like to change if ever there is a substantial change in on-disk format
  that will break pg_upgrade? ISTM that we should be intelligently
  saving those some place, just as Redhat presumably save up
  ABI-breakage over many years for the next major release of RHEL.
  Alexander's complaint is a good example of such a change, IMV. Isn't
  it more or less expected that the day will come when we'll make a
  clean break?
 
 It is on the TODO page under pg_upgrade:
 
   Desired changes that would prevent upgrades with pg_upgrade 

Item added to TODO list.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] [PATCH] PostgreSQL 9.4 mmap(2) performance regression on FreeBSD...

2015-03-19 Thread Robert Haas
On Thu, Mar 19, 2015 at 12:16 PM, Bruce Momjian br...@momjian.us wrote:
 On Mon, Oct 13, 2014 at 11:35:18AM -0400, Bruce Momjian wrote:
 On Mon, Oct 13, 2014 at 05:21:32PM +0200, Andres Freund wrote:
   If we have it, we should improve it, or remove it.  We might want to use
   this code for something else in the future, so it should be improved
   where feasible.
 
  Meh. We don't put in effort into code that doesn't matter just because
  it might get used elsewhere some day. By that argument we'd need to
  performance optimize a lot of code. And actually, using that code
  somewhere else is more of a counter indication than a pro
  argument. MAP_NOSYNC isn't a general purpose flag.

 The key is that this is platform-specific behavior, so if we should use
 a flag to use it right, we should.  You are right that optimizing
 rarely used code with generic calls isn't a good use of time.

 I have adjusted Sean's mmap() options patch to match our C layout and
 plan to apply this to head, as it is from August.

Looks great, thanks for taking care of that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] GSoC 2015: Extra Jsonb functionality

2015-03-19 Thread Alvaro Herrera
Thom Brown wrote:
 On 19 March 2015 at 14:35, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
  Thom Brown wrote:
  On 19 March 2015 at 14:12, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
   Dmitry Dolgov wrote:
  
   * jsonb_slice - extract a subset of an jsonb
 Example of usage:

 Okay, so it pulls it all parents?  So I guess you'd get this too:
 
 SELECT jsonb_slice('{a: 1, b: {c: 2}, d: {f: 3}, f:
 4}'::jsonb, ARRAY['b', 'f', 'x']);
 
   jsonb_slice
 
  {a: 1, b: {c: 2}, d: {f: 3}, f: 4}

Yeah, except a wouldn't be output, of course.  (The example gets more
interesting if d contains more members than just f.  Those would not
get output.)


  Although I'm still a bit confused about f being produced.
 
  I guess you could say that the second argument is an array of element
  paths, not key names.  So to get the result I suggest, you would have to
  use ARRAY['{b}', '{d,f}', '{x}'].  (Hm, this is a non-rectangular
  array actually... I guess I'd go for ARRAY['b', 'd//f', 'x'] instead, or
  whatever the convention is to specify a json path).
 
 I think that's where jsquery would come in handy.

If that's what we think, then perhaps we shouldn't accept jsonb_slice at
all because of ambiguous mode of operation.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] GSoC 2015: Extra Jsonb functionality

2015-03-19 Thread Thom Brown
On 19 March 2015 at 14:12, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
 Dmitry Dolgov wrote:

 * jsonb_slice - extract a subset of an jsonb
   Example of usage:

 =# jsonb_slice('{a: 1, b: {c: 2}, d: {f: 3}}'::jsonb,
 ARRAY['b', 'f', 'x']);

jsonb_slice
 ---
   {b: {c: 2}, f: 3}

 This is a bit strange.  Why did f get flattened out of d?  Is the
 resulting document still valid for the purposes of an application using
 it?  I think I'd expect the result to be {b: {c: 2}, d: {f: 3}}

Why would d be output when it wasn't in the requested slice?
Although I'm still a bit confused about f being produced.

-- 
Thom




Re: [HACKERS] GSoC 2015: Extra Jsonb functionality

2015-03-19 Thread Alvaro Herrera
Thom Brown wrote:
 On 19 March 2015 at 14:12, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
  Dmitry Dolgov wrote:
 
  * jsonb_slice - extract a subset of an jsonb
Example of usage:
 
  =# jsonb_slice('{a: 1, b: {c: 2}, d: {f: 3}}'::jsonb,
  ARRAY['b', 'f', 'x']);
 
 jsonb_slice
  ---
{b: {c: 2}, f: 3}
 
  This is a bit strange.  Why did f get flattened out of d?  Is the
  resulting document still valid for the purposes of an application using
  it?  I think I'd expect the result to be {b: {c: 2}, d: {f: 3}}
 
 Why would d be output when it wasn't in the requested slice?

Because it contains f.

 Although I'm still a bit confused about f being produced.

I guess you could say that the second argument is an array of element
paths, not key names.  So to get the result I suggest, you would have to
use ARRAY['{b}', '{d,f}', '{x}'].  (Hm, this is a non-rectangular
array actually... I guess I'd go for ARRAY['b', 'd//f', 'x'] instead, or
whatever the convention is to specify a json path).
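
(For reference, the convention jsonb already uses elsewhere is a text[] of
path elements, as taken by the #> operator:

    SELECT '{"a": 1, "b": {"c": 2}, "d": {"f": 3}}'::jsonb #> '{d,f}';
    -- returns 3
)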

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] assessing parallel-safety

2015-03-19 Thread Amit Kapila
On Thu, Mar 19, 2015 at 7:05 PM, Amit Kapila amit.kapil...@gmail.com
wrote:

 On Wed, Feb 18, 2015 at 10:53 PM, Robert Haas robertmh...@gmail.com
wrote:
 
  On Sun, Feb 15, 2015 at 6:24 PM, Noah Misch n...@leadboat.com wrote:
   It's important for parallelism to work under the extended query
protocol, and
   that's nontrivial.  exec_parse_message() sets cursorOptions.
   exec_bind_message() runs the planner.  We don't know if a parallel
plan is
   okay before seeing max_rows in exec_execute_message().
 
  Yes, that's a problem.  One could require users of the extended query
  protocol to indicate their willingness to accept a parallel query plan
  when sending the bind message by setting the appropriate cursor
  option; and one could, when that option is specified, refuse execute
  messages with max_rows != 0.  However, that has the disadvantage of
  forcing all clients to be updated for the new world order.

 Another way could be when max_rows != 0, then inform executor to
 just execute the plan locally, which means run the parallel seq scan

typo
/parallel/partial


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] cancelling statement due to user request error occurs but the transaction has committed.

2015-03-19 Thread Bruce Momjian
On Thu, Mar 19, 2015 at 07:54:02AM -0400, Robert Haas wrote:
 On Wed, Mar 18, 2015 at 10:56 PM, Bruce Momjian br...@momjian.us wrote:
  I have researched this issue originally reported in June of 2014 and
  implemented a patch to ignore cancel while we are completing a commit.
  I am not clear if this is the proper place for this code, though a
  disable_timeout() call on the line above suggests I am close.  :-)
 
 This would also disable cancel interrupts while running AFTER
 triggers, which seems almost certain to be wrong.  TBH, I'm not sure
 why the existing HOLD_INTERRUPTS() in CommitTransaction() isn't
 already preventing this problem.  Did you investigate that at all?

Yes, the situation is complex, and was suggested by the original poster.
The issue with CommitTransaction() is that it only _holds_ the signal
--- it doesn't clear it.  Now, since there are very few
CHECK_FOR_INTERRUPTS() calls in the typical commit process flow, the
signal is normally erased.  However, if log_duration or
log_min_duration_statement are set, they call ereport, which calls
errfinish(), which has a call to CHECK_FOR_INTERRUPTS().  

First attached patch is more surgical and clears a possible cancel
request before we report the query duration in the logs --- this doesn't
affect any after triggers that might include CHECK_FOR_INTERRUPTS()
calls we want to honor.

Another approach would be to have CommitTransaction() clear any pending
cancel before it calls RESUME_INTERRUPTS().  The second attached patch
takes that approach, and also works.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
new file mode 100644
index 33720e8..9521c48
*** a/src/backend/tcop/postgres.c
--- b/src/backend/tcop/postgres.c
*** exec_simple_query(const char *query_stri
*** 1163,1168 
--- 1163,1174 
  		NullCommand(dest);
  
  	/*
+ 	 * We have already committed, so clear any cancel requests
+ 	 * that might be processed by the ereport() calls below.
+ 	 */
+ 	QueryCancelPending = false;
+ 
+ 	/*
  	 * Emit duration logging if appropriate.
  	 */
  	switch (check_log_duration(msec_str, was_logged))
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
new file mode 100644
index 1495bb4..9b6da95
*** a/src/backend/access/transam/xact.c
--- b/src/backend/access/transam/xact.c
*** CommitTransaction(void)
*** 1958,1963 
--- 1958,1969 
  	 */
  	s-state = TRANS_DEFAULT;
  
+ 	/*
+ 	 * We have already committed, so clear any cancel requests
+ 	 * that might be processed later.
+ 	 */
+ 	QueryCancelPending = false;
+ 
  	RESUME_INTERRUPTS();
  }
  



Re: [HACKERS] ERRCODE_T_R_DEADLOCK_DETECTED

2015-03-19 Thread Andres Freund
On 2015-03-19 23:31:21 +0900, Tatsuo Ishii wrote:
  That SQLSTATE value is intended to be used where the transaction
  has failed because it was run concurrently with some other
  transaction, rather than before or after it; and it is intended to
  suggest that the transaction may succeed if run after the competing
  transaction has completed.  If those apply, it seems like the right
  SQLSTATE.  A user can certainly distinguish between the conditions
  by looking at the error messages.
 
 It is sad for users the only way to distinguish the conditions is by
 looking at the error messages. They want to know the root of the
 problem.

Sure. It's always a balance. If you go to the extreme of your argument
every possible error gets one individual error code. But then error
handling gets too complex.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] ERRCODE_T_R_DEADLOCK_DETECTED

2015-03-19 Thread Tatsuo Ishii
 That SQLSTATE value is intended to be used where the transaction
 has failed because it was run concurrently with some other
 transaction, rather than before or after it; and it is intended to
 suggest that the transaction may succeed if run after the competing
 transaction has completed.  If those apply, it seems like the right
 SQLSTATE.  A user can certainly distinguish between the conditions
 by looking at the error messages.

It is sad for users that the only way to distinguish the conditions is by
looking at the error messages. They want to know the root of the
problem.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




Re: [HACKERS] Parallel Seq Scan

2015-03-19 Thread Robert Haas
On Wed, Mar 18, 2015 at 11:43 PM, Amit Kapila amit.kapil...@gmail.com wrote:
 Patch fixes the problem and now for Rescan, we don't need to Wait
 for workers to finish.

 Assuming this actually fixes the problem, I think we should back-patch
 it into 9.4.

 +1

OK, done.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] flags argument for dsm_create

2015-03-19 Thread Andres Freund
On 2015-03-19 12:10:03 -0400, Robert Haas wrote:
 On Thu, Mar 19, 2015 at 11:25 AM, Andres Freund and...@2ndquadrant.com 
 wrote:
  On 2015-03-19 11:21:45 -0400, Robert Haas wrote:
  One question I struggled with is whether to keep the existing
  dsm_create() signature intact and add a new function
  dsm_create_extended().  I eventually decided against it.  The
  dsm_create() API is relatively new at this point, so there probably
  aren't too many people who will be inconvenienced by an API break now.
If we go ahead and create dsm_create_extended() now, and then need
  to make another API change down the line, I doubt there will be much
  support for dsm_create_extended2() or whatever.  So my gut is that
  it's better to just change this outright, and keep
  dsm_create_extended() as an idea for the future.  But I could go the
  other way on that if people feel strongly about it.
 
  +1 for a clear API break.
 
 I'm slightly confused.  Does that mean just change it or does that
 mean add dsm_create_extended instead?

The former.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] flags argument for dsm_create

2015-03-19 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
 Robert Haas robertmh...@gmail.com writes:
  I'm slightly confused.  Does that mean just change it or does that
  mean add dsm_create_extended instead?
 
 FWIW, I vote for just change it.  We change C-level APIs all the time,
 and this function has surely not got either the longevity nor the wide
 usage that might argue for keeping its API sancrosanct.

Agreed.

Thanks,

Stephen




Re: [HACKERS] GSoC - Idea Discussion

2015-03-19 Thread Tomas Vondra
Hi Hitesh,

On 18.3.2015 21:11, hitesh ramani wrote:
 Hello devs,
 
 As stated earlier I was thinking to propose the integration of
 Postgres and CUDA for faster execution of order by queries thru
 optimizing the sorting code and sorting it with CUDA. I saw and tried
 to run PG Strom and ran into issues. Moreover, PG Strom is
 implemented in OpenCL, not CUDA.

Could you please elaborate on why you chose CUDA, an NVIDIA-only
technology, rather than OpenCL, which is supported by a much wider range of
companies and projects? Why do you consider OpenCL unsuitable?

Not that CUDA is bad - it certainly works better in some scenarios, but
this is a cost/benefits question, and it only works with devices
manufactured by a single company. That significantly limits the
usefulness of the work, IMHO.

You mention that you ran into issues with PG Strom.  What issues?

 
 I have hardware to run CUDA and currently I'm at a point where I
 have almost integrated Postgres and CUDA. This opens up gates for a
 lot of features which can be optimized thru CUDA and parallel
 processing, though here I only want to focus on sorting, hence kind
 of feasible for the time period.

Can we see some examples, what this actually means? What you can and
can't do at this point, etc.? Can you share some numbers how this
improves the performance?

 
 As I did some research, I found CUDA is more efficient in not just
 the parallel performance but data transfer latency too. My idea is to
 create a branch of Postgres with the CUDA integrated code.


More efficient than what?

 
 For the feasibility, I guess it's very much feasible because I've
 almost integrated CUDA execution and the code needs to be optimized
 as per CUDA.

That's really difficult to judge, because you have not provided any
source code, examples or anything else to support this.

 
 Please give in your valuable suggestions and views on this.

From where I sit, this looks interesting, but more as a research
project than something that can be integrated into PostgreSQL in
the foreseeable future. Not sure that's what GSoC is intended for.

Also, we badly need more details on this - current status, examples, and
especially project plan explaining the scope. It's impossible to say
whether the sort can be implemented within the GSoC time frame.


-- 
Tomas Vondra                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] flags argument for dsm_create

2015-03-19 Thread Robert Haas
Discussion on the parallel sequential scan thread has revealed the
need for a way to make dsm_create() return NULL, instead of failing,
when we hit the system-wide maximum on the number of dynamic shared
memory segments than can be created.  I've developed a small patch for
this which I attach here.  It adds a second argument to dsm_create(),
an integer flags argument.  I would like to go ahead and commit this
more or less immediately if there are not objections.
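
The intended calling pattern looks roughly like this (a sketch only; see the
attached patch for the actual flag name and semantics):

    dsm_segment *seg;

    seg = dsm_create(segsize, DSM_CREATE_NULL_IF_MAXSEGMENTS);
    if (seg == NULL)
    {
        /* hit the system-wide segment limit: fall back to running
         * without the dynamic shared memory segment */
    }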

One question I struggled with is whether to keep the existing
dsm_create() signature intact and add a new function
dsm_create_extended().  I eventually decided against it.  The
dsm_create() API is relatively new at this point, so there probably
aren't too many people who will be inconvenienced by an API break now.
  If we go ahead and create dsm_create_extended() now, and then need
to make another API change down the line, I doubt there will be much
support for dsm_create_extended2() or whatever.  So my gut is that
it's better to just change this outright, and keep
dsm_create_extended() as an idea for the future.  But I could go the
other way on that if people feel strongly about it.

Thanks,

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


dsm-create-flags.patch
Description: binary/octet-stream



Re: [HACKERS] assessing parallel-safety

2015-03-19 Thread Amit Kapila
On Wed, Mar 18, 2015 at 9:31 PM, Robert Haas robertmh...@gmail.com wrote:

 On Tue, Mar 17, 2015 at 9:48 AM, Robert Haas robertmh...@gmail.com
wrote:
  On Tue, Mar 17, 2015 at 2:26 AM, Noah Misch n...@leadboat.com wrote:
  Neither that rule, nor its variant downthread, would hurt operator
authors too
  much.  To make the planner categorically parallel-safe, though, means
limiting
  evaluate_function() to parallel-safe functions.  That would
dramatically slow
  selected queries.  It's enough for the PL scenario if planning a
parallel-safe
  query is itself parallel-safe.  If the planner is parallel-unsafe when
  planning a parallel-unsafe query, what would suffer?
 
  Good point.  So I guess the rule can be that planning a parallel-safe
  query should be parallel-safe.  From there, it follows that estimators
  for a parallel-safe operator must also be parallel-safe.  Which seems
  fine.

 More work is needed here, but for now, here is a rebased patch, per
 Amit's request.


Thanks for rebasing the patch.  I have done some performance testing
of this + parallel-mode-v8.1.patch to see the impact of these patches
for non-parallel statements.

First set of tests are done with pgbench read-only workload

Test Environment hydra: (IBM POWER-7 16 cores, 64 hardware threads)
The data below is the median of 3 runs (detailed data for the 3 runs can be
found in the attached document perf_data_assess_parallel_safety.ods):

Shared_buffers- 8GB
Scale Factor - 100
HEAD - Commit - 8d1f2390


  Clients       1        8       16       32
  HEAD       9098    70080   129891   195862
  PATCH      8960    69678   130591   195570

This data shows there is no performance impact (apart from 1~2%
run-to-run difference, which I consider as noise) of these patches
on read only workload.
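
(These runs correspond to a select-only pgbench workload at each client
count; the exact invocation isn't shown above, but it would be along these
lines, with -c/-j varied from 1 to 32 and the duration being an assumption:

    pgbench -i -s 100 postgres
    pgbench -S -c 8 -j 8 -T 300 postgres
)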


The second set of tests consists of long SQL statements with many expressions
in the WHERE clause, to check the impact of the extra pass over the query tree
in the planner to decide whether it contains any parallel-unsafe function.

Before executing below tests, execute test_prepare_parallel_safety.sql
script

Test-1
---
SQL statement containing 100 expressions (expressions are such that they
have CoerceViaIO node (presumably the costliest one in function
contain_parallel_unsafe_walker())) in where clause, refer attached sql
script (test_parallel_safety.sql)

Data for 10 runs (Highest of 10 runs):
HEAD - 1.563 ms
PATCH - 1.589 ms

Test-2

Similar SQL statement as used for Test-1, but containing 500 expressions,
refer attached script (test_more_parallel_safety.sql)

Data for 5 runs (Median of 5 runs):
HEAD - 5.011 ms
PATCH - 5.201 ms

The second set of tests shows that there is about a 1.5 to 3.8% performance
regression with the patches.  I think having such long expressions, and those
too containing CoerceViaIO nodes, is a very unusual scenario, so this
doesn't seem to be too much of a concern.


Apart from this, I have one observation:
static int
exec_stmt_execsql(PLpgSQL_execstate *estate,
                  PLpgSQL_stmt_execsql *stmt)
{
    ParamListInfo paramLI;
    long        tcount;
    int         rc;
    PLpgSQL_expr *expr = stmt->sqlstmt;

    /*
     * On the first call for this statement generate the plan, and detect
     * whether the statement is INSERT/UPDATE/DELETE
     */
    if (expr->plan == NULL)
    {
        ListCell   *l;

        exec_prepare_plan(estate, expr, 0);

Shouldn't we need parallelOk in exec_stmt_execsql()
to pass the cursor option to the above function, as we have done in
exec_run_select()?


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


perf_data_assess_parallel_safety.ods
Description: application/vnd.oasis.opendocument.spreadsheet


test_prepare_parallel_safety.sql
Description: Binary data


test_parallel_safety.sql
Description: Binary data


test_more_parallel_safety.sql
Description: Binary data
