Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Szymon Guz
On 3 September 2014 01:08, Jan Wieck j...@wi3ck.info wrote:

 On 09/02/2014 06:56 PM, Andrew Dunstan wrote:

 People are free to do what they want, but to my mind that would be a
 massive waste of resources, and probably imposing a substantial extra
 maintenance burden on the core committers.


 I hear you and agree to some degree.

 But at the same time I remember that one of the strengths of Postgres used
 to be its ability to incorporate new ideas.

 This seems to be one of those cases.

 Instead of forking plpgsql into plpgsql2, what about designing a completely
 new PL/postgres from scratch? It will only take 3-10 years, but I bet it will
 be worth it after all. And I mean that. No sarcasm.


And how would it be better than the already existing plperl/plpython?

- Szymon


Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Joel Jacobson
On Wed, Sep 3, 2014 at 7:54 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 I am not against improving PL/pgSQL. And I repeat what can be done, and
 can be done early:

 a) an ASSERT clause -- with some other modifications to allow better static
 analysis of DML statements, and to enforce checks at runtime.

 b) an #option or PRAGMA clause with a function-scoped GUC that enforces a
 check on processed rows after any DML statement

 c) maybe the introduction of an automatic variable ROW_COUNT as a shortcut
 for GET DIAGNOSTICS rc = ROW_COUNT

 If you need more, and some users will want more, then it is really a job for
 a new language.
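
As an illustration of item (c), here is a rough sketch of what the shorthand could save; the ROW_COUNT automatic variable is hypothetical syntax from the proposal, and the table and function names are invented:

```plpgsql
-- today: an explicit GET DIAGNOSTICS after the DML statement
CREATE OR REPLACE FUNCTION withdraw(aid int, amount numeric) RETURNS void AS $$
DECLARE
    rc bigint;
BEGIN
    UPDATE accounts SET balance = balance - amount WHERE id = aid;
    GET DIAGNOSTICS rc = ROW_COUNT;
    IF rc <> 1 THEN
        RAISE EXCEPTION 'expected 1 row, got %', rc;
    END IF;
END;
$$ LANGUAGE plpgsql;

-- with the proposed automatic variable (hypothetical syntax), the body
-- would shrink to:
--     UPDATE accounts SET balance = balance - amount WHERE id = aid;
--     IF ROW_COUNT <> 1 THEN
--         RAISE EXCEPTION 'expected 1 row, got %', ROW_COUNT;
--     END IF;
```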

You fail to illustrate *why* it's a job for a new language.
All improvements suggested above are possible with plpgsql, and plpgsql
*should* be improved in these ways; that I agree with.

But the ambition of 100% backwards compatibility puts hard limits on
what's possible,
and if we can accept (100%-X) backwards compatibility where X is a small number,
then many more ideas become possible, and that's why plpgsql2 is a good idea.

Hopefully, most of the plpgsql2 changes can be turned on/off in
plpgsql with a PRAGMA clause or a GUC,
but that will be messier than a good default behaviour.

I'm in favour of Tom's idea: to merely make the plpgsql2 language a
way of explicitly saying you want
a specific combination of features/behaviour/settings which we
can implement in plpgsql's existing codebase.

Since there have been about 100 posts since Tom's post, maybe it's worth
repeating for those who missed it:

 What I would think about is

c) plpgsql and plpgsql2 are the same code base, with a small number
of places that act differently depending on the language version.

We could alternatively get the result by inventing a bunch of pragma
declarations, or some similar notation, that control the behavioral
changes one-at-a-time.  That might even be worth doing anyway, in
case somebody likes some of the ideas and others not so much.  But
I'd see the language version as a convenient shorthand for enabling a
specified collection of pretty-localized incompatible behavior changes.
If they're not pretty localized, there's going to be a barrier to
uptake, very comparable to the python3 analogy mentioned upthread.

regards, tom lane

I fully agree on this approach. It's maintainable and it will be
useful from day 1.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: proposal: ignore null fields in not relation type composite type based constructors

2014-09-03 Thread Jeevan Chalke
Hi Pavel,

Here are few more comments on new implementation.

1.
 /*
- * SQL function row_to_json(row)
+ * SQL function row_to_json(row record, pretty bool, ignore_nulls bool)
  */

In the above comment, the parameter name row should be changed to rowval.

2.
-DATA(insert OID = 3155 (  row_to_json   PGNSP PGUID 12 1 0 0 0 f f f f
t f s 1 0 114 2249 _null_ _null_ _null_ _null_ row_to_json _null_ _null_
_null_ ));
+DATA(insert OID = 3155 (  row_to_json   PGNSP PGUID 12 1 0 0 0 f f f f
t f s 1 0 114 2249 16 16 _null_ _null_ {rowval,pretty,ignore_nulls}
_null_ row_to_json _null_ _null_ _null_ ));

The number of arguments (pronargs) should be 3 now. It does get updated when
we recreate the function with default values, but we still should not have an
inconsistency here.

3.
 extern Datum row_to_json(PG_FUNCTION_ARGS);
 extern Datum row_to_json_pretty(PG_FUNCTION_ARGS);
+extern Datum row_to_json_pretty_choosy(PG_FUNCTION_ARGS);
 extern Datum to_json(PG_FUNCTION_ARGS);

With this new implementation, we have NOT added a row_to_json_pretty_choosy()
function, so that added line needs to be removed. Also, we now have only one
function, with default arguments, and have removed the row_to_json_pretty()
function as well, so its extern declaration needs to be removed too.

4.
Can we have a couple of test cases with named arguments, along with a test
that skips the pretty parameter?
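
For reference, the requested calls might look like this, assuming the proposed three-argument signature row_to_json(rowval, pretty, ignore_nulls) with defaults:

```sql
-- named notation, skipping the pretty parameter:
SELECT row_to_json(rowval => ROW(1, 'foo', NULL), ignore_nulls => true);

-- mixed notation:
SELECT row_to_json(ROW(1, 'foo', NULL), ignore_nulls => true);

-- positional call, all arguments:
SELECT row_to_json(ROW(1, 'foo', NULL), false, true);
```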


Thanks

-- 
Jeevan B Chalke
Principal Software Engineer, Product Development
EnterpriseDB Corporation
The Enterprise PostgreSQL Company


Re: [Fwd: Re: [HACKERS] proposal: new long psql parameter --on-error-stop]

2014-09-03 Thread Pavel Stehule
Hello

fixed ECHO, ECHO_HIDDEN, PROMPT

Regards

Pavel



2014-09-01 11:52 GMT+02:00 Fujii Masao masao.fu...@gmail.com:

 On Thu, Aug 28, 2014 at 9:34 PM, Pavel Stehule pavel.steh...@gmail.com
 wrote:
 
 
 
  2014-08-28 14:22 GMT+02:00 Fujii Masao masao.fu...@gmail.com:
 
  On Thu, Aug 28, 2014 at 5:48 AM, Pavel Stehule pavel.steh...@gmail.com
 
  wrote:
   comments?
 
  +fprintf(output, _("  ECHO               control what input is
  written to standard output [all, queries]\n"));
 
  The valid values in the help messages should be consistent with
  the values that the tab-completion displays. So in the case of ECHO,
  errors and none also should be added in the message. Thought?
 
  In the help messages of some psql variables like ECHO_HIDDEN, valid
  values are not explained. Why not?
 
 
  it is based on http://www.postgresql.org/docs/9.4/static/app-psql.html
 
  ECHO_HIDDEN
 
  When this variable is set and a backslash command queries the database,
 the
  query is first shown. This way you can study the PostgreSQL internals and
  provide similar functionality in your own programs. (To select this
 behavior
  on program start-up, use the switch -E.) If you set the variable to the
  value noexec, the queries are just shown but are not actually sent to the
  server and executed.
 
  There is no clear set of valid values :( .. When I found known fields
 in the docs, I used them.

 At least noexec seems to be documented as a valid value. Of course,
 it's better to document other valid values.

 Regards,

 --
 Fujii Masao

commit 17a0708a0466cc8ff9e8debd0a7e9062eebe3a61
Author: Pavel Stehule pavel.steh...@gooddata.com
Date:   Wed Aug 27 22:47:07 2014 +0200

access to help_variables and usage from psql via psql command

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index db314c3..9bb14e9 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -568,6 +568,15 @@ EOF
   /listitem
 /varlistentry
 
+<varlistentry>
+  <term><option>--help-variables</option></term>
+  <listitem>
+  <para>
+  Show help about <application>psql</application> variables,
+  and exit.
+  </para>
+  </listitem>
+</varlistentry>
   /variablelist
  /refsect1
 
@@ -2572,10 +2581,12 @@ testdb=&gt; <userinput>\setenv LESS -imx4F</userinput>
 
 
   <varlistentry>
-<term><literal>\?</literal></term>
+<term><literal>\? [ options | variables ]</literal></term>
 <listitem>
 <para>
-Shows help information about the backslash commands.
+Shows help information about the backslash commands.  This command can take the
+option <literal>variables</literal> or <literal>options</literal> to get help for
+psql configuration variables or psql command-line options.
 </para>
 </listitem>
   </varlistentry>
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index e16b4d5..987a79f 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -1503,7 +1503,19 @@ exec_command(const char *cmd,
 
 	/* \? -- slash command help */
 	else if (strcmp(cmd, "?") == 0)
-		slashUsage(pset.popt.topt.pager);
+	{
+		char	   *opt0 = psql_scan_slash_option(scan_state,
+							OT_NORMAL, NULL, false);
+
+		if (!opt0)
+			slashUsage(pset.popt.topt.pager);
+		else if (strcmp(opt0, "variables") == 0)
+			help_variables(pset.popt.topt.pager);
+		else if (strcmp(opt0, "options") == 0)
+			usage(pset.popt.topt.pager);
+		else
+			slashUsage(pset.popt.topt.pager);
+	}
 
 #if 0
 
diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
index f8f000f..4f29f2a 100644
--- a/src/bin/psql/help.c
+++ b/src/bin/psql/help.c
@@ -46,11 +46,12 @@
 #define ON(var) (var ? _("on") : _("off"))
 
 void
-usage(void)
+usage(unsigned short int pager)
 {
 	const char *env;
 	const char *user;
 	char	   *errstr;
+	FILE	   *output;
 
 	/* Find default user, in case we need it. */
 	user = getenv("PGUSER");
@@ -64,77 +65,82 @@ usage(void)
 		}
 	}
 
-	printf(_("psql is the PostgreSQL interactive terminal.\n\n"));
-	printf(_("Usage:\n"));
-	printf(_("  psql [OPTION]... [DBNAME [USERNAME]]\n\n"));
+	output = PageOutput(59, pager);
+
+	fprintf(output, _("psql is the PostgreSQL interactive terminal.\n\n"));
+	fprintf(output, _("Usage:\n"));
+	fprintf(output, _("  psql [OPTION]... [DBNAME [USERNAME]]\n\n"));
 
-	printf(_("General options:\n"));
+	fprintf(output, _("General options:\n"));
 	/* Display default database */
 	env = getenv("PGDATABASE");
 	if (!env)
 		env = user;
-	printf(_("  -c, --command=COMMAND    run only single command (SQL or internal) and exit\n"));
-	printf(_("  -d, --dbname=DBNAME      database name to connect to (default: \"%s\")\n"), env);
-	printf(_("  -f, --file=FILENAME      execute commands from file, then exit\n"));
-	printf(_("  -l, --list               list available databases, then exit\n"));
-	printf(_("  -v, --set=, --variable=NAME=VALUE\n"
-			 "                           set psql variable NAME to VALUE\n"));
-	printf(_("  -V, --version            output version information, then exit\n"));
-	printf(_("  -X, 

Re: [HACKERS] Re: proposal: ignore null fields in not relation type composite type based constructors

2014-09-03 Thread Pavel Stehule
Hi


2014-09-03 9:27 GMT+02:00 Jeevan Chalke jeevan.cha...@enterprisedb.com:

 Hi Pavel,

 Here are few more comments on new implementation.

 1.
  /*
 - * SQL function row_to_json(row)
 + * SQL function row_to_json(row record, pretty bool, ignore_nulls bool)
   */

 In the above comment, the parameter name row should be changed to rowval.

 2.
 -DATA(insert OID = 3155 (  row_to_json   PGNSP PGUID 12 1 0 0 0 f f f
 f t f s 1 0 114 2249 _null_ _null_ _null_ _null_ row_to_json _null_
 _null_ _null_ ));
 +DATA(insert OID = 3155 (  row_to_json   PGNSP PGUID 12 1 0 0 0 f f f
 f t f s 1 0 114 2249 16 16 _null_ _null_ {rowval,pretty,ignore_nulls}
 _null_ row_to_json _null_ _null_ _null_ ));

 The number of arguments (pronargs) should be 3 now. It does get updated
 when we recreate the function with default values, but we still should not
 have
 an inconsistency here.

 3.
  extern Datum row_to_json(PG_FUNCTION_ARGS);
  extern Datum row_to_json_pretty(PG_FUNCTION_ARGS);
 +extern Datum row_to_json_pretty_choosy(PG_FUNCTION_ARGS);
  extern Datum to_json(PG_FUNCTION_ARGS);

 With this new implementation, we have NOT added a row_to_json_pretty_choosy()
 function, so that added line needs to be removed. Also, we now have only one
 function, with default arguments, and have removed the row_to_json_pretty()
 function as well, so its extern declaration needs to be removed too.

 4.
 Can we have a couple of test cases with named arguments, along with a test
 that skips the pretty parameter?



done

Regards

Pavel



 Thanks

 --
 Jeevan B Chalke
 Principal Software Engineer, Product Development
 EnterpriseDB Corporation
 The Enterprise PostgreSQL Company



Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Pavel Stehule
2014-09-03 9:14 GMT+02:00 Joel Jacobson j...@trustly.com:

 On Wed, Sep 3, 2014 at 7:54 AM, Pavel Stehule pavel.steh...@gmail.com
 wrote:
  I am not against improving PL/pgSQL. And I repeat what can be done, and
  can be done early:
 
  a) an ASSERT clause -- with some other modifications to allow better
  static analysis of DML statements, and to enforce checks at runtime.
 
  b) an #option or PRAGMA clause with a function-scoped GUC that enforces
  a check on processed rows after any DML statement
 
  c) maybe the introduction of an automatic variable ROW_COUNT as a
  shortcut for GET DIAGNOSTICS rc = ROW_COUNT
 
  If you need more, and some users will want more, then it is really a job
  for a new language.

 You fail to illustrate *why* it's a job for a new language.
 All improvements suggested above are possible with plpgsql, and plpgsql
 *should* be improved in these ways; that I agree with.


ok, super



 But the ambition of 100% backwards compatibility puts hard limits on
 what's possible,
 and if we can accept (100%-X) backwards compatibility where X is a small
 number,
 then many more ideas become possible, and that's why plpgsql2 is a good
 idea.

 Hopefully, most of the plpgsql2 changes can be turned on/off in
 plpgsql with a PRAGMA clause or a GUC,
 but that will be messier than a good default behaviour.

 I'm in favour of Tom's idea: to merely make the plpgsql2 language a
 way of explicitly saying you want
 a specific combination of features/behaviour/settings which we
 can implement in plpgsql's existing codebase.

 Since there have been about 100 posts since Tom's post, maybe it's worth
 repeating for those who missed it:

  What I would think about is
 
 c) plpgsql and plpgsql2 are the same code base, with a small number
 of places that act differently depending on the language version.
 
 We could alternatively get the result by inventing a bunch of pragma
 declarations, or some similar notation, that control the behavioral
 changes one-at-a-time.  That might even be worth doing anyway, in
 case somebody likes some of the ideas and others not so much.  But
 I'd see the language version as a convenient shorthand for enabling a
 specified collection of pretty-localized incompatible behavior changes.
 If they're not pretty localized, there's going to be a barrier to
 uptake, very comparable to the python3 analogy mentioned upthread.
 
 regards, tom lane

 I fully agree on this approach. It's maintainable and it will be
 useful from day 1.


I can accept the technical solution, but I have a hard time with your vision
of plpgsql's future. I am afraid it is too specific to your use case.

When you use the name plpgsql2 you are saying that plpgsql2 is the successor
of plpgsql. That is very hard to accept. Any other name is not a problem for
me -- like plpgsql-safe-subset or something else.

Pavel


Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Joel Jacobson
On Wed, Sep 3, 2014 at 10:07 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 When you use name plpgsql2 you say, so plpgsql2 is successor plpgsql. It is
 very hard to accept it.  So any other name is not problem for me - like
 plpgsql-safe-subset or something else

plpgsql2 *is* the successor of plpgsql, that's why it has a 2 in the name.
Anything which is very different from plpgsql should instead get a
different name.

For all new users, having a convenient shorthand (plpgsql2) for enabling
whatever the project thinks is the best-practice collection of pragmas is
a simple and efficient way of helping them get the best possible
behaviour of the language when starting from scratch.
It also simplifies communication among developers: when they talk about
code written in plpgsql2, they will all know what they mean, instead of
having to describe the collection of pragmas they use in their code. That
also simplifies code examples, but most importantly, one does not have to
declare all the pragmas for each function, or worry that the pragmas in
the config file will ever change.

Once we have agreed upon plpgsql2, it will be a constant that never
breaks compatibility, and that's a good thing. Then we can all write new
code according to the updated spec, and hopefully we will not need a
plpgsql3 until the year 2030.




Re: [HACKERS] why after increase the hash table partitions, TPMC decrease

2014-09-03 Thread Amit Kapila
On Wed, Sep 3, 2014 at 8:44 AM, Xiaoyulei xiaoyu...@huawei.com wrote:


  BenchmarkSQL has about half reads. So I think it should be effective.
 
  I don't think BufFreelistLock takes much time; it just gets a buffer from
the list. It should be very fast.

Only in case all the data fits in shared buffers; otherwise it needs to
perform the clock sweep, which can be costly in certain cases.

 The test server has 2 CPUs with 12 cores each, 24 processors in
total. CPU idle time is over 50%. IO is only 10% (data is on SSD).

 I perf one process of pg. The hot spot is the hash search. The perf data
file is more than 1MB, so I do not attach it. I will send it separately.

Could you once check the callers of hash_search_with_hash_value()
as it gets called from multiple paths?  I am not able to view the
perf.data file you have sent.


Also, you might want to check the performance on 9.4 codebase,
as there are quite a few performance improvements in it.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


[HACKERS] Join push-down support for foreign tables

2014-09-03 Thread Shigeru Hanada
Hi all,

In 2011 I proposed join push-down support for foreign tables, which
would improve performance of queries that join foreign tables on
one server, but it was not finished in time.
This performance improvement would widen the application range of foreign
tables, so I'd like to tackle the work again.

The descriptions below are based on previous discussions and additional studies.

Background
==

At the moment FDWs can't handle joins, so every join is processed on
the local side even if the source relations are on the same server.  It's
apparently inefficient to fetch possible rows from the remote side, join
them locally, and discard those that don't match the join
condition.  If an FDW (typically an SQL-based FDW like postgres_fdw) can get
control of the JOIN operation, it can rewrite queries over the source tables
into a join query and avoid the transfer of non-matching rows.

With this improvement, most joins in usual use, especially joins
between large foreign tables which don't match much, would become
remarkably fast, for the reasons below.

a) less data transfer
Especially for inner joins, the result of a join is usually much smaller
than the source tables.  If the original target list doesn't contain the
join keys, the FDW might be able to omit them from the SELECT list of
remote queries because they are only necessary on the remote side.

b) more optimization on remote side
Join query would provide remote data source more optimization chances,
such as using index.

Changes expected


In the past development trial, these changes seem necessary at least.

(1) Add server oid field to RelOptInfo
This attribute is set only when the RelOptInfo is a joinrel and all
underlying base relations are foreign tables with the same server
oid.  This field is set through join consideration from lower (fewer
tables) join levels to higher ones, IOW from the bottom to the top.  If
all base relations joined in a query are on the same server, the top
RelOptInfo, which represents the final output, has a valid server oid.  In
that case, the whole query can be pushed down to the server and the user
gets the most efficient result.
get most efficient result.

A new helper function GetFdwRoutineByServerId(Oid serverid), which
returns the FdwRoutine of the given server oid, would be handy.

(2) Add new path node for foreign join
New path node ForeignJoinPath, which inherits JoinPath like other join
path nodes, represents a join between ForeignPath or ForeignJoinPath.
ForeignJoinPath has fdw_private list to hold FDW-specific information
through the path consideration phase.  This is similar to fdw_private
of ForeignPath path node.

This node cares only about the type of join, such as INNER JOIN and LEFT
OUTER JOIN, but doesn't care how to do it.  IOW a foreign join is not a
special case of existing join nodes such as nested loop, merge join and
hash join.  An FDW can implement a foreign join in an arbitrary way; for
instance, file_fdw can have an already-joined file for a particular
combination as an optimization, and postgres_fdw can generate a SELECT
query which contains a JOIN clause and avoid essentially unnecessary
data transfer.
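
For instance, given two foreign tables on the same remote server, the difference would be roughly as follows (table and column names invented for illustration):

```sql
-- the local query:
SELECT o.id, c.name
FROM orders o JOIN customers c ON o.cust_id = c.id;

-- today postgres_fdw issues two remote scans and joins locally:
--   remote query 1: SELECT id, cust_id FROM orders
--   remote query 2: SELECT id, name FROM customers

-- with join push-down, the FDW could send a single remote query and
-- transfer only the matching rows:
--   SELECT o.id, c.name
--   FROM orders o JOIN customers c ON o.cust_id = c.id
```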

At the moment I'm not sure whether we should support SEMI/ANTI joins in
the context of foreign joins.  It would require postgres_fdw (or other
SQL-based FDWs) to generate a query with a subquery connected by an
IN/NOT IN clause, but that seems too far to aim for in the first version.

We (and especially FDW authors) need to note that join push-down is
not the best strategy in some situations.  In most cases an OUTER JOIN
produces more data on the remote side than current FDWs transfer,
especially for FULL OUTER JOIN and CROSS JOIN (cartesian product).

(3) Add new plan node for foreign join
New plan node ForeignJoin, which inherits Join like other join plan
nodes.  This node is similar to other join plan nodes such as
NestLoop, MergeJoin and HashJoin, but it delegates actual processing
to the FDW associated with the server.

This means that new plan state node for ForeignJoin, say
ForeignJoinState, is also needed.

(4) Add new FDW API functions
Adding Join push-down support requires some functions to be added to
FdwRoutine to give control to FDWs.

a) GetForeignJoinPaths()
This allows FDWs to provide alternative join paths for a join
RelOptInfo.  This is called from add_paths_to_joinrel() after
considering other join possibilities, and FDW should call add_path()
for each possible foreign join path.  Foreign join paths are built
similarly to existing join paths, in a bottom-up manner.

FDWs may push ordered or unordered paths here, but the combinations of
sort keys can blow up easily if the FDW has no information about efficient
patterns such as remote indexes.  An FDW should not add too many paths, to
prevent exponential overhead in considering join combinations.

b) GetForeignJoinPlan()
This creates a ForeignJoin plan node from a ForeignJoinPath and other
planner information.

c) Executor functions for the ForeignJoin plan node
A set of functions for executing the ForeignJoin plan node is also needed.
Begin/ReScan/Iterate/End are basic 

Re: [HACKERS] implement subject alternative names support for SSL connections

2014-09-03 Thread Heikki Linnakangas

On 09/01/2014 09:14 PM, Alexey Klyukin wrote:

On Mon, Sep 1, 2014 at 10:39 AM, Alexey Klyukin al...@hintbits.com wrote:

On Fri, Aug 29, 2014 at 11:22 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:

Yeah, I think a certificate without CN should be supported. See also RFC 6125, section 
4.1. Rules [for issuers of certificates]:


5.  Even though many deployed clients still check for the CN-ID
within the certificate subject field, certification authorities
are encouraged to migrate away from issuing certificates that
represent the server's fully qualified DNS domain name in a
CN-ID.  Therefore, the certificate SHOULD NOT include a CN-ID
unless the certification authority issues the certificate in
accordance with a specification that reuses this one and that
explicitly encourages continued support for the CN-ID identifier
type in the context of a given application technology.



Certificates without a CN-ID are probably rare today, but they might start to 
appear in the future.


Ok, I will change a patch to add support for this clause.


Attached is a new version. I've changed the logic to check for the SAN
names first, and only check the common name if there is no match. The
error when the common name is missing is only shown if SAN section
does not contain any DNS names as well.


* It's ugly that the caller does the malloc and memcpy, and the 
certificate_name_entry_validate_match function then modifies its name 
argument. Move the malloc+memcpy inside the function.


* The error message in certificate_name_entry_validate_match says "SSL 
certificate's common name contains embedded null" even though it's also 
used for SANs.



The tricky part is the error
message if no match was found: initially, it only listed a single
common name, but now tracking all DNS names just for the sake of the
error message makes the code more bloated, so I'm wondering if simply
stating that there was no match, as implemented in the attached patch,
would be good enough?


Hmm. It would still be nice to say something about the certificate that 
was received. How about:


  server certificate with common name "%s" does not match host name "%s"

?

- Heikki





[HACKERS] TODO item for protocol revision: Negotiate encryption in connection handshake

2014-09-03 Thread Craig Ringer
Hi all

Another thing I keep wishing Pg's protocol had is an after-connection
negotiation for transport encryption, like STARTTLS.

Right now, the client has to guess if the server requires, permits, or
rejects SSL, and decide whether to start with SSL or !SSL. If that
fails, it has to try the other one.

The way it's managed in pg_hba.conf means that users usually just get
confusing errors like:

FATAL: no pg_hba.conf entry for host "192.168.0.1", user "postgres",
database "whatever", SSL off

without the client app being given the opportunity to be told by the
server "Please upgrade to transport-level security before proceeding."

I like how IMAP does it, where the server announces its capabilities.

Reasonable to aim for in a protocol v4?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] Commitfest status

2014-09-03 Thread Heikki Linnakangas
We now have 32 patches in the "Needs Review" state, and 7 of those don't 
have a reviewer assigned. They are:


1. Grouping Sets
2. hash join - dynamic bucket count
3. Enable WAL archiving even in standby
4. Selectivity estimation for inet operators
5. Better syntax for REINDEX
6. pgcrypto: support PGP signatures
7. pgcrypto: PGP armour headers

Out of these, the first 4 have generated a fair amount of discussion on 
the list, but no-one has dared to put down their name as a reviewer. 
What is the real status of these patches; are the authors really waiting 
for a review at this stage? Authors: please speak up and update the 
status to "Returned with Feedback" or "Waiting on Author" if you know 
how to proceed. Others: if you have been involved in the discussions, 
please sign up as a reviewer and make a decision on how to move forward 
with the patch.


I think the latter 3 patches are missing a reviewer because no-one is 
interested in them. There was some discussion on the REINDEX syntax, and 
whether we want the patch at all. The pgcrypto patches have received 
zero comments.


If you think that a feature is worthwhile, please sign up as a reviewer. 
If these patches don't have a reviewer assigned by the end of the week, 
I'm going to mark them as "Rejected" on the grounds that no-one cares 
about them.


- Heikki





Re: [HACKERS] Scaling shared buffer eviction

2014-09-03 Thread Amit Kapila
On Wed, Sep 3, 2014 at 1:45 AM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Aug 28, 2014 at 7:11 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
  I have updated the patch to address the feedback.  Main changes are:
 
  1. For populating freelist, have a separate process (bgreclaimer)
  instead of doing it by bgwriter.
  2. Autotune the low and high threshold values for buffers
  in freelist. I have used the formula as suggested by you upthread.
  3. Cleanup of locking regimen as discussed upthread (completely
  eliminated BufFreelist Lock).
  4. Improved comments and general code cleanup.

 +Background Reclaimer's Processing
 +-

 I suggest titling this section "Background Reclaim".

I don't mind changing it, but the current title is based on the similar
title "Background Writer's Processing", which is used in the previous
paragraph.  Is there a reason to title this one differently?

 +The background reclaimer is designed to move buffers to freelist that are

  I suggest replacing the first three words of this sentence with
 "bgreclaimer".

Again, what I have used matches the BgWriter's explanation. I thought
it would be better if the wording used is similar.


 +while (tmp_num_to_free  0)

 I am not sure it's a good idea for this value to be fixed at loop
 start and then just decremented.

It is based on what bgwriter does for num_to_scan, and calling it once
has the advantage that we need to take freelist_lck
just once.

 Shouldn't we loop and do the whole
 thing over once we reach the high watermark, only stopping when
 StrategySyncStartAndEnd() says num_to_free is 0?

Do you mean that for the high water mark check, we should
always consult StrategySyncStartAndEnd() rather than getting the
value at the beginning?

Are you thinking that somebody else might already have put some
buffers onto the freelist, changing the initially identified high water
mark?  I think that can happen only during a few rarely used
operations.  Do you think that for that case we should call
StrategySyncStartAndEnd() inside the loop?

 In freelist.c, it seems like a poor idea to have two spinlocks as
 consecutive structure members; they'll be in the same cache line,
 leading to false sharing.  If we merge them into a single spinlock,
 does that hurt performance?

I have kept them separate so that backends searching for a buffer
in the freelist don't contend with bgreclaimer (while it does the clock
sweep) or with the clock sweep being done by other backends.  I think it
will be a bit tricky to devise a test where this can hurt; however, it
doesn't seem too bad to have two separate locks in this case.

 If we put them further apart, e.g. by
 moving the freelist_lck to the start of the structure, followed by the
 latches, and leaving victimbuf_lck where it is, does that help
 performance?

I can investigate.

 +/*
 + * If the buffer is pinned or has a nonzero usage_count,
 we cannot use
 + * it; discard it and retry.  (This can only happen if
VACUUM put a
 + * valid buffer in the freelist and then someone else
 used it before
 + * we got to it.  It's probably impossible altogether as
 of 8.3, but
 + * we'd better check anyway.)
 + */
 +

 This comment is clearly obsolete.

Okay, but this patch hasn't changed anything w.r.t. the above comment,
so I haven't changed it.  Do you want me to remove the second part of the
comment, starting with (This can only happen...?


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Commitfest status

2014-09-03 Thread Pavel Stehule
Hi


2014-09-03 13:18 GMT+02:00 Heikki Linnakangas hlinnakan...@vmware.com:

 We now have 32 patches in Needs Review state, and 7 of those don't have
 a reviewer assigned. They are:

 1. Grouping Sets


I plan to review Grouping Sets, but I am afraid I cannot do it in the
next two weeks.

Regards

Pavel


 2. hash join - dynamic bucket count
 3. Enable WAL archiving even in standby
 4. Selectivity estimation for inet operators
 5. Better syntax for REINDEX
 6. pgcrypto: support PGP signatures
 7. pgcrypto: PGP armour headers

 Out of these, the first 4 have generated a fair amount of discussion on
 the list, but no-one has dared to put down their name as a reviewer. What
 is the real status of these patches, are the authors really waiting for a
 review at this stage? Authors: please speak up and update the status to
 Returned with Feedback or Waiting on Author, if you know how to
 proceed. Others: If you have been involved in the discussions, please sign
 up as a reviewer and make a decision on how to move forward with the patch.

 I think the latter 3 patches are missing a reviewer because no-one is
 interested in them. There was some discussion on the REINDEX syntax, and
 whether we want the patch at all. The pgcrypto patches have received zero
 comments.

 If you think that a feature is worthwhile, please sign up as a reviewer.
 If these patches don't have a reviewer assigned by the end of the week, I'm
 going to mark them as Rejected on the grounds that no-one cares about them.

 - Heikki






Re: [HACKERS] psql \watch versus \timing

2014-09-03 Thread Michael Paquier
On Mon, Sep 1, 2014 at 11:56 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Michael Paquier michael.paqu...@gmail.com writes:
 I just tested the patch and this feature works as expected if timing
 is on and it displays the individual run time of each query kicked by
 \watch. Note that --echo-hidden does not display the query run during
 each loop and that this is contrary to the behavior in HEAD so it
 breaks backward compatibility, but are there really people relying on
 the existing behavior?

 ISTM that's an anti-feature anyway, and changing that behavior is a
 good thing.
OK, then as all the comments are basically addressed, here is an
updated patch correcting the comment problems mentioned by Heikki.
This is ready for a committer.
Regards,
-- 
Michael
*** a/src/bin/psql/command.c
--- b/src/bin/psql/command.c
***
*** 2687,2693  do_watch(PQExpBuffer query_buf, long sleep)
  
  	for (;;)
  	{
! 		PGresult   *res;
  		time_t		timer;
  		long		i;
  
--- 2687,2693 
  
  	for (;;)
  	{
! 		int	res;
  		time_t		timer;
  		long		i;
  
***
*** 2700,2764  do_watch(PQExpBuffer query_buf, long sleep)
 sleep, asctime(localtime(&timer)));
  		myopt.title = title;
  
! 		/*
! 		 * Run the query.  We use PSQLexec, which is kind of cheating, but
! 		 * SendQuery doesn't let us suppress autocommit behavior.
! 		 */
! 		res = PSQLexec(query_buf->data, false);
! 
! 		/* PSQLexec handles failure results and returns NULL */
! 		if (res == NULL)
! 			break;
  
  		/*
! 		 * If SIGINT is sent while the query is processing, PSQLexec will
! 		 * consume the interrupt.  The user's intention, though, is to cancel
! 		 * the entire watch process, so detect a sent cancellation request and
! 		 * exit in this case.
  		 */
! 		if (cancel_pressed)
! 		{
! 			PQclear(res);
  			break;
! 		}
! 
! 		switch (PQresultStatus(res))
! 		{
! 			case PGRES_TUPLES_OK:
! printQuery(res, &myopt, pset.queryFout, pset.logfile);
! break;
! 
! 			case PGRES_COMMAND_OK:
! fprintf(pset.queryFout, "%s\n%s\n\n", title, PQcmdStatus(res));
! break;
! 
! 			case PGRES_EMPTY_QUERY:
! psql_error(_("\\watch cannot be used with an empty query\n"));
! PQclear(res);
! return false;
! 
! 			case PGRES_COPY_OUT:
! 			case PGRES_COPY_IN:
! 			case PGRES_COPY_BOTH:
! psql_error(_("\\watch cannot be used with COPY\n"));
! PQclear(res);
! return false;
! 
! 			default:
! /* other cases should have been handled by PSQLexec */
! psql_error(_("unexpected result status for \\watch\n"));
! PQclear(res);
! return false;
! 		}
! 
! 		PQclear(res);
! 
! 		fflush(pset.queryFout);
  
  		/*
  		 * Set up cancellation of 'watch' via SIGINT.  We redo this each time
! 		 * through the loop since it's conceivable something inside PSQLexec
! 		 * could change sigint_interrupt_jmp.
  		 */
  		if (sigsetjmp(sigint_interrupt_jmp, 1) != 0)
  			break;
--- 2700,2721 
 sleep, asctime(localtime(&timer)));
  		myopt.title = title;
  
! 		/* Run the query and print out the results */
! 		res = PSQLexecWatch(query_buf->data, &myopt);
  
  		/*
! 		 * PSQLexecWatch handles the case where we can no longer
! 		 * repeat the query, and returns 0 or -1.
  		 */
! 		if (res == 0)
  			break;
! 		if (res == -1)
! 			return false;
  
  		/*
  		 * Set up cancellation of 'watch' via SIGINT.  We redo this each time
! 		 * through the loop since it's conceivable something inside
! 		 * PSQLexecWatch could change sigint_interrupt_jmp.
  		 */
  		if (sigsetjmp(sigint_interrupt_jmp, 1) != 0)
  			break;
*** a/src/bin/psql/common.c
--- b/src/bin/psql/common.c
***
*** 497,502  PSQLexec(const char *query, bool start_xact)
--- 497,598 
  }
  
  
+ /*
+  * PSQLexecWatch
+  *
+  * This function is used for \watch command to send the query to
+  * the server and print out the results.
+  *
+  * Returns 1 if the query executed successfully, 0 if it cannot be repeated
+  * (e.g., because of an interrupt), and -1 on error.
+  */
+ int
+ PSQLexecWatch(const char *query, const printQueryOpt *opt)
+ {
+ 	PGresult   *res;
+ 	double	elapsed_msec = 0;
+ 	instr_time	before;
+ 	instr_time	after;
+ 
+ 	if (!pset.db)
+ 	{
+ 		psql_error("You are currently not connected to a database.\n");
+ 		return 0;
+ 	}
+ 
+ 	SetCancelConn();
+ 
+ 	if (pset.timing)
+ 		INSTR_TIME_SET_CURRENT(before);
+ 
+ 	res = PQexec(pset.db, query);
+ 
+ 	ResetCancelConn();
+ 
+ 	if (!AcceptResult(res))
+ 	{
+ 		PQclear(res);
+ 		return 0;
+ 	}
+ 
+ 	if (pset.timing)
+ 	{
+ 		INSTR_TIME_SET_CURRENT(after);
+ 		INSTR_TIME_SUBTRACT(after, before);
+ 		elapsed_msec = INSTR_TIME_GET_MILLISEC(after);
+ 	}
+ 
+ 	/*
+ 	 * If SIGINT is sent while the query is processing, the interrupt
+ 	 * will be consumed.  The user's intention, though, is to cancel
+ 	 * the entire watch process, so detect a sent cancellation request and
+ 	 * exit in this case.
+ 	 */
+ 	if (cancel_pressed)
+ 	{
+ 		PQclear(res);
+ 		return 0;
+ 	}
+ 
+ 	switch (PQresultStatus(res))
+ 	{

Re: [HACKERS] pgcrypto: PGP signatures

2014-09-03 Thread Joel Jacobson
On Wed, Aug 6, 2014 at 2:46 PM, Marko Tiikkaja ma...@joh.to wrote:
 Hi hackers,

 Attached is a patch to add support for PGP signatures in encrypted messages
 into pgcrypto.

I noticed Heikki wanted to check if there is any interest in the
patches in the current commitfest.

Yes, our company Trustly is very interested in the two PGP additions
to pgcrypto.

We currently use these patches in production in a separate database,
but if they would be part of standard postgres, we wouldn't need to
run the application using the functionality in a separate database
server, which would simplify things a lot.

Without these patches, there is no way to deal with PGP signatures.
Since signatures are a crucial component of OpenPGP, the existing
encryption/decryption features are useful, but not nearly as useful as
if you also had the capability to generate and verify PGP
signatures.

We use the PGP functionality in a system called BankAPI, which is open
source and available here: https://github.com/trustly/bankapi

Also, the documentation already acknowledges that the lack
of signing is a current limitation:
F.25.3.9. Limitations of PGP Code
No support for signing. That also means that it is not checked whether
the encryption subkey belongs to the master key.




Re: [HACKERS] pgcrypto: PGP signatures

2014-09-03 Thread Heikki Linnakangas

On 09/03/2014 02:51 PM, Joel Jacobson wrote:

On Wed, Aug 6, 2014 at 2:46 PM, Marko Tiikkaja ma...@joh.to wrote:

Hi hackers,

Attached is a patch to add support for PGP signatures in encrypted messages
into pgcrypto.


I noticed Heikki wanted to check if there is any interested for the
patches in the current commitfest.

Yes, our company Trustly are very interested in the two PGP additions
to pgcrypto.


Cool. Please sign up as a reviewer then, so that we can get these 
patches reviewed and committed.


- Heikki





Re: [HACKERS] [BUGS] BUG #10823: Better REINDEX syntax.

2014-09-03 Thread Vik Fearing
On 09/02/2014 10:17 PM, Marko Tiikkaja wrote:
 On 2014-08-29 01:00, Alvaro Herrera wrote:
 Vik Fearing wrote:

 Here are two patches for this.

 The first one, reindex_user_tables.v1.patch, implements the variant that
 only hits user tables, as suggested by you.

 The second one, reindex_no_dbname.v1.patch, allows the three
 database-wide variants to omit the database name (voted for by Daniel
 Migowski, Bruce, and myself; voted against by you).  This patch is to be
 applied on top of the first one.

 Not a fan.  Here's a revised version that provides REINDEX USER TABLES,
 which can only be used without a database name; other modes are not
 affected i.e. they continue to require a database name.
 
 Yeah, I think I like this better than allowing all of them without the
 database name.

Why?  It's just a noise word!

 I also renamed
 your proposed reindexdb's --usertables to --user-tables.
 
 I agree with this change.

Me, too.

 Oh, I just noticed that if you say reindexdb --all --user-tables, the
 latter is not honored.  Must fix before commit.
 
 Definitely.

Okay, I'll look at that.

 Is someone going to prepare an updated patch?  Vik?

Yes, I will update the patch.
-- 
Vik




Re: [HACKERS] Escaping from blocked send() reprised.

2014-09-03 Thread Heikki Linnakangas

On 09/03/2014 12:23 AM, Andres Freund wrote:

On 2014-09-02 17:21:03 -0400, Tom Lane wrote:

Heikki Linnakangas hlinnakan...@vmware.com writes:

I was going to suggest using WaitLatchOrSocket instead of sleeping in 1
second increment, but I see that WaitLatchOrSocket() doesn't currently
support waiting for a socket to become writeable, without also waiting
for it to become readable. I wonder how difficult it would be to lift
that restriction.


My recollection is that there was a reason for that, but I don't recall
details any more.


http://git.postgresql.org/pg/commitdiff/e42a21b9e6c9b9e6346a34b62628d48ff2fc6ddf

In my prototype I've changed the API so that errors set both
READABLE/WRITABLE. Seems to work.


Andres, would you mind posting the WIP patch you have? That could be a 
better foundation for this patch.


- Heikki





Re: [HACKERS] TODO item for protocol revision: Negotiate encryption in connection handshake

2014-09-03 Thread Magnus Hagander
On Wed, Sep 3, 2014 at 12:17 PM, Craig Ringer cr...@2ndquadrant.com wrote:
 Hi all

 Another thing I keep on wishing Pg's protocol had is an after-connection
 negotiation for transport encryption, like STARTTLS .

 Right now, the client has to guess if the server requires, permits, or
 rejects SSL, and decide whether to start with SSL or !SSL. If that
 fails, it has to try the other one.

 The way it's managed in pg_hba.conf means that users usually just get
 confusing errors like:

 FATAL: no pg_hba.conf entry for host 192.168.0.1, user postgres,
 database whatever, SSL off

 without the client app being given the opportunity to be told by the
 server Please upgrade to transport level security before proceeding.

 I like how IMAP does it, where the server announces its capabilities.

 Reasonable to aim for in a protocol v4?

Yeah, it definitely does I think. Should be in the form of some more
generic capabilities negotiation though, even if we only have SSL to
begin with.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] GSoC on WAL-logging hash indexes

2014-09-03 Thread Vik Fearing
On 08/20/2014 02:43 AM, Michael Paquier wrote:
 
 
 
 On Thu, Jun 19, 2014 at 6:40 PM, Vik Fearing vik.fear...@dalibo.com wrote:
 
 On 04/30/2014 11:41 PM, Tom Lane wrote:
  We really oughta fix the WAL situation, not just band-aid around it.
 
 After re-reading this thread, it is not clear that anyone is going to
 work on it so I'll just ask:
 
 Is anyone working on this?
 
 If not, I'd like to put it on my plate.
 
 Vik, did you get time to look at that finally?

Yes, I am (very slowly) working on this.  I've got a decent learning
curve for WAL replay to get over, and I figured this can't be urgent
considering how many years it's been like this, so I'm sort of taking my
time.
-- 
Vik




Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Joshua D. Drake


On 09/02/2014 03:50 PM, Jan Wieck wrote:


PL/pgSQL's syntax was modelled to look like PL/SQL, which is an Ada/COBOL
lookalike.

Instead of trying to mimic what it was, or some T-SQL thing ...
maybe it is time to come up with a true PostgreSQL-specific PL for a
change?

Just for the sake of being something new, and not a copy of some old
opossum, that's rotting like road kill on the side of the highway for a
decade already.



Well, I don't think PostgreSQL needs its own PL. I mean we already have 
several (what other database has pl/javascript or pl/python?)


Besides, the idea of this community trying to build its own programming 
language... oh lord ;)


JD




Jan




--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
If we send our children to Caesar for their education, we should
 not be surprised when they come back as Romans.




Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Joshua D. Drake


On 09/02/2014 04:01 PM, Álvaro Hernández Tortosa wrote:


 It's not copying. It's easing the path for others to migrate and
come to Postgres.

 I'm curious why you are more interested in MSSQL. My reasons for
being interested in Oracle are:

- It has more users (the biggest and, above all, the main reason: we could
attract more)
- Postgres is perceived as similar to Oracle (so migration is likely
to be easier)

 That's all I want: grow the Postgres user base by attracting Oracle users :)


I find that we have more opportunity to replace MSSQL than Oracle.
Obviously it depends on a lot of things, but my goal is the same as yours,
just with a different database.


JD




 Álvaro





--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
If we send our children to Caesar for their education, we should
 not be surprised when they come back as Romans.




Re: [HACKERS] Inverse of pg_get_serial_sequence?

2014-09-03 Thread Robert Haas
On Fri, Aug 29, 2014 at 1:26 PM, Andres Freund and...@2ndquadrant.com wrote:
 We have pg_get_serial_sequence() mapping (relation, column) to the
 sequence. What I'm missing right now is the inverse, i.e. given a
 sequence, tell me the owner.
 describe.c has a query for that, and it's not too hard to write, but it
 still seems 'unfriendly' not to provide it.

 Does anybody dislike adding a function for that?

I'll go out on a limb and say that it sounds like pointless catalog
bloat to me.  I am all in favor of adding things like this where the
SQL query is painful to write (e.g. things involving pg_depend) but if
it's a simple SELECT query then, eh, not really excited about it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Joel Jacobson
On Wed, Sep 3, 2014 at 3:17 PM, Joshua D. Drake j...@commandprompt.com wrote:
 Well, I don't think PostgreSQL needs its own PL. I mean we already have
 several (what other database has pl/javascript or pl/python?)

PostgreSQL already *has* its own PL; it's called PL/pgSQL.

 Besides, the idea of this community trying to build its own programming
 language... oh lord ;)

I would agree it's too much of a challenge to invent a brand new
programming language; that's unrealistic. That's why I'm opting to do as
much as possible in the existing language, and to think carefully about
which important but non-compatible changes we simply cannot make to
PL/pgSQL, as they by definition would break compatibility (which we all
agree is not acceptable), but that *would* be possible with PL/pgSQL 2.




Re: [HACKERS] Misleading error message in logical decoding for binary plugins

2014-09-03 Thread Robert Haas
On Fri, Aug 29, 2014 at 9:48 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2014-08-29 22:42:46 +0900, Michael Paquier wrote:
 Hi all,

 Using a plugin producing binary output, I came across this error:
 =# select data from pg_logical_slot_peek_changes('foo', NULL, NULL);
 ERROR:  0A000: output plugin cannot produce binary output
 LOCATION:  pg_logical_slot_get_changes_guts, logicalfuncs.c:404

 Shouldn't the error message be here something like binary output plugin
 cannot produce textual output? The plugin used in my case produces binary
 output, but what is requested from it with pg_logical_slot_peek_changes is
 textual output.

 I don't like the new message much. It's imo even more misleading than
 the old message. How about
 output plugin produces binary output but the sink only accepts textual data?

Maybe:

ERROR: pg_logical_slot_peek_changes cannot be used with a plugin that
produces only binary output
HINT: Use pg_logical_slot_peek_binary_changes instead.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [v9.5] Custom Plan API

2014-09-03 Thread Robert Haas
On Sun, Aug 31, 2014 at 12:54 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
 2014-08-29 13:33 GMT-04:00 Robert Haas robertmh...@gmail.com:
 On Wed, Aug 27, 2014 at 6:51 PM, Kouhei Kaigai kai...@ak.jp.nec.com wrote:
  I'd like to follow this direction, and start stripping the DDL support.

 ...please make it so.

 The attached patch eliminates DDL support.

 Instead of the new CREATE CUSTOM PLAN PROVIDER statement,
 it adds an internal function, register_custom_scan_provider,
 that takes a custom plan provider name and a callback function
 to add an alternative scan path (in the form of a CustomPath)
 while the query planner is finding the cheapest path to
 scan the target relation.
 Also, documentation stuff is revised according to the latest
 design.
 Any other stuff keeps the previous design.

 Comments:

 1. There seems to be no reason for custom plan nodes to have MultiExec
 support; I think this as an area where extensibility is extremely
 unlikely to work out.  The MultiExec mechanism is really only viable
 between closely-cooperating nodes, like Hash and HashJoin, or
 BitmapIndexScan, BitmapAnd, BitmapOr, and BitmapHeapScan; and arguably
 those things could have been written as a single, more complex node.
 Are we really going to want to support a custom plan that can
 substitute for a Hash or BitmapAnd node?  I really doubt that's very
 useful.

 This is intended to allow a particular custom-scan provider to exchange
 its internal data when multiple custom-scan nodes are stacked.
 So, it can be considered a facility to implement closely-cooperating nodes,
 all of them managed by the same custom-scan provider.
 An example is a GPU-accelerated version of hash join that takes an underlying
 custom-scan node returning a hash table in a GPU-preferable data
 structure, which should not be part of the row-by-row interface.
 I believe it is valuable for some use cases, even though I couldn't find
 a use case in the ctidscan example.

Color me skeptical.  Please remove that part for now, and we can
revisit it when, and if, a plausible use case emerges.

 3. Is it really a good idea to invoke custom scan providers for RTEs
 of every type?  It's pretty hard to imagine that a custom scan
 provider can do anything useful with, say, RTE_VALUES.  Maybe an
 accelerated scan of RTE_CTE or RTE_SUBQUERY is practical somehow, but
 even that feels like an awfully big stretch.  At least until clear use
 cases emerge, I'd be inclined to restrict this to RTE_RELATION scans
 where rte-relkind != RELKIND_FOREIGN_TABLE; that is, put the logic in
 set_plain_rel_pathlist() rather than set_rel_pathlist().

 I'd like to agree. Indeed, it's not easy to imagine a use case for
 custom logic on non-plain relations.

 (We might even want to consider whether the hook in
 set_plain_rel_pathlist() ought to be allowed to inject a non-custom
 plan; e.g. substitute a scan of relation B for a scan of relation A.
 For example, imagine that B contains all rows from A that satisfy some
 predicate. This could even be useful for foreign tables; e.g.
 substitute a scan of a local copy of a foreign table for a reference
 to that table.  But I put all of these ideas in parentheses because
 they're only good ideas to the extent that they don't sidetrack us too
 much.)

 Hmm... It seems to me we would need other infrastructure to take
 a substitute scan, because add_path() is called for a certain
 RelOptInfo that is associated with relation A.
 As long as the custom-scan provider internally redirects a request for
 a scan of A to the substitute scan of B (taking care of all the other
 stuff, like relation locks), I don't think we need to put any other
 hooks outside of set_plain_rel_pathlist().

OK, I see.  So this would have to be implemented as some new kind of
path anyway.  It might be worth allowing custom paths for scanning a
foreign table as well as a plain table, though - so any RTE_RELATION
but not other types of RTE.

 It came from a discussion long ago, during the patch
 review of postgres_fdw. I suggested using a static table of
 FdwRoutine, but someone argued that some compilers raise an
 error/warning when function pointers appear in a static initializer.
 I only use GCC, so I'm not sure whether this argument is
 right or not; in any case, postgres_fdw_handler() allocates the
 FdwRoutine using palloc() and then assigns each function pointer.

That's odd, because aset.c has used static initializers since forever,
and I'm sure someone would have complained by now if there were a
problem with that usage.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] psql \watch versus \timing

2014-09-03 Thread Greg Stark
On Wed, Sep 3, 2014 at 12:48 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
 OK, then as all the comments are basically addressed, here is an
 updated patch correcting the comment problems mentioned by Heikki.

I just tried this and found it doesn't cooperate well with AUTOCOMMIT
= 'off' and ON_ERROR_ROLLBACK = 'on'. Previously \watch would leave
the transaction in a normal state after C-c but now it leaves the
transaction in an aborted state. I assume it previously did a
savepoint around each execution and now it's not doing that at all.

-- 
greg




Re: [HACKERS] Enable WAL archiving even in standby

2014-09-03 Thread Alexey Klyukin
On Wed, Aug 13, 2014 at 12:42 PM, Fujii Masao masao.fu...@gmail.com wrote:
 Hi,

 I'd propose the attached WIP patch which allows us to enable WAL archiving
 even in standby.
...
 I think that this feature is useful for the case, e.g., where large database
 needs to be replicated between remote servers. Imagine the situation where
 the replicated database gets corrupted completely in the remote standby.
 How should we address this problematic situation and restart the standby?

 One approach is to take a fresh backup from the master and restore it onto
 the standby. But since the database is large and there is long distance
 between two servers, this approach might take a surprisingly long time.

 Another approach is to restore the backup which was taken from the standby
 before. But most of many WAL files which the backup needs might exist only
 in the master (because WAL archiving cannot be enabled in the standby) and
 they need to be transfered from the master to the standby via long-distance
 network. So I think that this approach also would take a fairly long time.
 To shorten that time, you may think that archive_command in the master can
 be set so that it transfers WAL files from the master to the standby's
 archival storage. I agree that this setting can accelerate the database 
 restore
 process. But this causes every WAL files to be transfered between remote
 servers twice (one is by streaming replication, another is by 
 archive_command),
 and which is a waste of network bandwidth.

Well, in theory one can also use pg_receivexlog to get the WAL files
from the master, and then run them through PITR on the slave without
the archive_cleanup command.

I'm not sure you can do the same if the source of the WAL files is a
cascading slave, but I see no reason why not.

However, I find the patch useful, since it allows accomplishing
things in a much more straightforward way.


 Back to the patch. If archive_mode is set to always, archive_command is
 always used to archive WAL files even during recovery. Do we need to separate
 the command into two for master and standby, respectively? We can add
 something like standby_archive_command parameter which is used to archive
 only WAL files walreceiver writes. The other WAL files are archived by
 archive_command. I'm not sure if it's really worth separating the command
 that way. Is there any use case?

I don't see a good use case for doing things only on standby, but I can imagine
that different archiving methods might be used depending on the role of
the archiver: on the master, one may, for instance, make an additional copy to
an NFS partition. Does it make sense to expose the server role ('is_master')
via an additional variable available to the recovery_command (i.e. %m)?


 The patch doesn't allow us to enable WAL archiving *only* during recovery.
 Should we support yet another archive_mode like standby which allows
 the archiver to be running only during recovery, but makes it end just after
 the server is promoted to master? I'm not sure if there is really use case for
 that.

I do not see much use for this as well.


 I've not included the update of document in the patch yet. If we agree to
 support this feature, I will do the remaining work.

I think it is useful, and I gave this patch a spin by, essentially, creating a
cascaded archive-only slave. I made a base backup from master, then
ran the standby from this base backup with archive_mode = 'always' and
archive_command copying files to the archive_location, then created another
base backup out of it (without including WAL files into the backup) and pointed
the recovery command of the final slave into the archive created by
the intermediate one.

Recovery worked, as well as the promotion of the intermediate slave to
the master; the final slave just caught up with the timeline changes
(with recovery_timeline set to 'latest') and went on with the recovery.

One thing I've noticed is that pg_basebackup copies the postgresql.conf from the
standby verbatim, including the archive_mode, which means that if one runs
the cascaded replica without changing the archive_mode, that replica will try
to archive the WAL; and if both the source and the replica are running on the
same machine (or attached to NFS using the same mount points), even the
destination for archiving will be the same. This should not be a big problem
if one follows the recommendation of not overwriting files that already exist
at the destination, but it would be nice to reset the archive_mode flag to off.

I do not know much about the WAL-related code, but one thing that I
found strange in the patch is the separate file xlogarchive.h, instead of
putting the stuff into xlog.h. It does not make much sense for a single
enum; are you planning to put more things there?

There was a single hunk when applying this against the latest master:
Hunk #4 succeeded at 4789 (offset -1 lines).

-- 
Regards,
Alexey Klyukin


-- 
Sent via pgsql-hackers mailing list 

Re: [HACKERS] psql \watch versus \timing

2014-09-03 Thread Fujii Masao
On Wed, Sep 3, 2014 at 10:56 PM, Greg Stark st...@mit.edu wrote:
 On Wed, Sep 3, 2014 at 12:48 PM, Michael Paquier
 michael.paqu...@gmail.com wrote:
 OK, then as all the comments are basically addressed, here is an
 updated patch correcting the comment problems mentioned by Heikki.

Thanks a lot!

 I just tried this and found it doesn't cooperate well with AUTOCOMMIT
 = 'off' and ON_ERROR_ROLLBACK = 'on'. Previously \watch would leave
 the transaction in a normal state after C-c but now it leaves the
 transaction in an aborted state. I assume it previously did a
 savepoint around each execution and now it's not doing that at all.

No. Previously \watch used PSQLexec and it doesn't use savepoint.
If you enter Ctrl-C while \watch is waiting for the query to end,
\watch would leave the transaction in an aborted state whether
the patch has been applied or not. OTOH, if you enter Ctrl-C while
\watch is sleeping, the transaction remains in normal state.

Regards,

-- 
Fujii Masao




Re: [HACKERS] RLS Design

2014-09-03 Thread Robert Haas
On Fri, Aug 29, 2014 at 8:16 PM, Brightwell, Adam
adam.brightw...@crunchydatasolutions.com wrote:
 Attached is a patch for RLS that was create against master at
 01363beae52700c7425cb2d2452177133dad3e93 and is ready for review.

 Overview:

 This patch provides the capability to create multiple named row level
 security policies for a table on a per command basis and assign them to be
 applied to specific roles/users.

 It contains the following changes:

 * Syntax:

 CREATE POLICY name ON table
 [ FOR { ALL | SELECT | INSERT | UPDATE | DELETE } ]
 [ TO { PUBLIC | role [, role ] } ]
 USING (condition)

 Creates a RLS policy named name on table.  Specifying a command is
 optional, but the default is ALL.  Specifying a role is optional, but the
 default is PUBLIC.  If PUBLIC and other roles are specified, ONLY PUBLIC is
 applied and a warning is raised.

 ALTER POLICY name ON table
 [ FOR { ALL | SELECT | INSERT | UPDATE | DELETE } ]
 [ TO { PUBLIC | role [, role ] } ]
 USING (condition)

 Alter a RLS policy named name on table.  Specifying a command is
 optional, if provided then the policy's command is changed otherwise it is
 left as-is.  Specifying a role is optional, if provided then the policy's
 role is changed otherwise it is left as-is.  The condition must always be
 provided and is therefore always replaced.

This is not a full review of this patch; as we're mid-CommitFest, I
assume this will get added to the next CommitFest.

In earlier discussions, it was proposed (and I thought the proposal
was viewed favorably) that when enabling row-level security for a
table (i.e. before doing CREATE POLICY), you'd have to first flip the
table to a default-deny mode:

ALTER TABLE name ENABLE ROW LEVEL SECURITY;

In this design, I'm not sure what happens when there are policies for
some but not all users or some but not all actions.  Does creating a
INSERT policy for one particular user cause a default-deny policy to
be turned on for all other users and all other operations?  That might
be OK, but at the very least it should be documented more clearly.
Does dropping the very last policy then instantaneously flip the table
back to default-allow?
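To make the proposed flow concrete, here is a usage sketch under the syntax
quoted above (table and role names are made up; per the earlier proposal, RLS
is enabled explicitly first, flipping the table to default-deny):

```sql
-- Proposed prerequisite: flip the table to default-deny.
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

-- Named, per-command, per-role policy: managers see only their own rows.
CREATE POLICY manager_rows ON accounts
    FOR SELECT
    TO managers
    USING (owner = current_user);

-- The USING condition is always replaced on ALTER POLICY;
-- FOR and TO are left as-is when omitted.
ALTER POLICY manager_rows ON accounts
    USING (owner = current_user
           OR pg_has_role(current_user, 'auditors', 'member'));
```

This is the syntax from the patch under review; it may well change before
anything is committed.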

As far as I can tell from the patch, and that's not too far since I've
only looked at briefly, there's a default-deny policy only if there is
at least 1 policy that applies to your user ID for this operation.  As
far as making it easy to create a watertight combination of policies,
that seems like a bad plan.

+ elog(ERROR, "Table \"%s\" already has a policy named \"%s\"."
+      " Use a different name for the policy or to modify this policy"
+      " use ALTER POLICY %s ON %s USING (qual)",
+ RelationGetRelationName(target_table), stmt->policy_name,
+ RelationGetRelationName(target_table), stmt->policy_name);
+

That needs to be an ereport, be capitalized properly, and the hint, if
it's to be included at all, needs to go into errhint().

+  errhint("all roles are considered members of public")));

Wrong message style for a hint.  Also, not sure that's actually
appropriate for a hint.

+ case EXPR_KIND_ROW_SECURITY:
+ return "ROW SECURITY";

This is quite simply bizarre.  That's not the SQL syntax of anything.

+ | ROW SECURITY row_security_option
+ {
+ VariableSetStmt *n = makeNode(VariableSetStmt);
+ n->kind = VAR_SET_VALUE;
+ n->name = "row_security";
+ n->args = list_make1(makeStringConst($3, @3));
+ $$ = n;
+ }

I object to this.  There's no reason that we should bloat the parser
to allow SET ROW SECURITY in lieu of SET row_security unless this is a
standard-mandated syntax with standard-mandated semantics, which I bet
it isn't.

  /*
+  * Although only "on" and "off" are documented, we accept all likely
+  * variants of "on" and "off".
+  */
+ static const struct config_enum_entry row_security_options[] = {
+ {"off", ROW_SECURITY_OFF, false},
+ {"on", ROW_SECURITY_ON, false},
+ {"true", ROW_SECURITY_ON, true},
+ {"false", ROW_SECURITY_OFF, true},
+ {"yes", ROW_SECURITY_ON, true},
+ {"no", ROW_SECURITY_OFF, true},
+ {"1", ROW_SECURITY_ON, true},
+ {"0", ROW_SECURITY_OFF, true},
+ {NULL, 0, false}
+ };

Just make it a bool and you get all this for free.

+ /*
+  * is_rls_enabled -
+  *   determines if row-security is enabled by checking the value of the system
+  *   configuration row_security.
+  */
+ bool
+ is_rls_enabled()
+ {
+ char const *rls_option;
+
+ rls_option = GetConfigOption("row_security", true, false);
+
+ return (strcmp(rls_option, "on") == 0);
+ }

Words fail me.

+ if (AuthenticatedUserIsSuperuser)
+ SetConfigOption("row_security", "off", PGC_INTERNAL, PGC_S_OVERRIDE);

Injecting this kind of magic into InitializeSessionUserId(),
SetSessionAuthorization(), and SetCurrentRoleId() seems 

Re: [HACKERS] PL/PgSQL: EXIT USING ROLLBACK

2014-09-03 Thread Robert Haas
On Mon, Sep 1, 2014 at 5:08 AM, Joel Jacobson j...@trustly.com wrote:
 On Sat, Jul 26, 2014 at 8:39 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Basically my point is that this just seems like inventing another way to
 do what one can already do with RAISE, and it doesn't have much redeeming
 social value to justify the cognitive load of inventing another construct.

 The main difference is with RAISE EXCEPTION 'OK'; you cannot know if
 it was *your* line of code which throw the 'OK'-exception or if it
 came from some other function which was called in the block of code.

The real problem here is that if you're using PL/pgsql exceptions for
control-flow reasons, you are taking a huge performance hit for that
notational convenience.  I do agree that the syntax of PL/pgsql is
clunky and maybe we should fix that anyway, but I honestly can't
imagine too many people actually wanting to do this once they realize
what it does to the run time of their procedure (and in some cases,
the XID-consumption rate of their database).
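To spell out where the cost comes from: a BEGIN ... EXCEPTION block starts a
subtransaction (a savepoint) every time it is entered, so an exception used as
a control-flow shortcut pays that price on each call. A made-up sketch of the
pattern in question:

```sql
CREATE FUNCTION exit_via_exception() RETURNS text AS $$
BEGIN
    -- Entering this inner block sets up a subtransaction on every call,
    -- whether or not the exception actually fires.
    BEGIN
        RAISE EXCEPTION 'OK';            -- exception abused for control flow
    EXCEPTION
        WHEN raise_exception THEN        -- SQLSTATE P0001, but also thrown
            RETURN 'reached via RAISE';  -- by any called function's RAISE
    END;
END;
$$ LANGUAGE plpgsql;
```

The WHEN branch also illustrates Joel's point: it cannot tell this function's
own RAISE 'OK' apart from one thrown by anything it calls.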

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] RLS Design

2014-09-03 Thread Stephen Frost
Hey Robert,

On my phone at the moment but wanted to reply.

I'm working through a few of these issues already actually (noticed as I've
been going over it with Adam), but certainly appreciate the additional
review. We've not posted another update quite yet but plan to shortly.

Thanks!

Stephen


Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Jan Wieck

On 09/03/2014 03:14 AM, Joel Jacobson wrote:

I'm in favour of Tom's idea. To merely make the plpgsql2 language a
way of explicitly saying you want a specific exact combination of
features/behaviour/settings which we can implement in plpgsql's
existing codebase.

Since it was about 100 posts since Tom's post, maybe it's worth
repeating for those who missed it:


What I would think about is

c) plpgsql and plpgsql2 are the same code base, with a small number
of places that act differently depending on the language version.

We could alternatively get the result by inventing a bunch of pragma
declarations, or some similar notation, that control the behavioral
changes one-at-a-time.  That might even be worth doing anyway, in
case somebody likes some of the ideas and others not so much.  But
I'd see the language version as a convenient shorthand for enabling a
specified collection of pretty-localized incompatible behavior changes.
If they're not pretty localized, there's going to be a barrier to
uptake, very comparable to the python3 analogy mentioned upthread.

   regards, tom lane


I fully agree on this approach. It's maintainable and it will be
useful from day 1.


One can take that approach to another, more generic level. Just as GUCs can 
be set on a per-ROLE basis with ALTER USER or ALTER ROLE, PL-specific GUCs 
could be set via ALTER LANGUAGE foo SET 


The possibility to CREATE LANGUAGE mybetterpl, pointing to the same PL 
handler function, exists already. And the same mechanism could be used 
by other languages, like PL/Python (for whatever such language might 
need such settings).


This way an application can define the language settings it needs by 
simply creating its own language, based on all the possible permutations 
of those PRAGMA/GUC settings.



Regards,
Jan

--
Jan Wieck
Senior Software Engineer
http://slony.info




Re: [HACKERS] Scaling shared buffer eviction

2014-09-03 Thread Robert Haas
On Wed, Sep 3, 2014 at 7:27 AM, Amit Kapila amit.kapil...@gmail.com wrote:
 +Background Reclaimer's Processing
 +---------------------------------

 I suggest titling this section Background Reclaim.

 I don't mind changing it, but currently used title is based on similar
 title Background Writer's Processing.  It is used in previous
 paragraph.  Is there a reason to title this differently?

Oh, I didn't see that.  Seems like weird phrasing to me, but I guess
it's probably better to keep it consistent.

 +The background reclaimer is designed to move buffers to freelist that are

 I suggest replacing the first three words of this sentence with
 bgreclaimer.

 Again what I have used is matching with BgWriter's explanation. I thought
 it would be better if wording used is similar.

OK.

 +while (tmp_num_to_free  0)

 I am not sure it's a good idea for this value to be fixed at loop
 start and then just decremented.

 It is based on the idea what bgwriter does for num_to_scan and
 calling it once has advantage that we need to take freelist_lck
 just once.

Right, we shouldn't call it every loop iteration.  However, consider
this scenario: there are no remaining buffers on the list and the high
watermark is 2000.  We add 2000 buffers to the list.  But by the time
we get done, other backends have already done 500 more allocations, so
now there are only 1500 buffers on the list.  If this should occur, we
should add an additional 500 buffers to the list before we consider
sleeping.  We want bgreclaimer to be able to run continuously if the
demand for buffers is high enough.
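The loop shape being suggested, with the deficit re-read each round rather
than fixed once at entry, can be sketched like this (a single-threaded
simulation: the freelist is just a counter and the concurrent backends are
modeled by a deterministic drain step):

```c
#include <assert.h>

#define HIGH_WATERMARK 2000

static int freelist_len;   /* stand-in for the shared freelist length */
static int pending_allocs; /* allocations backends perform while we refill */

/* Simulate backends taking buffers off the freelist concurrently. */
static void
drain_by_backends(void)
{
    if (pending_allocs > 0)
    {
        freelist_len -= pending_allocs;
        pending_allocs = 0;
    }
}

/*
 * Refill until the list is actually at the high watermark, re-reading the
 * length on each round (i.e. re-taking freelist_lck per round) instead of
 * decrementing a deficit computed once at entry.
 */
static void
reclaim_until_watermark(void)
{
    for (;;)
    {
        int deficit = HIGH_WATERMARK - freelist_len; /* fresh read */

        if (deficit <= 0)
            break;                /* only now is it safe to consider sleeping */
        freelist_len += deficit;  /* move 'deficit' buffers to the freelist */
        drain_by_backends();      /* meanwhile, backends kept allocating */
    }
}
```

With the scenario from the mail (watermark 2000, 500 concurrent allocations),
this version tops the list back up before sleeping instead of leaving it 500
short.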

 In freelist.c, it seems like a poor idea to have two spinlocks as
 consecutive structure members; they'll be in the same cache line,
 leading to false sharing.  If we merge them into a single spinlock,
 does that hurt performance?

 I have kept them separate so that backends searching for a buffer
 in freelist doesn't contend with bgreclaimer (while doing clock sweep)
 or clock sweep being done by other backends.  I think it will be a bit
 tricky to devise a test where this can hurt; however, it doesn't seem
 too bad to have two separate locks in this case.

It's not.  But if they are in the same cache line, they will behave
almost like one lock, because the CPU will lock the entire cache line
for each atomic op.  See Tom's comments upthread.
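If the two locks are kept, the usual fix for false sharing is to give each one
its own cache line. With C11 alignment that is just the following sketch (the
64-byte line size, the stand-in lock type, and the second lock's name are
assumptions, not taken from the patch):

```c
#include <stdalign.h>
#include <stddef.h>

#define CACHE_LINE_SIZE 64  /* assumed; the L1 line size on common x86 CPUs */

typedef int slock_t;        /* stand-in for the real spinlock type */

/*
 * Each lock starts its own cache line, so an atomic operation on one lock
 * no longer bounces the line holding the other between CPUs.
 */
typedef struct BufferStrategyControlSketch
{
    alignas(CACHE_LINE_SIZE) slock_t freelist_lck;  /* guards the freelist */
    alignas(CACHE_LINE_SIZE) slock_t victimbuf_lck; /* guards clock sweep */
} BufferStrategyControlSketch;
```

The cost is some padding between the members; the benefit is that the two
locks stop behaving "almost like one lock" under contention.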

 Okay, but this patch hasn't changed anything w.r.t above comment,
 so I haven't changed it. Do you want me to remove second part of
 comment starting with (This can only happen?

Right.  Clearly it can happen again once we have this patch: that's
the entire point of the patch.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Audit of logout

2014-09-03 Thread Fujii Masao
On Thu, Aug 28, 2014 at 11:23 PM, Amit Kapila amit.kapil...@gmail.com wrote:
 On Wed, Aug 27, 2014 at 5:19 PM, Fujii Masao masao.fu...@gmail.com wrote:

 On Sat, Aug 23, 2014 at 3:44 PM, Amit Kapila amit.kapil...@gmail.com
 wrote:
  On Tue, Aug 5, 2014 at 8:04 PM, Fujii Masao masao.fu...@gmail.com
  wrote:
  Changing PGC_SU_BACKEND parameter (log_connections) is
  visible even with a non-super user client due to above code.
  Shouldn't it be only visible for super-user logins?
 
  Simple steps to reproduce the problem:
  a. start Server (default configuration)
  b. connect with superuser
  c. change in log_connections to on in postgresql.conf
  d. perform select pg_reload_conf();
  e. connect with non-super-user
  f.  show log_connections;  --This step shows the value as on,
 --whereas I think it should have
  been
  off

 In this case, log_connections is changed in postgresql.conf and it's
 reloaded, so ISTM that it's natural that even non-superuser sees the
 changed value. No? Maybe I'm missing something.

 Yeah, you are right.

 With the latest patch, I am getting one regression failure on windows.
 Attached is the regression diff.

Thanks for testing the patch!

That regression failure looks strange, I'm not sure yet why that happened...
Does it happen only on Windows?

 Can we improve this line a bit?
 !  * BACKEND options are the same as SU_BACKEND ones, but they can
 BACKEND options can be set same as SU_BACKEND ones, ..

Yep.

Regards,

-- 
Fujii Masao




Re: [HACKERS] RLS Design

2014-09-03 Thread Stephen Frost
Robert,

Alright, I can't help it so I'll try and reply from my phone for a couple
of these. :)

On Wednesday, September 3, 2014, Robert Haas robertmh...@gmail.com wrote:

 On Fri, Aug 29, 2014 at 8:16 PM, Brightwell, Adam
 adam.brightw...@crunchydatasolutions.com wrote:
  Attached is a patch for RLS that was create against master at
  01363beae52700c7425cb2d2452177133dad3e93 and is ready for review.
 
  Overview:
 
  This patch provides the capability to create multiple named row level
  security policies for a table on a per command basis and assign them to
 be
  applied to specific roles/users.
 
  It contains the following changes:
 
  * Syntax:
 
  CREATE POLICY name ON table
  [ FOR { ALL | SELECT | INSERT | UPDATE | DELETE } ]
  [ TO { PUBLIC | role [, role ] } ]
  USING (condition)
 
  Creates a RLS policy named name on table.  Specifying a command is
  optional, but the default is ALL.  Specifying a role is optional, but the
  default is PUBLIC.  If PUBLIC and other roles are specified, ONLY PUBLIC
 is
  applied and a warning is raised.
 
  ALTER POLICY name ON table
  [ FOR { ALL | SELECT | INSERT | UPDATE | DELETE } ]
  [ TO { PUBLIC | role [, role ] } ]
  USING (condition)
 
  Alter a RLS policy named name on table.  Specifying a command is
  optional, if provided then the policy's command is changed otherwise it
 is
  left as-is.  Specifying a role is optional, if provided then the policy's
  role is changed otherwise it is left as-is.  The condition must always
 be
  provided and is therefore always replaced.

 This is not a full review of this patch; as we're mid-CommitFest, I
 assume this will get added to the next CommitFest.


As per usual, the expectation is that the patch is reviewed and updated
during the commitfest.  Given that the commitfest isn't even over according
to the calendar it seems a bit premature to talk about the next one, but
certainly if it's not up to a committable level before the end of this
commitfest then it'll be submitted for the next.


 In earlier discussions, it was proposed (and I thought the proposal
 was viewed favorably) that when enabling row-level security for a
 table (i.e. before doing CREATE POLICY), you'd have to first flip the
 table to a default-deny mode:


I do recall that (now that you remind me- clearly it had been lost during
the subsequent discussion, from my point of view at least) and agree that
it'd be useful. I don't believe it'll be difficult to address.


 ALTER TABLE name ENABLE ROW LEVEL SECURITY;


Sounds reasonable to me.


 + elog(ERROR, "Table \"%s\" already has a policy named \"%s\"."
 +      " Use a different name for the policy or to modify this policy"
 +      " use ALTER POLICY %s ON %s USING (qual)",
 + RelationGetRelationName(target_table), stmt->policy_name,
 + RelationGetRelationName(target_table), stmt->policy_name);
 +

That needs to be an ereport, be capitalized properly, and the hint, if
 it's to be included at all, needs to go into errhint().


Already addressed.


 +  errhint("all roles are considered members of public")));

 Wrong message style for a hint.  Also, not sure that's actually
 appropriate for a hint.


Fair enough. Will address.


 + case EXPR_KIND_ROW_SECURITY:
 + return "ROW SECURITY";

 This is quite simply bizarre.  That's not the SQL syntax of anything.


Will address.


 + | ROW SECURITY row_security_option
 + {
 + VariableSetStmt *n = makeNode(VariableSetStmt);
 + n->kind = VAR_SET_VALUE;
 + n->name = "row_security";
 + n->args = list_make1(makeStringConst($3, @3));
 + $$ = n;
 + }

 I object to this.  There's no reason that we should bloat the parser
 to allow SET ROW SECURITY in lieu of SET row_security unless this is a
 standard-mandated syntax with standard-mandated semantics, which I bet
 it isn't.


Agreed. Seemed like a nice idea but it's not necessary.


   /*
 +  * Although only "on" and "off" are documented, we accept all likely
 +  * variants of "on" and "off".
 +  */
 + static const struct config_enum_entry row_security_options[] = {
 + {"off", ROW_SECURITY_OFF, false},
 + {"on", ROW_SECURITY_ON, false},
 + {"true", ROW_SECURITY_ON, true},
 + {"false", ROW_SECURITY_OFF, true},
 + {"yes", ROW_SECURITY_ON, true},
 + {"no", ROW_SECURITY_OFF, true},
 + {"1", ROW_SECURITY_ON, true},
 + {"0", ROW_SECURITY_OFF, true},
 + {NULL, 0, false}
 + };

 Just make it a bool and you get all this for free.


Right- holdover from an earlier attempt to make it more complicated but now
we've simplified it and so it should just be a bool.



 + if (AuthenticatedUserIsSuperuser)
 + SetConfigOption("row_security", "off", PGC_INTERNAL,
 PGC_S_OVERRIDE);

 Injecting this kind of magic into InitializeSessionUserId(),
 SetSessionAuthorization(), 

Re: [HACKERS] pg_receivexlog and replication slots

2014-09-03 Thread Robert Haas
On Sun, Aug 31, 2014 at 9:45 AM, Magnus Hagander mag...@hagander.net wrote:
 Do we really want those Asserts? There is not a single Assert in
 bin/pg_basebackup today - as is the case for most things in bin/. We
 typically use regular if statements for things that can happen, and
 just ignore the others I think - since the callers are fairly simple
 to trace.

I have no opinion on whether we want these particular Assert() calls,
but I note that using Assert() in front-end code only became possible
in February of 2013, as a result of commit
e1d25de35a2b1f809e8f8d7b182ce0af004f3ec9.  So the lack of assertions
there may not be so much because people thought it was a bad idea as
that it didn't use to work.  Generally, I favor the use of Assert() in
front-end code in the same scenarios in which we use it in back-end
code: for checks that shouldn't burden production builds but are
useful during development.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Inverse of pg_get_serial_sequence?

2014-09-03 Thread Andres Freund
On 2014-09-03 09:31:50 -0400, Robert Haas wrote:
 On Fri, Aug 29, 2014 at 1:26 PM, Andres Freund and...@2ndquadrant.com wrote:
  We have pg_get_serial_sequence() mapping (relation, column) to the
  sequence. What I'm missing right now is the inverse. I.e. given a
  sequence tell me the owner.
  describe.c has a query for that, and it's not too hard to write, but it
  still seems 'unfriendly' not to provide it.
 
  Does anybody dislike adding a function for that?
 
 I'll go out on a limb and say that it sounds like pointless catalog
 bloat to me.  I am all in favor of adding things like this where the
 SQL query is painful to write (e.g. things involving pg_depend) but if
 it's a simple SELECT query then, eh, not really excited about it.

There's not really a simple select for it, is there? psql uses:

/* Get the column that owns this sequence */
printfPQExpBuffer(&buf,
                  "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
                  "\n   pg_catalog.quote_ident(relname) || '.' ||"
                  "\n   pg_catalog.quote_ident(attname)"
                  "\nFROM pg_catalog.pg_class c"
                  "\nINNER JOIN pg_catalog.pg_depend d ON c.oid=d.refobjid"
                  "\nINNER JOIN pg_catalog.pg_namespace n ON n.oid=c.relnamespace"
                  "\nINNER JOIN pg_catalog.pg_attribute a ON ("
                  "\n a.attrelid=c.oid AND"
                  "\n a.attnum=d.refobjsubid)"
                  "\nWHERE d.classid='pg_catalog.pg_class'::pg_catalog.regclass"
                  "\n AND d.refclassid='pg_catalog.pg_class'::pg_catalog.regclass"
                  "\n AND d.objid=%s"
                  "\n AND d.deptype='a'",
                  oid);
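Expanded, the query that snippet builds reads as follows (%s is where psql
interpolates the sequence's OID):

```sql
SELECT pg_catalog.quote_ident(nspname) || '.' ||
       pg_catalog.quote_ident(relname) || '.' ||
       pg_catalog.quote_ident(attname)
FROM pg_catalog.pg_class c
INNER JOIN pg_catalog.pg_depend d ON c.oid = d.refobjid
INNER JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
INNER JOIN pg_catalog.pg_attribute a ON (a.attrelid = c.oid AND
                                         a.attnum = d.refobjsubid)
WHERE d.classid = 'pg_catalog.pg_class'::pg_catalog.regclass
  AND d.refclassid = 'pg_catalog.pg_class'::pg_catalog.regclass
  AND d.objid = %s
  AND d.deptype = 'a';
```

The pg_depend join on refobjsubid is the part that is hard to write from
memory, which is the point being made here.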

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Misleading error message in logical decoding for binary plugins

2014-09-03 Thread Andres Freund
On 2014-09-03 09:36:32 -0400, Robert Haas wrote:
 On Fri, Aug 29, 2014 at 9:48 AM, Andres Freund and...@2ndquadrant.com wrote:
  On 2014-08-29 22:42:46 +0900, Michael Paquier wrote:
  Hi all,
 
  Using a plugin producing binary output, I came across this error:
  =# select data from pg_logical_slot_peek_changes('foo', NULL, NULL);
  ERROR:  0A000: output plugin cannot produce binary output
  LOCATION:  pg_logical_slot_get_changes_guts, logicalfuncs.c:404
 
  Shouldn't the error message be here something like binary output plugin
  cannot produce textual output? The plugin used in my case produces binary
  output, but what is requested from it with pg_logical_slot_peek_changes is
  textual output.
 
  I don't like the new message much. It's imo even more misleading than
  the old message. How about
  "output plugin produces binary output but the sink only accepts textual
  data"?
 
 Maybe:
 
 ERROR: pg_logical_slot_peek_changes cannot be used with a plugin that
 produces only binary output
 HINT: Use pg_logical_slot_peek_binary_changes instead.

That level has no knowledge of what it's used by, so I think that'd
require bigger changes than it's worth.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] LIMIT for UPDATE and DELETE

2014-09-03 Thread Robert Haas
On Mon, Sep 1, 2014 at 8:06 AM, Marko Tiikkaja ma...@joh.to wrote:
 Ideally?  Yeah, that would be great.  But I don't see anyone volunteering to
 do that work, and I think holding back a useful feature (ORDER BY with
 UPDATE/DELETE) in hopes of getting someone to volunteer to do it is insane.
 Now, you're free to argue that ORDER BY with UPDATE/DELETE isn't that
 useful, of course, but I'm sure there are lots of people who agree with me.

I still agree with Tom.  Arbitrary restrictions on which features can
be used in combination with each other piss off and alienate users.
We've put quite a bit of effort into making table inheritance not suck
(e.g. statistics on inheritance trees, Merge Append, etc.).  Making it
suck more because you don't think it's as important as your feature
is, in my opinion, not cool.

This is not to say that I don't like the feature.  I like it a lot.
But I like a product where you can be sure that if walking works and
chewing gum works you can also walk and chew gum at the same time even
more.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Better support of exported snapshots with pg_dump

2014-09-03 Thread Robert Haas
On Mon, Sep 1, 2014 at 5:30 AM, Andres Freund and...@2ndquadrant.com wrote:
 Hi,

 Currently pg_dump does not allow a user to specify an exported snapshot
 name that he would like to use for a dump using SET TRANSACTION SNAPSHOT
 (now pg_export_snapshot is only used for parallel pg_dump within it). I
 imagine that this would be handy to take a consistent dump of a given
 database after creating a logical replication slot on it. Thoughts?

 Yes, I always wanted that option.

I didn't find that option to be terribly important then, but I don't
see how we can possibly get by without it now, unless our goal is to
make logical decoding as hard to use as we possibly can.

Tom's got a good point about the order of locking vs. snapshot taking,
but I think the way to address that is by adding some capability to
temporarily lock out all DDL on non-temporary objects across the
entire system, rather than by trying to make pg_dump (or the walsender
creating the replication slot) lock every table.  Even if we could get
that to work, it still leaves the very-much-related problem that dumps
of databases containing many tables can easily exhaust the lock table.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Misleading error message in logical decoding for binary plugins

2014-09-03 Thread Robert Haas
On Wed, Sep 3, 2014 at 10:45 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2014-09-03 09:36:32 -0400, Robert Haas wrote:
 On Fri, Aug 29, 2014 at 9:48 AM, Andres Freund and...@2ndquadrant.com 
 wrote:
  On 2014-08-29 22:42:46 +0900, Michael Paquier wrote:
  Hi all,
 
  Using a plugin producing binary output, I came across this error:
  =# select data from pg_logical_slot_peek_changes('foo', NULL, NULL);
  ERROR:  0A000: output plugin cannot produce binary output
  LOCATION:  pg_logical_slot_get_changes_guts, logicalfuncs.c:404
 
  Shouldn't the error message be here something like binary output plugin
  cannot produce textual output? The plugin used in my case produces binary
  output, but what is requested from it with pg_logical_slot_peek_changes is
  textual output.
 
  I don't like the new message much. It's imo even more misleading than
  the old message. How about
   "output plugin produces binary output but the sink only accepts textual
   data"?

 Maybe:

 ERROR: pg_logical_slot_peek_changes cannot be used with a plugin that
 produces only binary output
 HINT: Use pg_logical_slot_peek_binary_changes instead.

 That level has no knowledge of what it's used by, so I think that'd
 require bigger changes than worth it.

ERROR: this logical decoding plugin can only produce binary output
ERROR: logical decoding plugin %s can only produce binary output

?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Inverse of pg_get_serial_sequence?

2014-09-03 Thread Robert Haas
On Wed, Sep 3, 2014 at 10:44 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2014-09-03 09:31:50 -0400, Robert Haas wrote:
 On Fri, Aug 29, 2014 at 1:26 PM, Andres Freund and...@2ndquadrant.com 
 wrote:
   We have pg_get_serial_sequence() mapping (relation, column) to the
  sequence. What I'm missing right now is the inverse. I.e. given a
  sequence tell me the owner.
  describe.c has a query for that, and it's not too hard to write, but it
  still seems 'unfriendly' not to provide it.
 
  Does anybody dislike adding a function for that?

 I'll go out on a limb and say that it sounds like pointless catalog
 bloat to me.  I am all in favor of adding things like this where the
 SQL query is painful to write (e.g. things involving pg_depend) but if
 it's a simple SELECT query then, eh, not really excited about it.

 There's not really a simple select for it, is there? psql uses:

 /* Get the column that owns this sequence */
 printfPQExpBuffer(&buf,
                   "SELECT pg_catalog.quote_ident(nspname) || '.' ||"
                   "\n   pg_catalog.quote_ident(relname) || '.' ||"
                   "\n   pg_catalog.quote_ident(attname)"
                   "\nFROM pg_catalog.pg_class c"
                   "\nINNER JOIN pg_catalog.pg_depend d ON c.oid=d.refobjid"
                   "\nINNER JOIN pg_catalog.pg_namespace n ON n.oid=c.relnamespace"
                   "\nINNER JOIN pg_catalog.pg_attribute a ON ("
                   "\n a.attrelid=c.oid AND"
                   "\n a.attnum=d.refobjsubid)"
                   "\nWHERE d.classid='pg_catalog.pg_class'::pg_catalog.regclass"
                   "\n AND d.refclassid='pg_catalog.pg_class'::pg_catalog.regclass"
                   "\n AND d.objid=%s"
                   "\n AND d.deptype='a'",
                   oid);

Oh, OK.  Yeah, that's kind of hairy.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Bruce Momjian
On Tue, Sep  2, 2014 at 08:46:36PM -0400, Christopher Browne wrote:
 3.  Is there anything to be learned from Tutorial D?  That is, Date & Darwen's
 would-be alternative to SQL of their Third Manifesto?

What would a set-oriented PL look like, such as APL?  I guess
Perl has arrays, so it is kind of set-oriented.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] LIMIT for UPDATE and DELETE

2014-09-03 Thread Marko Tiikkaja

On 9/3/14 4:46 PM, Robert Haas wrote:

Making it
suck more because you don't think it's as important as your feature
is, in my opinion, not cool.


I really can't see how that would make inheritance suck *more*.  You 
can't do UPDATE .. ORDER BY now, and you wouldn't be able to do it after 
the patch.  Yeah, sure, perhaps people using inheritance might feel left 
out, but surely that would just motivate them to work on improving it.



.marko




Re: [HACKERS] Misleading error message in logical decoding for binary plugins

2014-09-03 Thread Andres Freund
On 2014-09-03 10:59:17 -0400, Robert Haas wrote:
 On Wed, Sep 3, 2014 at 10:45 AM, Andres Freund and...@2ndquadrant.com wrote:
  On 2014-09-03 09:36:32 -0400, Robert Haas wrote:
  On Fri, Aug 29, 2014 at 9:48 AM, Andres Freund and...@2ndquadrant.com 
  wrote:
   On 2014-08-29 22:42:46 +0900, Michael Paquier wrote:
   Hi all,
  
   Using a plugin producing binary output, I came across this error:
   =# select data from pg_logical_slot_peek_changes('foo', NULL, NULL);
   ERROR:  0A000: output plugin cannot produce binary output
   LOCATION:  pg_logical_slot_get_changes_guts, logicalfuncs.c:404
  
   Shouldn't the error message be here something like binary output plugin
   cannot produce textual output? The plugin used in my case produces 
   binary
   output, but what is requested from it with pg_logical_slot_peek_changes 
   is
   textual output.
  
   I don't like the new message much. It's imo even more misleading than
   the old message. How about
    output plugin produces binary output but the sink only accepts textual 
   data?
 
  Maybe:
 
  ERROR: pg_logical_slot_peek_changes cannot be used with a plugin that
  produces only binary output
  HINT: Use pg_logical_slot_peek_binary_changes instead.
 
  That level has no knowledge of what it's used by, so I think that'd
  require bigger changes than worth it.
 
 ERROR: this logical decoding plugin can only produce binary output
 ERROR: logical decoding plugin %s can only produce binary output

ERROR: logical decoding plugin %s produces binary output, but sink only copes 
with textual data

Not sure about 'sink'. Maybe 'receiving side' or 'receiver'?

Not 100% sure if the name is available in that site, but if not it can
be left off without hurting much.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Bruce Momjian
On Wed, Sep  3, 2014 at 07:54:09AM +0200, Pavel Stehule wrote:
 I am not against to improve a PL/pgSQL. And I repeat, what can be done and can
 be done early:
 
 a) ASSERT clause -- with some other modification to allow better static 
 analyze
 of DML statements, and enforces checks in runtime.
 
 b) #option or PRAGMA clause with GUC with function scope that enforce check on
 processed rows after any DML statement
 
 c) maybe introduction automatic variable ROW_COUNT as shortcut for GET
 DIAGNOSTICS rc = ROW_COUNT

All these ideas are being captured somewhere, right?  Where?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Marko Tiikkaja

On 9/3/14 5:05 PM, Bruce Momjian wrote:

On Wed, Sep  3, 2014 at 07:54:09AM +0200, Pavel Stehule wrote:

I am not against to improve a PL/pgSQL. And I repeat, what can be done and can
be done early:

a) ASSERT clause -- with some other modification to allow better static analyze
of DML statements, and enforces checks in runtime.

b) #option or PRAGMA clause with GUC with function scope that enforce check on
processed rows after any DML statement

c) maybe introduction automatic variable ROW_COUNT as shortcut for GET
DIAGNOSTICS rc = ROW_COUNT


All these ideas are being captured somewhere, right?  Where?


I'm working on a wiki page with all these ideas.  Some of them break 
backwards compatibility somewhat blatantly, some of them could be added 
into PL/PgSQL if we're okay with reserving a keyword for the feature. 
All of them we think are necessary.



.marko




Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Pavel Stehule
2014-09-03 17:05 GMT+02:00 Bruce Momjian br...@momjian.us:

 On Wed, Sep  3, 2014 at 07:54:09AM +0200, Pavel Stehule wrote:
  I am not against to improve a PL/pgSQL. And I repeat, what can be done
 and can
  be done early:
 
  a) ASSERT clause -- with some other modification to allow better static
 analyze
  of DML statements, and enforces checks in runtime.
 
  b) #option or PRAGMA clause with GUC with function scope that enforce
 check on
  processed rows after any DML statement


these two yes


  
  c) maybe introduction automatic variable ROW_COUNT as shortcut for GET
  DIAGNOSTICS rc = ROW_COUNT


this is my fresh

some smartly designed asserts can be used for static analysis too.

I am able to analyze the plan of DML statements, and I can raise a warning if
the expected row count is not 1 or if there is no filter over a unique index

something like

UPDATE ... WHERE id = 1;
ASSERT(PROCESSED_ROW_COUNT = 1);

And I can recheck it in plpgsql_check, which can enforce a fast check at runtime

Pavel
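For reference, the check sketched above can already be written in today's PL/pgSQL with GET DIAGNOSTICS and an explicit RAISE; the proposed ASSERT/ROW_COUNT would essentially be shorthand for something like the following sketch (table `t` and the function name are made up for illustration):

```sql
-- Today's spelling of the proposed check: verify a DML statement
-- touched exactly one row, raising an error otherwise.
CREATE OR REPLACE FUNCTION set_name(p_id int, p_name text)
RETURNS void AS $$
DECLARE
    rc bigint;
BEGIN
    UPDATE t SET name = p_name WHERE id = p_id;
    GET DIAGNOSTICS rc = ROW_COUNT;
    IF rc <> 1 THEN
        RAISE EXCEPTION 'expected 1 row, got %', rc;
    END IF;
END
$$ LANGUAGE plpgsql;
```

The proposal would collapse the DECLARE/GET DIAGNOSTICS/IF boilerplate into a single ASSERT after the UPDATE.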





 All these ideas are being captured somewhere, right?  Where?

 --
   Bruce Momjian  br...@momjian.ushttp://momjian.us
   EnterpriseDB http://enterprisedb.com

   + Everyone has their own god. +



Re: [HACKERS] RLS Design

2014-09-03 Thread Robert Haas
On Wed, Sep 3, 2014 at 10:40 AM, Stephen Frost sfr...@snowman.net wrote:
 This is not a full review of this patch; as we're mid-CommitFest, I
 assume this will get added to the next CommitFest.

 As per usual, the expectation is that the patch is reviewed and updated
 during the commitfest.  Given that the commitfest isn't even over according
 to the calendar it seems a bit premature to talk about the next one, but
 certainly if it's not up to a commitable level before the end of this
 commitfest then it'll be submitted for the next.

The first version of this patch that was described as ready for
review was submitted on August 29th.  The previous revision was
submitted on August 18th.  Both of those dates are after the
CommitFest deadline of August 15th.  So from where I sit this is not
timely submitted for this CommitFest.  The last version before August
was submitted in April (there's a link to a version supposedly
submitted in June in the CommitFest application, but it doesn't point
to an email with a patch attached).  I don't want to (and don't feel I
should have to) decide between dropping everything to review an
untimely-submitted patch and having it get committed with no review
from anyone who wasn't involved in writing it.

 + if (AuthenticatedUserIsSuperuser)
  + SetConfigOption("row_security", "off", PGC_INTERNAL,
  PGC_S_OVERRIDE);

 Injecting this kind of magic into InitializeSessionUserId(),
 SetSessionAuthorization(), and SetCurrentRoleId() seems 100%
 unacceptable to me.

 I was struggling with the right way to address this and welcome suggestions.
 The primary issue is that I really want to support a superuser turning it
 on, so we can't simply have it disabled for all superusers all the time. The
 requirement that it not be enabled by default for superusers makes sense,
 but how far does that extend and how do we address upgrades?  In particular,
 can we simply set row_security=off as a custom GUC setting when superusers
 are created or roles altered to be made superusers?  Would we do that in
 pg_upgrade?

I think you need to have the GUC have one default value, not one
default for superusers and another default for everybody else.  I
previously proposed making the GUC on/off/force, with "on" meaning
"apply row-level security unless we have permission to bypass it,
either because we are the table owner or the superuser", "off" meaning
"error out if we would be forced to apply row-level security", and
"force" meaning "always apply row-level security even if we have
permission to bypass it".  I still think that's a good proposal.
There may be other reasonable alternatives as well, but making changes
to one GUC magically change other GUCs under the hood isn't one of
them.
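The proposed three-valued GUC would behave roughly like this sketch (hypothetical semantics of a design under discussion, not an existing PostgreSQL feature at the time of this thread):

```sql
-- Hypothetical sketch of the proposed row_security GUC semantics.

-- "on": RLS policies apply unless the user may bypass them
-- (table owner or superuser).
SET row_security = on;
SELECT * FROM accounts;   -- policies applied for ordinary users

-- "off": error out rather than silently enforcing RLS.
SET row_security = off;
SELECT * FROM accounts;   -- ERROR if RLS would have been enforced

-- "force": apply policies even for the table owner or a superuser.
SET row_security = force;
SELECT * FROM accounts;   -- policies applied unconditionally
```

The point of the design is that the GUC has one meaning for everyone; who you are only changes which rows (or errors) you get, not which GUC value is in effect.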

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Misleading error message in logical decoding for binary plugins

2014-09-03 Thread Robert Haas
On Wed, Sep 3, 2014 at 11:06 AM, Andres Freund and...@2ndquadrant.com wrote:
  Maybe:
 
  ERROR: pg_logical_slot_peek_changes cannot be used with a plugin that
  produces only binary output
  HINT: Use pg_logical_slot_peek_binary_changes instead.
 
  That level has no knowledge of what it's used by, so I think that'd
  require bigger changes than worth it.

 ERROR: this logical decoding plugin can only produce binary output
 ERROR: logical decoding plugin %s can only produce binary output

 ERROR: logical decoding plugin %s produces binary output, but sink only 
 copes with textual data

 Not sure about 'sink'. Maybe 'receiving side' or 'receiver'?

 Not 100% sure if the name is available in that site, but if not it can
 be left of without hurting much.

I was trying to avoid mentioning the word "sink" because we don't
actually have a real term for that.  From the user's perspective, it's
not going to be obvious that the function they invoked is the sink or
receiver; to them, it's just an interface - if anything, it's a
*sender* of the changes to them.

In case I lose that argument, please at least write "allows" instead
of "copes with"; the latter I think is too informal for an error
message.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] LIMIT for UPDATE and DELETE

2014-09-03 Thread Robert Haas
On Wed, Sep 3, 2014 at 11:02 AM, Marko Tiikkaja ma...@joh.to wrote:
 On 9/3/14 4:46 PM, Robert Haas wrote:
 Making it
 suck more because you don't think it's as important as your feature
 is, in my opinion, not cool.

 I really can't see how that would make inheritance suck *more*.  You can't
 do UPDATE .. ORDER BY now, and you wouldn't be able to do it after the
 patch.  Yeah, sure, perhaps people using inheritance might feel left out,
 but surely that would just motivate them to work on improving it.

I think it's entirely reasonable for us to require that all new SQL
features should be required to work with or without inheritance.  If
we took the opposition position, and said that the only things that
need to work with inheritance are the ones that existed at the time
inheritance was introduced, then we'd not need to worry about it not
only for this feature but for row-level security and SKIP LOCKED and
GROUPING SETS and, going back a bit further, materialized views and
security-barrier views and LATERAL and CTEs and on and on.  Perhaps
not all of those require any special handling for inheritance
hierarchies, but some of them surely did, and if even 10% of the SQL
features that we have added since table inheritance were allowed to
opt out of supporting it, we'd have a broken and unusable feature
today.

Now some people might argue that we have that anyway, but the fact is
that a lot of people are using inheritance today, even with all its
flaws, and they wouldn't be if there were a long laundry list of
limitations that didn't apply to regular tables.  We should be looking
to lift the limitations that currently exist, not add more.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Misleading error message in logical decoding for binary plugins

2014-09-03 Thread Andres Freund
On 2014-09-03 11:23:21 -0400, Robert Haas wrote:
 On Wed, Sep 3, 2014 at 11:06 AM, Andres Freund and...@2ndquadrant.com wrote:
   Maybe:
  
   ERROR: pg_logical_slot_peek_changes cannot be used with a plugin that
   produces only binary output
   HINT: Use pg_logical_slot_peek_binary_changes instead.
  
   That level has no knowledge of what it's used by, so I think that'd
   require bigger changes than worth it.
 
  ERROR: this logical decoding plugin can only produce binary output
  ERROR: logical decoding plugin %s can only produce binary output
 
  ERROR: logical decoding plugin %s produces binary output, but sink only 
  copes with textual data
 
  Not sure about 'sink'. Maybe 'receiving side' or 'receiver'?
 
  Not 100% sure if the name is available in that site, but if not it can
  be left of without hurting much.
 
 I was trying to avoid mentioning the word sink because we don't
 actually have a real term for that.

I understand the hesitation. I don't like it either, but I don't think
it gets clearer by leaving it off entirely.

  From the user's perspective, it's
 not going to be obvious that the function they invoked is the sink or
 receiver; to them, it's just an interface - if anything, it's a
 *sender* of the changes to them.

Is 'logical output method' perhaps better? It'd coincide with the terms
in the code and docs too.

 In case I lose that argument, please at least write allows instead
 of copes with; the latter I think is too informal for an error
 message.

Ok, sure.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Re: [BUGS] Re: BUG #9555: pg_dump for tables with inheritance recreates the table with the wrong order of columns

2014-09-03 Thread Bruce Momjian
On Mon, Sep  1, 2014 at 04:40:11PM -0400, Bruce Momjian wrote:
 On Mon, Sep  1, 2014 at 04:06:58PM -0400, Tom Lane wrote:
  Bruce Momjian br...@momjian.us writes:
 NOTICE:  moving and merging column c with inherited definition
 DETAIL:  user-specified column moved to the location of the inherited
 column
  
  Dept of nitpicking: errdetail messages are supposed to be complete
  sentences, properly capitalized and punctuated.  Please re-read the
  style guidelines if you have forgotten them.
 
 Oh, yeah;  updated patch attached.

OK, patch applied.  This will warn about reordering that happens via
SQL, and via pg_dump restore.  Do we want to go farther and preserve
column ordering by adding ALTER TABLE [constraint] ISLOCAL and have
pg_dump reuse binary-upgrade mode?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-03 Thread Robert Haas
On Wed, Aug 27, 2014 at 10:43 PM, Peter Geoghegan p...@heroku.com wrote:
 Example usage:

 INSERT INTO upsert(key, val) VALUES(1, 'insert') ON CONFLICT UPDATE
 SET val = 'update';

I think that syntax is a dramatic improvement over your previous
proposals.  The only part I don't entire like is this:

 INSERT INTO upsert(key, val) VALUES(1, 'insert') ON CONFLICT WITHIN
 upsert_pkey UPDATE SET val = 'update';

It seems to me that it would be better to specify a conflicting column
set rather than a conflicting index name.
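Under that suggestion, the statement above might instead name the conflicting columns and let the system infer the arbiter index (hypothetical syntax; neither spelling existed in PostgreSQL at the time of this thread):

```sql
-- Hypothetical: identify the arbiter by its column set instead of
-- by index name; the system would infer the unique index on (key).
INSERT INTO upsert(key, val) VALUES(1, 'insert')
ON CONFLICT (key) UPDATE SET val = 'update';
```

Naming columns rather than an index keeps the statement independent of physical schema details such as index names, which can differ between environments.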

I don't have much in the way of comments about the implementation, at
least not right at the moment, but...

 Essentially, the implementation has all stages of query processing
 During the execution of the parent ModifyTable, a special auxiliary
 subquery (the UPDATE ModifyTable) is considered as a special case.
 This is not a subplan of the ModifyTable node in the conventional
 sense, and so does not appear within EXPLAIN output.

...that sounds wonky.

 I already mentioned the inability to reference rejected rows in an
 UPDATE, as well as my unease about VACUUM interlocking, both of which
 are open item. Also, some of the restrictions that I already mentioned
 - on updatable views, inheritance, and foreign tables - are probably
 unnecessary. We should be able to come with reasonable behavior for at
 least some of those.

If you've noted my comments on the UPDATE/DELETE .. ORDER BY .. LIMIT
thread, you won't be surprised to hear that I think those restrictions
will need to be lifted - especially for inheritance, but probably the
others as well.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Immediate standby promotion

2014-09-03 Thread Robert Haas
On Mon, Sep 1, 2014 at 7:14 AM, Fujii Masao masao.fu...@gmail.com wrote:
 I think there is one downside as well for this proposal that
 apart from data loss, it can lead to uncommitted data occupying
 space in database which needs to be later cleaned by vacuum.
 This can happen with non-immediate promote as well, but the
 chances with immediate are more.  So the gain we got by doing
 immediate promotion can lead to slow down of operations in some
 cases.  It might be useful if we mention this in docs.

 Yep, the immediate promotion might be more likely to cause
 the recovery to end before replaying WAL data of VACUUM. But, OTOH,
 I think that the immediate promotion might be more likely to cause
 the recovery to end before replaying WAL data which will generate
 garbage data. So I'm not sure if it's worth adding that note to the doc.

-1 for documenting that.  This is mostly a general PostgreSQL
phenomenon and has little to do with immediate promotion specifically.
I think anything we write here is likely to be more confusing than
helpful.

 Agreed. So I'm thinking to change the code as follows.

 if (immediate_promote)
  ereport(LOG, (errmsg("received immediate promote request")));
  else
  ereport(LOG, (errmsg("received promote request")));

+1 for that version.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pgcrypto: PGP armor headers

2014-09-03 Thread Jeff Janes
On Fri, Aug 15, 2014 at 1:55 AM, Marko Tiikkaja ma...@joh.to wrote:

 Hi,


 On 8/8/14 3:18 PM, I wrote:

 Currently there's no way to generate or extract armor headers from the
 PGP armored format in pgcrypto.  I've written a patch to add the
 support.


 Latest version of the patch here, having fixed some small coding issues.


I've built this and tested the installation of the extension, the upgrade
from earlier versions, and the basic functions, with and without
--enable-cassert

I did occasionally get some failures with 'make check -C contrib/pgcrypto',
but I can't reproduce it.  I might have accidentally had some mixture of
binaries some with cassert and some without.

No other problems to report.

I didn't do a detailed code review, and I am not a security expert, so I
will leave it to the signed-up reviewer to change the status once he takes
a look.

One quibble in the documentation: "an error is returned".  Errors get
raised, not returned.

This patch will conflict with the pgp signature patch due to
the pgcrypto--1.2.sql and kin.

Cheers,

Jeff


Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Álvaro Hernández Tortosa


On 03/09/14 15:24, Joshua D. Drake wrote:


On 09/02/2014 04:01 PM, Álvaro Hernández Tortosa wrote:


 It's not copying. It's easying a path for others to migrate and
come to Postgres.

 I'm interested why you are more interested in MSSQL. My reasons for
being interested in Oracle are:

- It has more users (biggest and above all, the main reason: we could
attract more)
- Postgres is perceived as similar to Oracle (so migration is likely
to be easier)

 That's all I want. Grow postgres userbase, attracting Oracle 
users :)


I find that we have more opportunity to replace MSSQL than Oracle. 
Obviously it depends on a lot of things but my goal is as yours, just 
with a different database.



Honestly, I don't care whether MSSQL or Oracle. What I want is to 
attract more users, get Postgres out of where it is and appeal to even 
more users. In that regard, Oracle or MSSQL doesn't matter to me.


That's why, if you have some time, I'd love to hear why you 
think there is more opportunity to replace MSSQL. We may continue that 
privately as it is a little bit off-topic.


Thanks!

Álvaro





Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-03 Thread Peter Geoghegan
On Wed, Sep 3, 2014 at 9:51 AM, Robert Haas robertmh...@gmail.com wrote:
 Essentially, the implementation has all stages of query processing
 During the execution of the parent ModifyTable, a special auxiliary
 subquery (the UPDATE ModifyTable) is considered as a special case.
 This is not a subplan of the ModifyTable node in the conventional
 sense, and so does not appear within EXPLAIN output.

 ...that sounds wonky.

Which part? It certainly wouldn't be helpful if the (say) auxiliary
plan's sequential scan appeared within EXPLAIN output. That's just
an implementation detail. Note that the structure of the plan is
highly restricted, since it needs to be driven by the insert (or,
rather, the insert's conflicts, including conflicts not visible to the
command's MVCC snapshot). There won't be any interesting variation in
the plan. Although, that said, the implementation should probably
display any "Filter: ..." conditions implied by the special UPDATE
qual.

 If you've noted my comments on the UPDATE/DELETE .. ORDER BY .. LIMIT
 thread, you won't be surprised to hear that I think those restrictions
 will need to be lifted - especially for inheritance, but probably the
 others as well.

Sure.

-- 
Peter Geoghegan




Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Robert Haas
On Tue, Sep 2, 2014 at 5:47 PM, Álvaro Hernández Tortosa a...@nosys.es wrote:
 Yeah, we differ there. I think having an Oracle compatibility layer in
 PostgreSQL would be the-next-big-thing we could have. Oracle is has orders
 of magnitude bigger user base than postgres has; and having the ability to
 attract them would bring us many many more users which, in turn, would
 benefit us all very significantly.

 It would be my #1 priority to do in postgres (but yes, I know -guess-
 how hard and what resources that would require). But dreaming is free :)

There are a number of reasons why this isn't really practical.

First, Oracle compatibility isn't one feature.  The compatibility
layer that exists in EnterpriseDB's Advanced Server product consists
of many different changes to many different parts of the system.  A
few of those changes are simple syntax compatibility, where we do the
exact same thing PostgreSQL does but with different syntax, but a lot
of them are functional enhancements.  Even within SPL, there's a whole
bunch of different changes to a whole bunch of different areas, and
most of those are functional enhancements rather than just tweaking
syntax.  So, if you tried to implement a new, Oracle-compatible PL,
you'd find that you don't have one or a small number of changes to
make, but a long series of features ranging from small to very large.
You'd also find that adding a new PL, without changing any other parts
of the server, only bridges a small part of the compatibility gap.

Second, if you did manage to develop something which was significantly
more compatible with Oracle than PostgreSQL or PL/pgsql is today,
you'd probably find that the community wouldn't accept it.  It's
almost misleading to think of Oracle as a database; it's an enormous
software ecosystem with facilities for doing just about everything
under the sun, and many of those things more than one way.  For
example, in 9.4, EnterpriseDB will be releasing a UTL_HTTP package
that contains many of the same interfaces that are present in Oracle.
The interface decisions made by Oracle Corporation are reasonable in
view of their architecture, but I am quite sure that this community
would not want, for example, to return long text values as SETOF
VARCHAR(2000) rather than TEXT, just because Oracle does that.  And
rightly so: I wouldn't want PostgreSQL to follow any other product
that slavishly whether I worked at EnterpriseDB or not.  This kind of
thing crops up over and over again, and it only works to say that
PostgreSQL should choose the Oracle behavior in every case if you
believe that the primary mission of PostgreSQL should be to copy
Oracle, and I don't.  I also don't think it's a bad thing that
Advanced Server makes different decisions than PostgreSQL in some
cases.  A further problem is that, in this particular case, you'd
probably hear the argument from PostgreSQL hackers that they really
don't want to be burdened with maintaining an HTTP client in the core
server when the same thing could be done from an extension, and that's
a valid argument, too.  It is also valid for EnterpriseDB to make a
different decision for itself, based on business priorities.

Now, none of that is to say that we wouldn't do well to give a little
more thought to Oracle compatibility than we do.  We've either made or
narrowly avoided a number of decisions over the years which introduced
- or threatened to introduce - minor, pointless incompatibilities with
other database products, Oracle included.  That really doesn't benefit
anyone.  To take another example, I've been complaining about the fact
that PostgreSQL 8.3+ requires far more typecasts in stored procedures
than any other database I'm aware of for years, probably since before
I joined EnterpriseDB.  And I still think we're kidding ourselves to
think that we've got that right when nobody else is doing something
similar.  I don't think the community should reverse that decision to
benefit EnterpriseDB, or to be compatible with Oracle: I think the
community should reverse that decision because it's stupid, and the
precedent of other systems demonstrates that it is possible to do
better.  Oracle's handling of reserved words also seems to be
considerably less irritating than ours, and I'd propose that we
improve that in PostgreSQL too, if I knew how to do it.
Unfortunately, I suspect that requires jettisoning bison and rolling
our own parser generator, and it's hard to argue that would be a good
investment of effort for the benefit we'd get.

Anyway, to get back around to the topic of PL/SQL compatibility
specifically, if you care about that issue, pick one thing that exists
in PL/SQL but not in PL/pgsql and try to do something about it.  Maybe
it'll be something that EnterpiseDB has already done something about,
in which case, if your patch gets committed, Advanced Server will lose
a bit of distinction as compared with PostgreSQL.  Or maybe it'll be
something that EnterpriseDB hasn't done 

Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread David G Johnston
This is more of an SQL request than pl/pgsql, but has there been thought to
adding the ternary if/then operator?  Something like:

boolean_exp ? val_if_true : val_if_false

using ? by itself would be OK but not ideal - and the addition of the 
doesn't seem hateful...

Sorry if this is deemed off-topic but I just went to write

CASE WHEN boolean_exp THEN val_if_true ELSE val_if_false END

And the fact there is as much standard code as there is custom bothered me
just as is being discussed on this thread.

I'm going to go write a ifthen(bool, anyelement, anyelement) function
now

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/PL-pgSQL-2-tp5817121p5817608.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.




Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Pavel Stehule
2014-09-03 21:01 GMT+02:00 David G Johnston david.g.johns...@gmail.com:

 This is more of an SQL request the pl/pgsql but is/has there been thought
 to
 adding the ternary if/then opeator?  Something like:

 boolean_exp ? val_if_true : val_if_false

 using ? by itself would be OK but not ideal - and the addition of the 
 doesn't seem hateful...

 Sorry if this is deemed off-topic but I just went to write

 CASE WHEN boolean_exp THEN val_if_true ELSE val_if_false END

 And the fact there is as much standard code as there is custom bothered me
 just as is being discussed on this thread.

 I'm going to go write a ifthen(bool, anyelement, anyelement) function
 now


if you use SQL (an SQL macro), then it can be effective:

postgres=# CREATE OR REPLACE FUNCTION if(bool, anyelement, anyelement)
RETURNS anyelement AS $$SELECT CASE WHEN $1 THEN $2 ELSE $3 END $$ LANGUAGE
sql;
CREATE FUNCTION
postgres=# CREATE OR REPLACE FUNCTION fx(text) RETURNS text AS $$ BEGIN
RAISE NOTICE '%', $1; RETURN $1; END$$ LANGUAGE plpgsql;
CREATE FUNCTION
postgres=# SELECT if(false, fx('msg1'), fx('msg2'));
NOTICE:  msg2
  if
--
 msg2
(1 row)

postgres=# SELECT if(true, fx('msg1'), fx('msg2'));
NOTICE:  msg1
  if
--
 msg1
(1 row)

Only necessary parameters are evaluated

Pavel



 David J.




 --
 View this message in context:
 http://postgresql.1045698.n5.nabble.com/PL-pgSQL-2-tp5817121p5817608.html
 Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.





[HACKERS] xslt_process deprecated?

2014-09-03 Thread Mark

Hi,
I'd like to use the xslt_process function but it is in part of the 
documentation that is deprecated.  I don't want to use something that is 
going to disappear and if there is a better alternative I'd like to use 
it, however I cannot find an equivalent in the documentation.  I could 
well be looking in the wrong place, apologies if I've just been too 
blind to see it.


Can anyone help?

Thanks,
Mark.
--





Re: [HACKERS] pgcrypto: PGP signatures

2014-09-03 Thread Jeff Janes
On Fri, Aug 15, 2014 at 12:55 AM, Marko Tiikkaja ma...@joh.to wrote:

 Hi,


 On 8/7/14 12:15 PM, I wrote:

 Here's v2 of the patch.  I've changed the info-extracting code to not
 look for signatures beyond the data, which also meant that it had to
 parse one-pass signatures (which it didn't do before).  This matches the
 behaviour of the main decryption code.


 Here's the latest version where I've added the option to extract the
 creation time from the signatures.



There is a trivial sgml patch application conflict due to a grammar
correction in 05258761bf12a64befc9caec1947b254cdeb74c5

I wanted to start simple so I have a file which is signed, but not
encrypted.  I can't figure out what to do with it.  All of the functions
seem to require that it also be encrypted.  I tried providing an empty
password for  pgp_sym_signatures but it didn't work.

Is there a way to deal with this situation?

Thanks

Jeff


Re: [HACKERS] pgcrypto: PGP signatures

2014-09-03 Thread Marko Tiikkaja

On 2014-09-03 9:36 PM, Jeff Janes wrote:

I wanted to start simple so I have a file which is signed, but not
encrypted.  I can't figure out what to do with it.  All of the functions
seem to require that it also be encrypted.  I tried providing an empty
password for  pgp_sym_signatures but it didn't work.


Right.  This patch only adds support for signing data when encrypting it 
at the same time.  There's no support for detached signatures, nor is 
there support for anything other than signatures of encrypted data.  I 
should have been more clear on that in my initial email. :-(



.marko


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] On partitioning

2014-09-03 Thread Robert Haas
On Tue, Sep 2, 2014 at 4:18 PM, Martijn van Oosterhout
klep...@svana.org wrote:
 On Tue, Sep 02, 2014 at 09:44:17AM -0400, Bruce Momjian wrote:
 On Sun, Aug 31, 2014 at 10:45:29PM +0200, Martijn van Oosterhout wrote:
  There is one situation where you need to be more flexible, and that is
  if you ever want to support online repartitioning. To do that you have
  to distinguish between "I want to insert tuple X, which partition
  should it go into?" and "I want to know which partitions I need to
  look in for partition_key=Y".

 I am unclear why having information per-partition rather than on the
 parent table helps with online reparitioning.

 An example:

 We have three partitions, one for X<0 (A), one for 0<=X<5 (B) and one
 for X>=5 (C).  These are in three different tables.

 Now we give the command to merge the last two partitions B & C. You now
 have the choice to lock the table while you move all the tuples from C
 to B.

 Or you can make some adjustments such that new tuples that would have gone
 to C now go to B. And if there is a query for X=10 that you look in
 *both* B & C. Then the existing tuples can be moved from C to B at any
 time without blocking any other operations.

 Is this clearer? If you up front decide that which partition to query
 will be determined by a function that can only return one table, then
 the above becomes impossible.
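
For illustration only (names and ranges are invented), the split Martijn describes between insert routing and lookup routing during an online merge can be sketched like this:

```python
# Invented names/ranges: while partitions B and C are being merged, inserts
# have a single target but lookups may need to consult both tables.
INSERT_MAP = {"A": (None, 0), "B": (0, None)}   # new X >= 0 rows go to B
LOOKUP_MAP = {"A": (None, 0), "B": (0, None), "C": (5, None)}  # C still holds old X >= 5 rows

def in_range(x, lo, hi):
    # half-open range check; None means unbounded on that side
    return (lo is None or x >= lo) and (hi is None or x < hi)

def partition_for_insert(x):
    return next(n for n, (lo, hi) in INSERT_MAP.items() if in_range(x, lo, hi))

def partitions_for_lookup(x):
    return sorted(n for n, (lo, hi) in LOOKUP_MAP.items() if in_range(x, lo, hi))

print(partition_for_insert(10))     # B
print(partitions_for_lookup(10))    # ['B', 'C'] -- both, until the move finishes
print(partitions_for_lookup(-3))    # ['A']
```

Once all tuples have migrated, dropping C from LOOKUP_MAP completes the merge without ever blocking readers or writers.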

 Robert's idea of using normal table inheritance means we can access/move
 the data independently of the partitioning system.  My guess is that we
 will need to do repartitioning with some tool, rather than as part of
 normal database operation.

 Doing it as some tool seems like a hack to me. And since the idea was (I
 thought) that partitions would not be directly accessible from SQL, it
 has to be in the database itself.

I agree.  My main point about reusing the inheritance stuff we've done
over the years is that we shouldn't reinvent the wheel, but rather
build on what we've already got.

If the proposed design somehow involved treating all of the partitions
as belonging to the same TID space (which doesn't really seem
possible, but let's suspend disbelief) so that you could have a single
index that covers all the partitions, and the system would somehow
work out which TIDs live in which physical files, then it would be
reasonable to view the storage layer as an accident that higher levels
of the system don't need to know anything about.

But the actual proposal involves having multiple relations that have
to get planned just like real tables, and that means all the
optimizations that we've done on gathering statistics for inheritance
hierarchies, and MergeAppend, and every other bit of planner smarts
that we have will be applicable to this new method, too.  Let's not do
anything that forces us to reinvent all of those things.

Now, to be fair, one could certainly argue (and I would agree) that
the existing optimizations are insufficient.  In particular, the fact
that SELECT * FROM partitioned_table WHERE not_the_partitioning_key =
1 has to be planned separately for every partition is horrible, and
the fact that SELECT * FROM partitioned_table WHERE partitioning_key =
1 has to use an algorithm that is both O(n) in the partition count and
has a relatively high constant factor to exclude all of the
non-matching partitions also sucks.  But I think we're better off
trying to view those as further optimizations that we can apply to
certain special cases of partitioning - e.g. when the partitioning
syntax is used, constrain all the tables to have identical tuple
descriptors and matching indexes (and maybe constraints) so that when
you plan, you can do it once and then use the transposed plan for all
partitions.  Figuring out how to do run-time partition pruning would
be awesome, too.
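
As a sketch of why the constrained-syntax case can beat O(n) exclusion: if the range bounds are kept sorted, finding the one partition for partitioning_key = k is a single binary search (bounds and names below are invented, not PostgreSQL code):

```python
import bisect

# Invented layout: partition i covers [bounds[i], bounds[i+1]); the last
# partition is unbounded above. With sorted bounds, pruning for
# partitioning_key = k is one O(log n) search instead of evaluating every
# partition's constraint.
bounds = [0, 10, 20, 30, 40]
names = ["p0", "p1", "p2", "p3", "p4"]

def prune(k):
    i = bisect.bisect_right(bounds, k) - 1
    return names[i] if i >= 0 else None   # below all bounds: no partition

print(prune(25))    # p2
print(prune(-5))    # None
```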

But I don't see that any of this stuff gets easier by ignoring what's
already been built; then you're likely to spend all your time
reinventing the crap we've already done, and any cases where the new
system misses an optimization that's been achieved in the current
system become unpleasant dilemmas for our users.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] delta relations in AFTER triggers

2014-09-03 Thread Kevin Grittner
Marti Raudsepp ma...@juffo.org wrote:
 On Mon, Sep 1, 2014 at 9:09 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 The concept of lightweight relations that pop into existence when a
 certain kind of trigger definition is used somewhere in the function
 stack, without a CREATE TABLE, without being discoverable in
 information_schema etc., I find needs some more justification than
 I've seen in this thread. So far I've only heard that it's more
 convenient to implement in the current PostgreSQL code base.

It is required by the SQL standard.

 I'm sure more questions would pop up in practice, but as Heikki
 mentioned: Are such relations also visible to other functions called
 by the trigger function?
 * If yes, this introduces non-obvious dependencies between functions.
 What happens when one trigger with delta relations invokes another
 trigger, does the previous one get shadowed or overwritten?

This is indeed a killer objection.  As things stand in the patch, a
function called from a trigger function might have the table of the
same name (if it's not a schema-qualified reference) shadowed,
or it might not -- depending on whether it was already planned.
That's obviously not acceptable.  Passing the metadata from the
TriggerData structure to the PLpgSQL_execstate structure to the
PLpgSQL_expr structure and on to the ParseState structure, and
passing it down to child ParseState structures as needed, along
with similar passing of the Tuplestorestate pointer (and associated
name) to the execution state structures should fix that.

 What are the interactions with search_path?

Pretty much the same as the interactions of RTEs with search_path.
If the apparent relation name is not schema-qualified, parse
analysis first tries to resolve the name as an RTE, and if that
fails it tries to resolve it as a named tuplestore, and if that
fails it goes to the catalogs using search_path.
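
A toy model of that resolution order (purely illustrative; these are not the parser's actual data structures):

```python
# Lookup order for an unqualified name, as described above: query-local
# names first, then named tuplestores (the trigger transition data), then
# the catalogs via search_path.
def resolve(name, ctes, tuplestores, catalogs, search_path):
    if name in ctes:
        return ("cte", name)
    if name in tuplestores:
        return ("tuplestore", name)
    for schema in search_path:
        if (schema, name) in catalogs:
            return ("table", schema + "." + name)
    raise LookupError("relation %s does not exist" % name)

catalogs = {("public", "old_rows")}
# the transition table shadows the real table of the same name:
print(resolve("old_rows", set(), {"old_rows"}, catalogs, ["public"]))
# without the tuplestore, the catalog lookup wins:
print(resolve("old_rows", set(), set(), catalogs, ["public"]))
```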

 Can an unprivileged function override relation names when calling
 a SECURITY DEFINER function?

By changing things to the way Heikki and Tom suggest, any called
functions are not aware of or affected by a named tuplestore in the
caller's context.  (Changing *back*, actually -- I had this largely
done that way before; but it seemed like a rather fragile relay
race, passing the baton from one structure to another at odd
places.  I guess there's no helping that.  Or maybe once I post a
version changed back to that someone can show me something I missed
that makes it better.)

 You could argue that CREATE TEMP TABLE already has some of these
 problems, but it's very rare that people actually need to use that. If
delta relations get built on this new mechanism, avoiding it won't be an
 option any more.

Not true -- you don't have them unless you request them in CREATE
TRIGGER.  Nobody can be using this now, so a table owner must
*choose* to add the REFERENCING clause to the CREATE TRIGGER
statement for it to matter in the trigger function that is then
referenced.  Perhaps if we implement the ability to specify the
trigger code in the CREATE TRIGGER statement itself (rather than
requiring that a trigger function be created first) it will be
easier to look at and cope with.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Display of timestamp in pg_dump custom format

2014-09-03 Thread Bruce Momjian
On Thu, May  1, 2014 at 12:09:34PM -0400, Bruce Momjian wrote:
 On Thu, May  1, 2014 at 12:33:51PM +1200, Gavin Flower wrote:
  On 01/05/14 12:04, Bruce Momjian wrote:
  On Thu, May  1, 2014 at 08:27:49AM +1200, Gavin Flower wrote:
  On 01/05/14 02:51, Bruce Momjian wrote:
  The table of contents for pg_restore -l shows the time the archive was
  made as local time (it uses ctime()):
  
   ; Archive created at Wed Apr 30 10:03:28 2014
  
  Is this clear enough that it is local time?  Should we display this
  better, perhaps with a time zone designation?
  
  I think it would be good to include the time zone, as we are all
  very international these days - and in Australia, adjacent states
  have different dates for the summer time transition!
  
  Personally, I would like to see the date in the format 2014-04-30,
  but having the day of the week is good.
  
  Milliseconds might be useful, if you want to check logs files.
  OK, I will work on it for 9.5.  Thanks.
  
  So the it would then read something like:
  
  ; Archive created at Wed 2014-04-30 10:03:28.042 NZST
  
  (but with the correct appropriate time zone designation)?
 
 I think we would use a numeric offset.

I ended up going with the string-based timezone as I was worried that
the sign of the timezone could easily confuse people because the SQL
timezone offset sign is often different from the OS timezone.  The new
output is:

;
; Archive created at Wed Sep  3 16:12:21 2014 EST   <--
; dbname: test
; TOC Entries: 8
; Compression: -1
; Dump Version: 1.12-0
; Format: CUSTOM
; Integer: 4 bytes
; Offset: 8 bytes
; Dumped from database version: 9.5devel
; Dumped by pg_dump version: 9.5devel

Patch attached.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
new file mode 100644
index 0018720..4296c11
*** a/src/bin/pg_dump/pg_backup_archiver.c
--- b/src/bin/pg_dump/pg_backup_archiver.c
*** PrintTOCSummary(Archive *AHX, RestoreOpt
*** 969,975 ****
  	if (ropt->filename)
  		SetOutput(AH, ropt->filename, 0 /* no compression */ );
  
! 	ahprintf(AH, ";\n; Archive created at %s", ctime(&AH->createDate));
  	ahprintf(AH, "; dbname: %s\n; TOC Entries: %d\n; Compression: %d\n",
  			 AH->archdbname, AH->tocCount, AH->compression);
  
--- 969,975 ----
  	if (ropt->filename)
  		SetOutput(AH, ropt->filename, 0 /* no compression */ );
  
! 	ahprintf(AH, ";\n; Archive created at %.24s %s\n", ctime(&AH->createDate), *tzname);
  	ahprintf(AH, "; dbname: %s\n; TOC Entries: %d\n; Compression: %d\n",
  			 AH->archdbname, AH->tocCount, AH->compression);
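
For reference, the `%.24s` in the patched format string relies on ctime() returning a fixed 24-character timestamp plus a trailing newline, with the zone abbreviation appended from tzname. A small Python rendition of the same output line, with the zone pinned to UTC only so the result is reproducible (tzset() is POSIX-only):

```python
import os
import time

# ctime() yields e.g. "Wed Sep  3 16:12:21 2014": in C it carries a trailing
# '\n', which "%.24s" trims; Python's time.ctime() omits the newline already.
os.environ["TZ"] = "UTC"
time.tzset()
stamp = time.ctime(0)     # fixed epoch timestamp for a deterministic example
line = "; Archive created at %s %s" % (stamp, time.tzname[0])
print(line)               # ; Archive created at Thu Jan  1 00:00:00 1970 UTC
```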
  



Re: [HACKERS] delta relations in AFTER triggers

2014-09-03 Thread Kevin Grittner
Kevin Grittner kgri...@ymail.com wrote:
 Marti Raudsepp ma...@juffo.org wrote:

  What are the interactions with search_path?

 
 Pretty much the same as the interactions of RTEs with search_path.
 If the apparent relation name is not schema-qualified, parse
 analysis first tries to resolve the name as an RTE, and if that
 fails it tries to resolve it as a named tuplestore, and if that
 fails it goes to the catalogs using search_path.

Argh.  s/RTE/CTE/

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pgcrypto: PGP signatures

2014-09-03 Thread Jeff Janes
On Wed, Sep 3, 2014 at 12:43 PM, Marko Tiikkaja ma...@joh.to wrote:

 On 2014-09-03 9:36 PM, Jeff Janes wrote:

 I wanted to start simple so I have a file which is signed, but not
 encrypted.  I can't figure out what to do with it.  All of the functions
 seem to require that it also be encrypted.  I tried providing an empty
 password for  pgp_sym_signatures but it didn't work.


 Right.  This patch only adds support for signing data when encrypting it
 at the same time.  There's no support for detached signatures, nor is there
 support for anything other than signatures of encrypted data.  I should
 have been more clear on that in my initial email. :-(


OK, thanks.  How hard do you think it would be to allow NULL (or empty
string?) passwords to pgp_sym_signatures and pgp_sym_decrypt_verify to
accommodate this?

I think docs section F.25.3 needs to be re-titled and expanded to reflect
signatures as well as encryption, and an explanation added about signatures
only being processed on encrypted data if that restriction can't be removed.

I've switched to using a signed plus symmetrically encrypted message for
testing.

One surprising thing so far is that the 3rd argument to
pgp_sym_decrypt_verify must be dearmored. I thought it would detect and
dearmor automatically.

Once I wrap it in dearmor, I get the ERROR:  No signature matching the key
id present in the message

The public key block I am giving it is for the keyid that is reported
by pgp_sym_signatures, so I don't know what the problem might be.

When I get more time, I'll look at your examples from the regression tests
to see if I can figure it out.

Thanks,

Jeff


Re: [HACKERS] delta relations in AFTER triggers

2014-09-03 Thread Marti Raudsepp
On Wed, Sep 3, 2014 at 10:49 PM, Kevin Grittner kgri...@ymail.com wrote:
 Marti Raudsepp ma...@juffo.org wrote:
 The concept of lightweight relations that pop into existence when a
 certain kind of trigger definition is used somewhere in the function
 stack, without a CREATE TABLE, without being discoverable in
 information_schema etc., I find needs some more justification than
 I've seen in this thread. So far I've only heard that it's more
 convenient to implement in the current PostgreSQL code base.

 It is required by the SQL standard.

I had a cursory read of the SQL 20nn draft and I don't get this
impression. The only place I could find discussing the behavior of
transition tables is in Foundation 4.39.1, "General description of
triggers", which says:

"Special variables make the data in the transition table(s) available
to the triggered action. For a statement-level
trigger the variable is one whose value is a transition table."

There is no information about the scoping of such variables, so I
assume it refers to a regular locally scoped variable.

Did I miss something? Are you reading a different version of the spec?

Regards,
Marti




Re: [HACKERS] Patch for psql History Display on MacOSX

2014-09-03 Thread Stepan Rutz
Hello again, just my thoughts…

in psql, \s without a file is nice for me if it goes through less (e.g. a
pager), but for the most part it doesn't work at all on Mac OS X, so there
is nothing to lose for the Mac psql users.

regards,
stepan

On 03.09.2014 at 07:45, Noah Misch n...@leadboat.com wrote:

 On Tue, Sep 02, 2014 at 09:49:56AM -0400, Tom Lane wrote:
 Noah Misch n...@leadboat.com writes:
 I'm with you that far.  Given a patch that does not change \s /tmp/foo and
  that makes \s equivalent to \s /tmp/foo + \! cat /tmp/foo >/dev/tty,
 back-patch by all means.  No patch posted on this thread is so surgical, 
 hence
 my objection.  In particular, your latest patch revision changes \s 
 /tmp/foo
 to match the novel output the patch introduces for plain \s.  \s 
 /tmp/foo
 would no longer write data that libedit can reload as a history file.
 
 BTW, I failed last night to produce a coherent argument against that
 particular point, but consider this.  What are the main use-cases for
 \s to a file?  I argue that they are
 
  1. Create a human-readable record of what you did.
  2. Create the starting point for a SQL script file.
 
 I do not deny it's possible that somebody out there is also using \s for
 
  3. Create a file that I can overwrite ~/.psql_history with later.
 
 But if this is being done in the field at all, surely it is miles behind
 the applications listed above.
 
 I'm unprepared to speculate about the relative prevalence of those use cases.
 
 Now, if you are using libreadline, the output of \s has always been
 perfectly fit for purposes 1 and 2, because it's plain text of the
 history entries.  Moreover, it is *not* particularly fit for purpose 3,
 because intra-command newlines aren't encoded.  Yes, you could get
 libreadline to read the file, but multiline SQL commands will be seen
 as multiple history entries which is very far from convenient to use.
 (This adds to my suspicion that nobody is doing #3 in practice.)
 
 On the other hand, if you are using libedit, purpose 3 works great
 but the output is utterly unfit for either purpose 1 or 2.  Here
 are the first few lines of ~/.psql_history on one of my Macs:
 
 _HiStOrY_V2_
 explain\040verbose\^A\040\040select\0401\^Aunion\^A\040\040select\0402;
 \\q
 select\0404;
 explain\040verbose\^A\040\040select\0401\^Aunion\^A\040\040select\0402;
 select\04044;
 \\q
 \\s
 \\s\040foobar
 \\q
 
 What the proposed patch does is ensure that \s produces plain text
 regardless of which history library you are using.  I think arguing
 that we shouldn't do that is stretching the concept of backwards
 compatibility well past the breaking point.
 
 Given the negligible urgency to improve \s, the slightest compatibility hazard
 justifies punting this work from back-patch to master-only.
 
 





Re: [HACKERS] postgresql latency bgwriter not doing its job

2014-09-03 Thread Robert Haas
On Sat, Aug 30, 2014 at 2:04 PM, Andres Freund and...@2ndquadrant.com wrote:
 If the sort buffer is allocated when the checkpointer is started, not
 everytime we sort, as you've done in your version of the patch I think
 that risk is pretty manageable. If we really want to be sure nothing is
 happening at runtime, even if the checkpointer was restarted, we can put
 the sort array in shared memory.

I don't think that allocating the array at checkpointer start time
helps.  If it works, then you're strictly worse off than if you
allocate it at every checkpoint, because you're holding onto the
memory all the time instead of only when it's being used.  And if it
fails, what then?  Sure, you can have that copy of the checkpointer
process exit, but that does nothing good.  The postmaster will keep on
restarting it and it will keep on dying for lack of memory, and no
checkpoints will complete.  Oops.

So it seems to me that the possibly-sensible approaches are:

1. Allocate an array when we need to sort, and if the allocation
fails, have some kind of fallback strategy, like logging a WARNING and
writing the buffers out without sorting them.  If it succeeds, do the
checkpoint and then free the memory until we need it again.

2. Putting the array in shared_memory, so that once the server is
started, we can be sure the memory is allocated and the sort will
work.
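
Approach 1 amounts to the following shape (an illustrative sketch, not backend code; simulate_oom stands in for a failed allocation):

```python
import warnings

# Try to build the transient sort array; on allocation failure, warn and
# fall back to writing the dirty buffers in scan order.
def checkpoint_write_order(buffers, simulate_oom=False):
    try:
        if simulate_oom:
            raise MemoryError
        return sorted(buffers)        # needs a transient array copy
    except MemoryError:
        warnings.warn("checkpoint: could not sort dirty buffers, writing unsorted")
        return list(buffers)

print(checkpoint_write_order([8, 3, 5]))                     # [3, 5, 8]
print(checkpoint_write_order([8, 3, 5], simulate_oom=True))  # [8, 3, 5]
```

The point of the fallback branch is that a checkpoint still completes, just without the I/O-ordering benefit.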

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] COPY and heap_sync

2014-09-03 Thread Robert Haas
On Sat, Aug 30, 2014 at 2:26 AM, Jeff Janes jeff.ja...@gmail.com wrote:
 If you insert tuples with COPY into a table created or truncated in the same
 transaction, at the end of the COPY it calls heap_sync.

 But there are cases where people use COPY in a loop with a small amount of data
 in each statement.  Now it is calling heap_sync many times, and if NBuffers
 is large doing that gets very slow.

 Could the heap_sync be safely delayed until the end of the transaction,
 rather than the end of the COPY?

I don't think there's any data integrity problem with that, but if the
fsync() should fail it would be reported at commit time rather than in
response to the COPY.  That might be OK though.
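
The deferral Jeff proposes can be modeled as a per-transaction set of relations still needing a sync (illustrative names only, not the actual backend API):

```python
# Each COPY into a new/truncated table only records that the relation needs
# a sync; the sync itself happens once at commit, so a loop of many small
# COPYs pays the cost once per relation rather than once per statement.
class Xact:
    def __init__(self):
        self.pending = set()
        self.syncs = 0

    def copy_into(self, rel):
        self.pending.add(rel)      # defer, instead of heap_sync() per COPY

    def commit(self):
        for rel in sorted(self.pending):
            self.syncs += 1        # stand-in for one heap_sync(rel);
        self.pending.clear()       # an fsync failure would surface here, at commit

x = Xact()
for _ in range(1000):
    x.copy_into("t1")              # 1000 small COPYs in one transaction
x.commit()
print(x.syncs)                     # 1
```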

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-09-03 Thread Peter Geoghegan
On Wed, Sep 3, 2014 at 9:51 AM, Robert Haas robertmh...@gmail.com wrote:
 INSERT INTO upsert(key, val) VALUES(1, 'insert') ON CONFLICT WITHIN
 upsert_pkey UPDATE SET val = 'update';

 It seems to me that it would be better to specify a conflicting column
 set rather than a conflicting index name.

I'm open to pursuing that, provided there is a possible implementation
that's robust against things like BEFORE triggers that modify
constrained attributes. It must also work well with partial unique
indexes. So I imagine we'd have to determine a way of looking up the
unique index only after BEFORE triggers fire. Unless you're
comfortable with punting on some of these cases by throwing an error,
all of this is actually surprisingly ticklish. You've already
expressed concerns about the feature not playing nice with existing,
peripheral features though.

-- 
Peter Geoghegan




Re: [HACKERS] pgcrypto: PGP signatures

2014-09-03 Thread Marko Tiikkaja

On 2014-09-03 10:33 PM, Jeff Janes wrote:

On Wed, Sep 3, 2014 at 12:43 PM, Marko Tiikkaja ma...@joh.to wrote:

Right.  This patch only adds support for signing data when encrypting it
at the same time.  There's no support for detached signatures, nor is there
support for anything other than signatures of encrypted data.  I should
have been more clear on that in my initial email. :-(



OK, thanks.  How hard do you think it would be to allow NULL (or empty
string?) passwords to pgp_sym_signatures and pgp_sym_decrypt_verify to
accommodate this?


To sign without encrypting?  I think those should really be a different 
set of functions altogether.  But this patch is already humongous (on my 
standards, at least), so I think that should be done separately.



I think docs section F.25.3 needs to be re-titled and expanded to reflect
signatures as well as encryption, and an explanation added about signatures
only being processed on encrypted data if that restriction can't be removed.


I don't have an objection to that.  I fully acknowledge that the 
documentation doesn't state the limitations of signing should this patch 
be applied.



I've switched to using a signed plus symmetrically encrypted message for
testing.

One surprising thing so far is that the 3rd argument to
pgp_sym_decrypt_verify must be dearmored. I thought it would detect and
dearmor automatically.


I can't see that as an improvement to be honest.


Once I wrap it in dearmor, I get the ERROR:  No signature matching the key
id present in the message

The public key block I am giving it is for the keyid that is reported
by pgp_sym_signatures, so I don't know what the problem might be.


Have you tried with the debug=1 option?  (It's undocumented, but it was 
like that before this patch and I didn't touch it).



When I get more time, I'll look at your examples from the regression tests
to see if I can figure it out.


Thanks!  I'm happy to help if you run into any trouble!


.marko




Re: [HACKERS] delta relations in AFTER triggers

2014-09-03 Thread Kevin Grittner
Marti Raudsepp ma...@juffo.org wrote:
 On Wed, Sep 3, 2014 at 10:49 PM, Kevin Grittner kgri...@ymail.com wrote:
 Marti Raudsepp ma...@juffo.org wrote:
 The concept of lightweight relations that pop into existence when a
 certain kind of trigger definition is used somewhere in the function
 stack, without a CREATE TABLE, without being discoverable in
 information_schema etc., I find needs some more justification than
 I've seen in this thread. So far I've only heard that it's more
 convenient to implement in the current PostgreSQL code base.

 It is required by the SQL standard.

 I had a cursory read of the SQL 20nn draft and I don't get this
 impression. The only place I could find discussing the behavior of
 transition tables is in Foundation 4.39.1 General description of
 triggers, which says:

 Special variables make the data in the transition table(s) available
 to the triggered action. For a statement-level
 trigger the variable is one whose value is a transition table.

 There is no information about the scoping of such variables, so I
 assume it refers to a regular locally scoped variable.

 Did I miss something?

Apparently.  I did a search on the document and counted and got 101
occurrences of transition table.  I might be off by a few, but
that should be pretty close.  Perhaps this, from 4.14 most directly
answers your point:

| A transient table is a named table that may come into existence
| implicitly during the evaluation of a query expression or the
| execution of a trigger. A transient table is identified by a 
| query name if it arises during the evaluation of a query 
| expression, or by a transition table name if it arises during 
| the execution of a trigger. Such tables exist only for the 
| duration of the executing SQL-statement containing the query 
| expression or for the duration of the executing trigger.

 Are you reading a different version of the spec?

I'm looking at a draft of 200x from  2006-02-01.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Need Multixact Freezing Docs

2014-09-03 Thread Robert Haas
On Tue, Sep 2, 2014 at 8:18 PM, Bruce Momjian br...@momjian.us wrote:
 On Thu, Aug 28, 2014 at 09:32:17AM -0700, Josh Berkus wrote:
 On 08/28/2014 09:09 AM, Alvaro Herrera wrote:
  Josh Berkus wrote:
  On 04/16/2014 01:30 PM, Alvaro Herrera wrote:
  Josh Berkus wrote:
 
  You can see the current multixact value in pg_controldata output.  Keep
  timestamped values of that somewhere (a table?) so that you can measure
  consumption rate.  I don't think we provide SQL-level access to those
  values.
 
  Bleh.  Do we provide SQL-level access in 9.4?  If not, I think that's a
  requirement before release.
 
  Yeah, good idea.  Want to propose a patch?
 
  Yeah, lemme dig into this.  I really think we need it for 9.4, feature
  frozen or not.

 Got sidetracked by JSONB.

 I had a look at this and came upon a problem --- there is no multi-xid
 SQL data type, and in fact the system catalogs that store mxid values
 use xid, e.g.:

  relminmxid | xid   | not null

 With no mxid data type, there is no way to do function overloading to
 cause age to call the mxid variant.

 Should we use an explicit mxid_age() function name?  Add an mxid data
 type?

Maybe both.  But mxid_age() is probably the simpler way forward just to start.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] B-Tree support function number 3 (strxfrm() optimization)

2014-09-03 Thread Robert Haas
On Tue, Sep 2, 2014 at 4:41 PM, Peter Geoghegan p...@heroku.com wrote:
 HyperLogLog isn't sample-based - it's useful for streaming a set and
 accurately tracking its cardinality with fixed overhead.

OK.
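
For readers unfamiliar with the technique, here is a toy HyperLogLog in the streaming style Peter describes: fixed memory (1024 small registers) regardless of how many values flow through. This is a sketch of the general algorithm only, assuming nothing about the patch's actual code:

```python
import hashlib
import math

B = 10
M = 1 << B                           # 1024 registers
ALPHA = 0.7213 / (1 + 1.079 / M)     # standard bias-correction constant
registers = [0] * M

def add(value):
    # 64-bit hash: low B bits choose a register, the rest feed the rank
    h = int.from_bytes(hashlib.sha1(value.encode()).digest()[:8], "big")
    idx = h & (M - 1)
    w = h >> B                       # remaining 54 bits
    rho = 54 - w.bit_length() + 1    # 1 + number of leading zero bits
    registers[idx] = max(registers[idx], rho)

def estimate():
    e = ALPHA * M * M / sum(2.0 ** -r for r in registers)
    zeros = registers.count(0)
    if e <= 2.5 * M and zeros:       # small-range correction
        e = M * math.log(M / zeros)
    return e

for i in range(10000):
    add("key-%d" % i)
print(round(estimate()))             # close to 10000, from ~1KB of state
```

The standard error for m registers is about 1.04/sqrt(m), so with 1024 registers the estimate is typically within a few percent.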

 Is it the right decision to suppress the abbreviated-key optimization
 unconditionally on 32-bit systems and on Darwin?  There's certainly
 more danger, on those platforms, that the optimization could fail to
 pay off.  But it could also win big, if in fact the first character or
 two of the string is enough to distinguish most rows, or if Darwin
 improves their implementation in the future.  If the other defenses
 against pathological cases in the patch are adequate, I would think
 it'd be OK to remove the hard-coded checks here and let those cases
 use the optimization or not according to its merits in particular
 cases.  We'd want to look at what the impact of that is, of course,
 but if it's bad, maybe those other defenses aren't adequate anyway.

 I'm not sure. Perhaps the Darwin thing is a bad idea because no one is
 using Macs to run real database servers. Apple haven't had a server
 product in years, and typically people only use Postgres on their Macs
 for development. We might as well have coverage of the new code for
 the benefit of Postgres hackers that favor Apple machines. Or, to look
 at it another way, the optimization is so beneficial that it's
 probably worth the risk, even for more marginal cases.

 8 primary weights (the leading 8 bytes, frequently isomorphic to the
 first 8 Latin characters, regardless of whether or not they have
 accents/diacritics, or punctuation/whitespace) is twice as many as 4.
 But every time you add a byte of space to the abbreviated
 representation that can resolve a comparison, the number of
 unresolvable-without-tiebreak comparisons (in general) is, I imagine,
 reduced considerably. Basically, 8 bytes is way better than twice as
 good as 4 bytes in terms of its effect on the proportion of
 comparisons that are resolved only with abbreviated keys. Even still,
 I suspect it's still worth it to apply the optimization with only 4.

 You've seen plenty of suggestions on assessing the applicability of
 the optimization from me. Perhaps you have a few of your own.

My suggestion is to remove the special cases for Darwin and 32-bit
systems and see how it goes.

 That wouldn't be harmless - it would probably result in incorrect
 answers in practice, and would certainly be unspecified. However, I'm
 not reading uninitialized bytes. I call memset() so that in the event
 of the final strxfrm() blob being less than 8 bytes (which can happen
 even on glibc with en_US.UTF-8). It cannot be harmful to memcmp()
 every Datum byte if the remaining bytes are always initialized to NUL.

OK.
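
A compressed model of the abbreviated-key scheme under discussion: compare NUL-padded 8-byte prefixes first and consult the full values only on ties. Byte-wise prefixes stand in for real strxfrm() blobs here; this is a sketch, not the patch's code:

```python
import functools

full_compares = 0

def abbrev(s):
    b = s.encode("utf-8")[:8]
    return b + b"\0" * (8 - len(b))  # NUL padding: every byte is initialized,
                                     # so comparing all 8 bytes is always safe

def cmp_items(a, b):
    global full_compares
    ka, kb = abbrev(a), abbrev(b)
    if ka != kb:                     # resolved by the abbreviated key alone
        return -1 if ka < kb else 1
    full_compares += 1               # tiebreak on the authoritative value
    return (a > b) - (a < b)

data = ["zebra", "applesauce", "apple", "banana", "apple"]
result = sorted(data, key=functools.cmp_to_key(cmp_items))
print(result)        # same order as sorted(data)
print(full_compares) # only prefix ties paid for a full comparison
```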

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PL/pgSQL 1.2

2014-09-03 Thread Hannu Krosing
On 09/03/2014 05:09 PM, Marko Tiikkaja wrote:
 On 9/3/14 5:05 PM, Bruce Momjian wrote:
 On Wed, Sep  3, 2014 at 07:54:09AM +0200, Pavel Stehule wrote:
 I am not against to improve a PL/pgSQL. And I repeat, what can be
 done and can
 be done early:

 a) ASSERT clause -- with some other modification to allow better
 static analyze
 of DML statements, and enforces checks in runtime.

 b) #option or PRAGMA clause with GUC with function scope that
 enforce check on
 processed rows after any DML statement

 c) maybe introduction automatic variable ROW_COUNT as shortcut for GET
 DIAGNOSTICS rc = ROW_COUNT

 All these ideas are being captured somewhere, right?  Where?

 I'm working on a wiki page with all these ideas.  Some of them break
 backwards compatibility somewhat blatantly, some of them could be
 added into PL/PgSQL if we're okay with reserving a keyword for the
 feature. All of them we think are necessary.

Ok, here are my 0.5 cents worth of proposals for some features discussed
in this thread

They should be backwards compatible, but perhaps they are not very
ADA/SQL-kosher  ;)

They also could be implemented as macros first with possible
optimisations in the future


1. Conditions for number of rows returned by SELECT or touched by UPDATE
or DELETE
-

Enforcing number of rows returned/affected could be done using the
following syntax which is concise and clear (and should be in no way
backwards incompatible)

SELECT[1]   - select exactly one row, anything else raises error
SELECT[0:1]   - select zero or one rows, anything else raises error
SELECT[1:] - select one or more rows

plain SELECT is equivalent to SELECT[0:]

same syntax could be used for enforcing sane affected row counts
for INSERT and DELETE


A more SQL-ish way of doing the same could probably be called COMMAND
CONSTRAINTS
and look something like this

SELECT
...
CHECK (ROWCOUNT BETWEEN 0 AND 1);
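
For comparison, here is roughly what the strict one-row behaviour has to
look like in today's PL/pgSQL, i.e. what something like UPDATE[1] might
desugar to (the table and variable names below are made up for
illustration):

```sql
DO $$
DECLARE
    rc bigint;
BEGIN
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    -- the boilerplate the proposed syntax would eliminate:
    GET DIAGNOSTICS rc = ROW_COUNT;
    IF rc <> 1 THEN
        RAISE EXCEPTION 'expected exactly one row, got %', rc;
    END IF;
END
$$;
```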



2. Substitute for EXECUTE with string manipulation


using backticks `` for value/command substitution in SQL as an alternative
to EXECUTE string

Again, it should be backwards compatible, as backticks (``) are currently
not allowed inside pl/pgsql functions

Sample 1:

ALTER USER `current_user` PASSWORD newpassword;

would be expanded to

EXECUTE 'ALTER USER ' || current_user ||
' PASSWORD = $1' USING newpassword;

Sample2:

SELECT * FROM `tablename` WHERE "`idcolumn`" = idvalue;

this could be expanded to

EXECUTE 'SELECT * FROM ' || tablename ||
' WHERE quote_ident(idcolumn) = $1' USING idvalue;

Notice that the use of "" around `` forced the use of quote_ident()
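
With today's tools, much the same expansion can be written with format(),
whose %I specifier applies quote_ident() for us. A sketch, assuming
tablename, idcolumn and idvalue are PL/pgSQL variables:

```sql
EXECUTE format('SELECT * FROM %I WHERE %I = $1', tablename, idcolumn)
USING idvalue;
```

Note this quotes both identifiers, whereas in the sample above only
idcolumn was quote_ident()-ed.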


3. A way to tell pl/pgsql not to cache plans for normal queries
---

This could be done using a #pragma or special /* NOPLANCACHE */
comment as suggested by Pavel

Or we could expand the [] descriptor from 1. to allow more options

Or we could do it in an SQL-ish way like this:

SELECT
...
USING FRESH PLAN;
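
Until such an option exists, the usual workaround is EXECUTE, since
dynamically executed statements are planned afresh on each call. A
sketch (cnt and tablename are assumed PL/pgSQL variables):

```sql
EXECUTE 'SELECT count(*) FROM ' || quote_ident(tablename) INTO cnt;
```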


Best Regards

-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ





Re: [HACKERS] B-Tree support function number 3 (strxfrm() optimization)

2014-09-03 Thread Robert Haas
On Tue, Sep 2, 2014 at 7:51 PM, Peter Geoghegan p...@heroku.com wrote:
 On Tue, Sep 2, 2014 at 12:22 PM, Robert Haas robertmh...@gmail.com wrote:
 Maybe we should get rid of the tiebreak case altogether: the second
 SortSupport object is just containing all the same values as the first
 one, with only the comparator being different.  Can't we just have
 both the abbreviated-comparator and authoritative-comparator as
 members of the SortSupport, and call whichever one is appropriate,
 instead of copying the whole SortSupport object?  That would have the
 nice property of avoiding the need for special handling in
 reversedirection_heap().

 I thought about that. I think that there are other disadvantages to
 explicitly having a second comparator associated with the same sort
 support state as the authoritative comparator: ApplySortComparator()
 expects to compare using ssup->comparator(). You'd have to duplicate
 that for your alternative/abbreviated comparator. It might be to our
 advantage to use the same ApplySortComparator() inline comparator
 multiple times in routines like comparetup_heap(), if not for clarity
 then for performance (granted, that isn't something I have any
 evidence for, but I wouldn't be surprised if it was noticeable). It
 might also be to our advantage to have a separate work space.

Well, the additional code needed in ApplySortComparator would be about
two lines long.  Maybe that's going to turn out to be too expensive to
do in all cases, so that we'll end up with ApplySortComparator and
ApplyAbbreviatedSortComparator, but even if we do that seems less
heavyweight than spawning a whole separate object for the tiebreak
case.
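
A sketch of the shape Robert is suggesting: one SortSupport object
carrying both comparators, with the tiebreak applied inline. The types
and names below are hypothetical stand-ins, not the real PostgreSQL
definitions:

```c
typedef long Datum;             /* stand-in for the real Datum */

typedef struct SortSupportData SortSupportData;
struct SortSupportData
{
    /* compares abbreviated keys; may report a tie it cannot resolve */
    int (*comparator)(Datum x, Datum y, SortSupportData *ssup);
    /* authoritative comparator, used only to break abbreviated ties */
    int (*abbrev_full_comparator)(Datum x, Datum y, SortSupportData *ssup);
};

static inline int
ApplyAbbreviatedSortComparator(Datum d1, Datum d2, SortSupportData *ssup)
{
    int c = ssup->comparator(d1, d2, ssup);

    if (c == 0)
        c = ssup->abbrev_full_comparator(d1, d2, ssup);
    return c;
}

/* toy comparators: the "abbreviated" one only sees the tens digit */
static int cmp_coarse(Datum x, Datum y, SortSupportData *s)
{ (void) s; return (x / 10 > y / 10) - (x / 10 < y / 10); }
static int cmp_full(Datum x, Datum y, SortSupportData *s)
{ (void) s; return (x > y) - (x < y); }
```

This is the "about two lines" of extra logic: a tie check plus a second
indirect call, with no second SortSupport object to copy.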

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] postgresql latency bgwriter not doing its job

2014-09-03 Thread Andres Freund
On 2014-09-03 17:08:12 -0400, Robert Haas wrote:
 On Sat, Aug 30, 2014 at 2:04 PM, Andres Freund and...@2ndquadrant.com wrote:
  If the sort buffer is allocated when the checkpointer is started, not
  everytime we sort, as you've done in your version of the patch I think
  that risk is pretty manageable. If we really want to be sure nothing is
  happening at runtime, even if the checkpointer was restarted, we can put
  the sort array in shared memory.
 
 I don't think that allocating the array at checkpointer start time
 helps.  If it works, then you're strictly worse off than if you
 allocate it at every checkpoint, because you're holding onto the
 memory all the time instead of only when it's being used.  And if it
 fails, what then?  Sure, you can have that copy of the checkpointer
 process exit, but that does nothing good.  The postmaster will keep on
 restarting it and it will keep on dying for lack of memory, and no
 checkpoints will complete.  Oops.

It's imo quite clearly better to keep it allocated. For one, after the
postmaster has started the checkpointer successfully, you don't need to
worry about later failures to allocate memory if you allocate it once
(unless the checkpointer FATALs out, which should be exceedingly rare -
we're catching ERRORs). It's much more likely to succeed
initially. Secondly, it's not like there's really that much time where
no checkpointer is running.

 So it seems to me that the possibly-sensible approaches are:
 
 1. Allocate an array when we need to sort, and if the allocation
 fails, have some kind of fallback strategy, like logging a WARNING and
 writing the buffers out without sorting them.  If it succeeds, do the
 checkpoint and then free the memory until we need it again.

I think if we want to go that way I vote for keeping the array allocated
and continuing to try to allocate it after allocation failures. And, as
you suggest, fall back to a simple sequential search through all
buffers.
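
A minimal sketch of that combination - keep the array once allocated,
retry allocation after a failure, and fall back to an unsorted pass when
malloc fails. All names are hypothetical, not taken from the actual
patch:

```c
#include <stdio.h>
#include <stdlib.h>

static int    *sort_array = NULL;
static size_t  sort_array_size = 0;

/* returns 1 if the sorted write path can be used, 0 to fall back */
int ensure_sort_array(size_t nbuffers)
{
    if (sort_array != NULL && sort_array_size >= nbuffers)
        return 1;                       /* reuse the earlier allocation */

    free(sort_array);
    sort_array = malloc(nbuffers * sizeof(int));
    if (sort_array == NULL)
    {
        sort_array_size = 0;
        fprintf(stderr, "WARNING: could not allocate checkpoint sort "
                        "array, writing buffers unsorted\n");
        return 0;                       /* retry at the next checkpoint */
    }
    sort_array_size = nbuffers;
    return 1;
}
```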

 2. Putting the array in shared memory, so that once the server is
 started, we can be sure the memory is allocated and the sort will
 work.

But I prefer this approach. If we ever want to have more than one
process writing out data for checkpoints we're going to need it
anyway. And that's something not that far away for large setups
imo. Especially due to checksums.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PL/pgSQL 1.2

2014-09-03 Thread Kevin Grittner
Hannu Krosing ha...@2ndquadrant.com wrote:

 [suggested syntax]

Interesting.  The only one that really offends me is:

 SELECT * FROM `tablename` WHERE "`idcolumn`" = idvalue;

I think that should be:

SELECT * FROM `tablename` WHERE `"idcolumn"` = idvalue;

i.e., I think the backticks belong on the outside.


--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] B-Tree support function number 3 (strxfrm() optimization)

2014-09-03 Thread Peter Geoghegan
On Wed, Sep 3, 2014 at 2:18 PM, Robert Haas robertmh...@gmail.com wrote:
 My suggestion is to remove the special cases for Darwin and 32-bit
 systems and see how it goes.

I guess it should still be a configure option, then. Or maybe there
should just be a USE_ABBREV_KEYS macro within pg_config_manual.h.

Are you suggesting that the patch be committed with the optimization
enabled on all platforms by default, with the option to revisit
disabling it if and when there is user push-back? I don't think that's
unreasonable, given the precautions now taken, but I'm just not sure
that's what you mean.

-- 
Peter Geoghegan




Re: [HACKERS] PL/pgSQL 1.2

2014-09-03 Thread Marko Tiikkaja

On 2014-09-03 23:19, Hannu Krosing wrote:

1. Conditions for number of rows returned by SELECT or touched by UPDATE
or DELETE
-

Enforcing number of rows returned/affected could be done using the
following syntax which is concise and clear (and should be in no way
backwards incompatible)

SELECT[1]   - select exactly one row, anything else raises error
SELECT[0:1]   - select zero or one rows, anything else raises error
SELECT[1:] - select one or more rows

plain SELECT is equivalent to SELECT[0:]

same syntax could be used for enforcing sane affected row counts
for INSERT and DELETE


I'm not sure how much I like that syntax in cases like:

  WITH t AS (
-- multi-line query here
  )
  SELECT[0:] foo, bar
  INTO _bat, _man
  FROM foo
  JOIN ..
  JOIN ..
  WHERE ..
  -- etc.

It seems quite well hidden compared to a single keyword at the beginning 
of the query.


It's also not clear whether all of this flexibility is required. 
Enforcing "exactly one" conveniently is my main priority.  Supporting 
the "at most one" case could be nice, too, but anything else feels like 
overkill.  Though if the syntax is based on numbers (and not a keyword), 
then I guess we get the flexibility for free anyway.


I also have my doubts about how easy it would be to implement this 
syntax given that we're using the real SQL parser.



.marko




Re: [HACKERS] PQputCopyEnd doesn't adhere to its API contract

2014-09-03 Thread Bruce Momjian
On Fri, May  9, 2014 at 12:03:36PM -0400, Robert Haas wrote:
 On Thu, May 8, 2014 at 5:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Perhaps the text should be like this:
 
  The result is 1 if the termination message was sent; or in nonblocking
  mode, this may only indicate that the termination message was successfully
  queued.  (In nonblocking mode, to be certain that the data has been sent,
  you should next wait for write-ready and call <function>PQflush</function>,
  repeating until it returns zero.)  Zero indicates that the function could
  not queue the termination message because of full buffers; this will only
  happen in nonblocking mode.  (In this case, wait for write-ready and try
  the PQputCopyEnd call again.)  If a hard error occurs, -1 is returned; you
  can use <function>PQerrorMessage</function> to retrieve details.
 
 That looks pretty good.   However, I'm realizing this isn't the only
 place where we probably need to clarify the language.  Just to take
 one example near at hand, PQputCopyData may also return 1 when it's
 only queued the data; it seems to try even less hard than PQputCopyEnd
 to ensure that the data is actually sent.
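
The protocol Tom's wording describes can be sketched as a retry loop. A
simulated version - fake_PQputCopyEnd/fake_PQflush stand in for the real
libpq calls, and wait_for_write_ready stands in for a select()/poll() on
the connection's socket:

```c
static int calls = 0;

static int fake_PQputCopyEnd(void)
{
    /* return 0 (buffers full) twice, then 1 (message queued) */
    return (++calls < 3) ? 0 : 1;
}

static int fake_PQflush(void)
{
    /* 1 = data still pending, 0 = everything sent, -1 = error */
    static int pending = 2;
    return (pending-- > 0) ? 1 : 0;
}

static void wait_for_write_ready(void)
{
    /* in real code: block until the connection's socket is writable */
}

int send_copy_end(void)
{
    int r;

    while ((r = fake_PQputCopyEnd()) == 0)
        wait_for_write_ready();     /* full buffers: wait and retry */
    if (r < 0)
        return -1;                  /* hard error: consult PQerrorMessage() */
    while (fake_PQflush() != 0)
        wait_for_write_ready();     /* repeat until PQflush returns zero */
    return 1;
}
```

The same flush loop would apply after PQputCopyData returning 1 in
nonblocking mode, which is the gap Robert points out.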

Uh, where are we on this?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] 9.5: Memory-bounded HashAgg

2014-09-03 Thread Tomas Vondra
On 4.9.2014 00:42, Tomas Vondra wrote:

 Attached are two CSV files contain both raw results (4 runs per query),
 and aggregated results (average of the runs), logs with complete logs
 and explain (analyze) plans of the queries for inspection.

Of course, I forgot to attach the CSV files ... here they are.

Tomas
dataset,work_mem,query,master,dense,jeff,hashjoin
large,1024MB,a,47398,49530,36110,30480
large,1024MB,b,13297,9326,9325,9378
large,1024MB,c,9824,7758,7890,
large,1024MB,d,9524,7503,7480,7543
large,128MB,a,46095,47118,43313,40579
large,128MB,b,13466,9446,22317,19860
large,128MB,c,9853,7759,7788,7835
large,128MB,d,9596,7508,7473,7567
large,256MB,a,45560,47484,41188,38177
large,256MB,b,13457,9248,11821,11839
large,256MB,c,9801,7761,7701,7980
large,256MB,d,9521,7516,7468,7555
large,512MB,a,45695,47548,37628,32669
large,512MB,b,13316,9321,9290,9387
large,512MB,c,9797,7759,7715,7836
large,512MB,d,9512,7510,7464,7616
large,64MB,a,42152,44886,44127,38459
large,64MB,b,13688,9984,23746,24419
large,64MB,c,9826,7766,7875,7810
large,64MB,d,9520,7505,7496,7566
medium,1024MB,a,7474,4045,4140,4175
medium,1024MB,b,2410,1773,1764,1807
medium,1024MB,c,1937,1550,1540,1551
medium,1024MB,d,1905,1504,1496,1512
medium,128MB,a,8901,8864,5901,5400
medium,128MB,b,2418,1772,1765,1794
medium,128MB,c,1946,1551,1540,1557
medium,128MB,d,1912,1502,1499,1517
medium,256MB,a,8714,9097,5714,4854
medium,256MB,b,2425,1774,1766,1794
medium,256MB,c,1971,1551,1539,1551
medium,256MB,d,1915,1503,1503,1523
medium,512MB,a,,9310,5210,4709
medium,512MB,b,2406,1772,1769,1793
medium,512MB,c,1937,1551,1540,1571
medium,512MB,d,1903,1503,1522,1526
medium,64MB,a,8542,8698,5973,4784
medium,64MB,b,2507,1794,2037,2080
medium,64MB,c,1973,1566,1578,1550
medium,64MB,d,1922,1498,1501,1506
small,1024MB,a,655,382,365,409
small,1024MB,b,240,176,196,178
small,1024MB,c,192,155,153,156
small,1024MB,d,215,152,150,152
small,128MB,a,657,398,364,410
small,128MB,b,255,176,174,178
small,128MB,c,216,178,153,156
small,128MB,d,204,152,150,153
small,256MB,a,657,383,363,409
small,256MB,b,233,187,173,178
small,256MB,c,194,155,177,195
small,256MB,d,192,152,151,152
small,512MB,a,653,384,364,420
small,512MB,b,255,176,174,203
small,512MB,c,191,155,178,156
small,512MB,d,191,152,157,152
small,64MB,a,842,885,418,442
small,64MB,b,234,178,174,179
small,64MB,c,193,157,154,195
small,64MB,d,193,152,188,152
branch,work_mem,dataset,query,duration
master,64MB,small,a,869
master,64MB,small,a,832
master,64MB,small,a,831
master,64MB,small,a,836
master,64MB,small,b,236
master,64MB,small,b,232
master,64MB,small,b,233
master,64MB,small,b,233
master,64MB,small,c,195
master,64MB,small,c,193
master,64MB,small,c,192
master,64MB,small,c,192
master,64MB,small,d,192
master,64MB,small,d,193
master,64MB,small,d,193
master,64MB,small,d,192
master,64MB,medium,a,8338
master,64MB,medium,a,8545
master,64MB,medium,a,8861
master,64MB,medium,a,8423
master,64MB,medium,b,2536
master,64MB,medium,b,2567
master,64MB,medium,b,2492
master,64MB,medium,b,2431
master,64MB,medium,c,1993
master,64MB,medium,c,1960
master,64MB,medium,c,1959
master,64MB,medium,c,1981
master,64MB,medium,d,1929
master,64MB,medium,d,1919
master,64MB,medium,d,1920
master,64MB,medium,d,1919
master,64MB,large,a,41844
master,64MB,large,a,42201
master,64MB,large,a,42115
master,64MB,large,a,42449
master,64MB,large,b,13827
master,64MB,large,b,13653
master,64MB,large,b,13941
master,64MB,large,b,13332
master,64MB,large,c,9783
master,64MB,large,c,9802
master,64MB,large,c,9816
master,64MB,large,c,9903
master,64MB,large,d,9512
master,64MB,large,d,9526
master,64MB,large,d,9518
master,64MB,large,d,9523
master,128MB,small,a,655
master,128MB,small,a,654
master,128MB,small,a,664
master,128MB,small,a,656
master,128MB,small,b,254
master,128MB,small,b,249
master,128MB,small,b,250
master,128MB,small,b,267
master,128MB,small,c,205
master,128MB,small,c,205
master,128MB,small,c,207
master,128MB,small,c,245
master,128MB,small,d,207
master,128MB,small,d,196
master,128MB,small,d,214
master,128MB,small,d,200
master,128MB,medium,a,8704
master,128MB,medium,a,9786
master,128MB,medium,a,8542
master,128MB,medium,a,8571
master,128MB,medium,b,2417
master,128MB,medium,b,2417
master,128MB,medium,b,2420
master,128MB,medium,b,2418
master,128MB,medium,c,1951
master,128MB,medium,c,1946
master,128MB,medium,c,1944
master,128MB,medium,c,1942
master,128MB,medium,d,1910
master,128MB,medium,d,1909
master,128MB,medium,d,1915
master,128MB,medium,d,1915
master,128MB,large,a,45674
master,128MB,large,a,46041
master,128MB,large,a,43623
master,128MB,large,a,49043
master,128MB,large,b,13495
master,128MB,large,b,13509
master,128MB,large,b,13428
master,128MB,large,b,13430
master,128MB,large,c,9850
master,128MB,large,c,9852
master,128MB,large,c,9866
master,128MB,large,c,9845
master,128MB,large,d,9596
master,128MB,large,d,9623
master,128MB,large,d,9573
master,128MB,large,d,9590
master,256MB,small,a,656
master,256MB,small,a,658
master,256MB,small,a,658
master,256MB,small,a,657

Re: [HACKERS] PL/pgSQL 2

2014-09-03 Thread Joshua D. Drake


On 09/03/2014 11:48 AM, Robert Haas wrote:


Anyway, to get back around to the topic of PL/SQL compatibility
specifically, if you care about that issue, pick one thing that exists
in PL/SQL but not in PL/pgsql and try to do something about it.  Maybe
it'll be something that EnterpriseDB has already done something about,
in which case, if your patch gets committed, Advanced Server will lose
a bit of distinction as compared with PostgreSQL.  Or maybe it'll be
something that EnterpriseDB hasn't done anything about, and then
everybody comes out strictly ahead.  What I think you shouldn't do
(although you're free to ignore me) is continue thinking of Oracle
compatibility as one monolithic thing, because it isn't, or to pursue
of a course of trying to get the PostgreSQL community to slavishly
follow Oracle, because I think you'll fail, and even if you succeed I
don't think the results will actually be positive for PostgreSQL.


Well put Robert.

JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
If we send our children to Caesar for their education, we should
 not be surprised when they come back as Romans.




Re: [HACKERS] Pg_upgrade and toast tables bug discovered

2014-09-03 Thread Noah Yetter
I'm not sure it's fixed.  I am attempting a pg_upgrade from 9.2.8 to 9.3.5
and it dies like so:

(...many relations restoring successfully snipped...)
pg_restore: creating SEQUENCE address_address_id_seq
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 1410; 1259 17670 SEQUENCE
address_address_id_seq javaprod
pg_restore: [archiver (db)] could not execute query: ERROR:  could not
create file base/16414/17670: File exists

Inspecting a copy of the source cluster, OID 17670 does indeed correspond
to address_address_id_seq, but inspecting the partially-upgraded cluster
that OID is taken by pg_toast_202359_index.  Again conferring with a copy
of the source (9.2.8) cluster, the relation corresponding to filenode
202359 does not have a toast table.

(I know pg-hackers isn't the right place to discuss admin issues, but this
thread is the only evidence of this bug I can find.  If anyone can suggest
a workaround I would be infinitely grateful.)


On Thu, Aug 7, 2014 at 12:57 PM, Bruce Momjian br...@momjian.us wrote:

 On Tue, Aug  5, 2014 at 07:31:21PM -0400, Bruce Momjian wrote:
  On Thu, Jul 10, 2014 at 06:38:26PM -0400, Bruce Momjian wrote:
   On Thu, Jul 10, 2014 at 06:17:14PM -0400, Bruce Momjian wrote:
Well, we are going to need to call internal C functions, often
 bypassing
their typical call sites and the assumption about locking, etc.
 Perhaps
this could be done from a plpgsql function.  We could add and drop a
dummy column to force TOAST table creation --- we would then only
 need a
way to detect if a function _needs_ a TOAST table, which was skipped
 in
binary upgrade mode previously.
   
That might be a minimalistic approach.
  
   I have thought some more on this.  I thought I would need to open
   pg_class in C and do complex backend stuff, but I now realize I can do
   it from libpq, and just call ALTER TABLE and I think that always
   auto-checks if a TOAST table is needed.  All I have to do is query
   pg_class from libpq, then construct ALTER TABLE commands for each item,
   and it will optionally create the TOAST table if needed.  I just have
 to
   use a no-op ALTER TABLE command, like SET STATISTICS.
 
  Attached is a completed patch which handles oid conflicts in pg_class
  and pg_type for TOAST tables that were not needed in the old cluster but
  are needed in the new one.  I was able to recreate a failure case and
  this fixed it.
 
  The patch needs to be backpatched because I am getting more-frequent bug
  reports from users using pg_upgrade to leave now-end-of-life'ed 8.4.
  There is not a good work-around for pg_upgrade failures without this
  fix, but at least pg_upgrade throws an error.

 Patch applied through 9.3, with an additional Assert check. 9.2 code was
 different enough that there was too high a risk for backpatching.

 --
   Bruce Momjian  br...@momjian.ushttp://momjian.us
   EnterpriseDB http://enterprisedb.com

   + Everyone has their own god. +





Re: [HACKERS] 9.5: Memory-bounded HashAgg

2014-09-03 Thread Tomas Vondra
On 20.8.2014 20:32, Robert Haas wrote:
 On Sun, Aug 17, 2014 at 1:17 PM, Tomas Vondra t...@fuzzy.cz wrote:
 Being able to batch inner and outer relations in a matching way is
 certainly one of the reasons why hashjoin uses that particular scheme.
 There are other reasons, though - for example being able to answer 'Does
 this group belong to this batch?' quickly, and automatically update
 number of batches.

 I'm not saying the lookup is extremely costly, but I'd be very surprised
 if it was as cheap as modulo on a 32-bit integer. Not saying it's the
 dominant cost here, but memory bandwidth is quickly becoming one of the
 main bottlenecks.
 
 Well, I think you're certainly right that a hash table lookup is more
 expensive than modulo on a 32-bit integer; so much is obvious.  But if
 the load factor is not too large, I think that it's not a *lot* more
 expensive, so it could be worth it if it gives us other advantages.

Yes, that may be true. I'm not opposed to Jeff's approach in general -
it's certainly a nice solution for cases with fixed size of the
aggregate states.

But I still don't see how it could handle the aggregates with growing
aggregate state (which is the case that troubles me, because that's what
we see in our workloads).

 As I see it, the advantage of Jeff's approach is that it doesn't
 really matter whether our estimates are accurate or not.  We don't
 have to decide at the beginning how many batches to do, and then
 possibly end up using too much or too little memory per batch if we're
 wrong; we can let the amount of memory actually used during execution
 determine the number of batches.  That seems good.  Of course, a hash

Yes. I think that maybe we could use Jeff's approach even for 'growing
aggregate state' case, assuming we can serialize the aggregate states
and release the memory properly.

First, the problem with the current hash table used in HashAggregate
(i.e. dynahash) is that it never actually frees memory - when you do
HASH_REMOVE it only moves it to a list of entries for future use.

Imagine a workload where you initially see only 1 tuple for each group
before work_mem gets full. At that point you stop adding new groups, but
the current ones will grow. Even if you know how to serialize the
aggregate states (which we don't), you're in trouble because the initial
state is small (only 1 tuple was passed to the group) and most of the
memory is stuck in dynahash.
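
A simplified sketch of that recycling behaviour: removal pushes the
entry onto a freelist instead of free()ing it, so the memory stays
reserved for future entries and is never returned to the allocator. The
structures and names here are made up for illustration, not dynahash's
actual code:

```c
#include <stdlib.h>

typedef struct Entry
{
    struct Entry *link;         /* freelist chain when removed */
    int           key;
} Entry;

static Entry *freelist = NULL;
static int    allocations = 0;  /* counts actual malloc() calls */

static Entry *entry_alloc(int key)
{
    Entry *e;

    if (freelist != NULL)
    {
        e = freelist;           /* reuse a previously removed entry */
        freelist = e->link;
    }
    else
    {
        e = malloc(sizeof(Entry));
        allocations++;
    }
    e->key = key;
    e->link = NULL;
    return e;
}

static void entry_remove(Entry *e)
{
    e->link = freelist;         /* not free(e): memory stays reserved */
    freelist = e;
}
```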

 join can increase the number of batches on the fly, but only by
 doubling it, so you might go from 4 batches to 8 when 5 would really
 have been enough.  And a hash join also can't *reduce* the number of
 batches on the fly, which might matter a lot.  Getting the number of
 batches right avoids I/O, which is a lot more expensive than CPU.
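
The doubling behaviour comes from how hash join assigns batches. A toy
sketch (not the PostgreSQL code): with nbatch a power of two, the batch
number is a bit mask of the hash, so when nbatch doubles a tuple either
stays in its batch or moves to batchno + old_nbatch - which is why the
count can only double, never grow to an arbitrary value like 5:

```c
/* batch assignment for a power-of-two nbatch */
static unsigned batchno(unsigned hash, unsigned nbatch)
{
    return hash & (nbatch - 1);
}
```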

Regarding the estimates, I don't see much difference between the two
approaches when handling this issue.

It's true you can wait with deciding how many partitions (aka batches)
to create until work_mem is full, at which point you have more
information than at the very beginning. You know how many tuples you've
already seen, how many tuples you expect (which is however only an
estimate etc.). And you may use that to estimate the number of
partitions to create.

That however comes at a cost - it's not really a memory-bounded hash
aggregate, because you explicitly allow exceeding work_mem as more
tuples for existing groups arrive.

Also, no one really says the initial estimate of how many tuples will be
aggregated is correct. It's about as unreliable as the group count
estimate. So how exactly are you going to estimate the partitions?

Considering this, I doubt being able to choose arbitrary number of
partitions (instead of only powers of 2) is really an advantage.

Reducing the number of partitions might matter, but in my experience
most estimation errors are underestimations. Because we assume
independence where in practice columns are dependent, etc.

I agree that getting the batches right is important, but OTOH when
using hash join, several smaller batches are often significantly faster
than one large batch. So it depends.

What I think we should prevent is under-estimating the number of
batches, because in that case you have to read the whole batch, write
part of it out again and then read it again, instead of just writing it
once (into two files). Reading a tuple from a batch only to write it to
another batch is not really efficient.


 But the situation here isn't comparable, because there's only one
 input stream.  I'm pretty sure we'll want to keep track of which
 transition states we've spilled due to lack of memory as opposed to
 those which were never present in the table at all, so that we can
 segregate the unprocessed tuples that pertain to spilled transition
 states from the ones that pertain to a group we haven't begun yet.

 Why would that be necessary or useful? I don't see a reason for tracking
 that / segregating the tuples.
 
 Suppose there are going to be three 

Re: [HACKERS] 9.5: Memory-bounded HashAgg

2014-09-03 Thread Tomas Vondra
On 4.9.2014 01:34, Tomas Vondra wrote:
 On 20.8.2014 20:32, Robert Haas wrote:

 As I see it, the advantage of Jeff's approach is that it doesn't
 really matter whether our estimates are accurate or not.  We don't
 have to decide at the beginning how many batches to do, and then
 possibly end up using too much or too little memory per batch if we're
 wrong; we can let the amount of memory actually used during execution
 determine the number of batches.  That seems good.  Of course, a hash

Also, you don't actually have to decide the number of batches at the
very beginning. You can start with nbatch=1 and decide how many
batches to use when the work_mem is reached. I.e. at exactly the same
moment / using the same amount of info as with Jeff's approach. No?

Tomas




Re: [HACKERS] Pg_upgrade and toast tables bug discovered

2014-09-03 Thread David G Johnston
Based upon the dates, the noted patch is not in 9.3.5, which was
released a couple of weeks before the patch was committed.

David J.








Re: [HACKERS] Display of timestamp in pg_dump custom format

2014-09-03 Thread Gavin Flower

On 04/09/14 08:13, Bruce Momjian wrote:

On Thu, May  1, 2014 at 12:09:34PM -0400, Bruce Momjian wrote:

On Thu, May  1, 2014 at 12:33:51PM +1200, Gavin Flower wrote:

On 01/05/14 12:04, Bruce Momjian wrote:

On Thu, May  1, 2014 at 08:27:49AM +1200, Gavin Flower wrote:

On 01/05/14 02:51, Bruce Momjian wrote:

The table of contents for pg_restore -l shows the time the archive was
made as local time (it uses ctime()):

; Archive created at Wed Apr 30 10:03:28 2014

Is this clear enough that it is local time?  Should we display this
better, perhaps with a time zone designation?


I think it would be good to include the time zone, as we are all
very international these days - and in Australia, adjacent states
have different dates for the summer time transition!

Personally, I would like to see the date in the format 2014-04-30,
but having the day of the week is good.

Milliseconds might be useful, if you want to check logs files.

OK, I will work on it for 9.5.  Thanks.


So it would then read something like:

 ; Archive created at Wed 2014-04-30 10:03:28.042 NZST

(but with the correct appropriate time zone designation)?

I think we would use a numeric offset.

I ended up going with the string-based timezone as I was worried that
the sign of the timezone could easily confuse people because the SQL
timezone offset sign is often different from the OS timezone.  The new
output is:

;
; Archive created at Wed Sep  3 16:12:21 2014 EST   --
; dbname: test
; TOC Entries: 8
; Compression: -1
; Dump Version: 1.12-0
; Format: CUSTOM
; Integer: 4 bytes
; Offset: 8 bytes
; Dumped from database version: 9.5devel
; Dumped by pg_dump version: 9.5devel

Patch attached.

I would prefer the date in a sane numeric format to the left of the time 
(similar to what I suggested above), easier to sort (if a sort is 
required) - it is also easier to use regular expressions to select 
statements in an arbitrary date/time range.


I don't always know in advance that I need to debug something, so I tend 
to try and ensure that the relevant data is easy to find, even when I 
currently don't expect ever to do so.  This is a lesson that I have 
learnt from over 40 years of commercial programming experience using a 
variety of languages on a wide range of platforms.


Most likely, I will never need to worry about the precise format of 
Archive statement output, but ...



Cheers,
Gavin





Re: [HACKERS] Pg_upgrade and toast tables bug discovered

2014-09-03 Thread Bruce Momjian
On Wed, Sep  3, 2014 at 05:12:30PM -0600, Noah Yetter wrote:
 I'm not sure it's fixed.  I am attempting a pg_upgrade from 9.2.8 to 9.3.5 and
 it dies like so:
 
 (...many relations restoring successfully snipped...)
 pg_restore: creating SEQUENCE address_address_id_seq
 pg_restore: [archiver (db)] Error while PROCESSING TOC:
 pg_restore: [archiver (db)] Error from TOC entry 1410; 1259 17670 SEQUENCE
 address_address_id_seq javaprod
 pg_restore: [archiver (db)] could not execute query: ERROR:  could not create
 file base/16414/17670: File exists
 
 Inspecting a copy of the source cluster, OID 17670 does indeed correspond to
 address_address_id_seq, but inspecting the partially-upgraded cluster that OID
 is taken by pg_toast_202359_index.  Again conferring with a copy of the source
 (9.2.8) cluster, the relation corresponding to filenode 202359 does not have a
 toast table.
 
 (I know pg-hackers isn't the right place to discuss admin issues, but this
 thread is the only evidence of this bug I can find.  If anyone can suggest a
 workaround I would be infinitely grateful.)

Actually, there was a pg_upgrade fix _after_ the release of 9.3.5 which
explains this failure:

commit 4c6780fd17aa43ed6362aa682499cc2f9712cc8b
Author: Bruce Momjian br...@momjian.us
Date:   Thu Aug 7 14:56:13 2014 -0400

pg_upgrade: prevent oid conflicts with new-cluster TOAST tables

Previously, TOAST tables only required in the new cluster could 
cause
oid conflicts if they were auto-numbered and a later conflicting 
oid had
to be assigned.

Backpatch through 9.3

Any chance you can download the 9.3.X source tree and try that?  You
need an entire install, not just a new pg_upgrade binary.  I am
disappointed I could not fix this before 9.3.5 was released.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] PL/pgSQL 1.2

2014-09-03 Thread Hannu Krosing
On 09/04/2014 12:17 AM, Marko Tiikkaja wrote:
 On 2014-09-03 23:19, Hannu Krosing wrote:
 1. Conditions for number of rows returned by SELECT or touched by UPDATE
 or DELETE
 -


 Enforcing number of rows returned/affected could be done using the
 following syntax which is concise and clear (and should be in no way
 backwards incompatible)

 SELECT[1]   - select exactly one row, anything else raises error
 SELECT[0:1]   - select zero or one rows, anything else raises error
 SELECT[1:] - select one or more rows

 plain SELECT is equivalent to SELECT[0:]

 same syntax could be used for enforcing sane affected row counts
 for INSERT and DELETE

 I'm not sure how much I like that syntax in cases like:

   WITH t AS (
 -- multi-line query here
   )
   SELECT[0:] foo, bar
   INTO _bat, _man
   FROM foo
   JOIN ..
   JOIN ..
   WHERE ..
   -- etc.

 It seems quite well hidden compared to a single keyword at the
 beginning of the query.
What do you have in mind?

Is your wiki page already available somewhere?

 It's also not clear whether all of this flexibility is required.
 Enforcing exactly one conveniently is my main priority.
What do you want here on top of SELECT ... INTO STRICT ... ?
 Supporting the at most one case could be nice, too, but anything
 else feels like overkill.  Though if the syntax is based on numbers
 (and not a keyword), then I guess we get the flexibility for free anyway.

 I also have my doubts about how easy it would be to implement this
 syntax given that we're using the real SQL parser.
Definitely not trivial, but at least doable :)

Finding and processing SELECT[...] could probably even be done with
a (regex-based?) pre-parser.


-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ





Re: [HACKERS] B-tree descend for insertion locking

2014-09-03 Thread Peter Geoghegan
On Tue, Mar 18, 2014 at 4:12 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 When inserting into a B-tree index, all the pages are read-locked when
 descending the tree. When we reach the leaf page, the read-lock is exchanged
 for a write-lock.

 There's nothing wrong with that, but why don't we just directly grab a
 write-lock on the leaf page?


Whatever happened to this idea?

-- 
Peter Geoghegan




Re: [HACKERS] Display of timestamp in pg_dump custom format

2014-09-03 Thread Bruce Momjian
On Thu, Sep  4, 2014 at 12:02:19PM +1200, Gavin Flower wrote:
 I would prefer the date in a sane numeric format to the left of the
 time (similar to what I suggested above), easier to sort (if a sort
 is required) - it is also easier to use regular expressions to
 select statement in an arbitrary date/time range.
 
 I don't always know in advance that I need to debug something, so I
 tend to try and ensure that the relevant data is easy to find, even
 when I currently don't expect ever to do so.  This is a lesson that
 I have learnt from over 40 years of commercial programming
 experience using a variety of languages on a wide range of
 platforms.
 
 Most likely, I will never need to worry about the precise format of
 Archive statement output, but ...

I can't seem to find a way to get the timezone offset via C; see:


http://stackoverflow.com/questions/635780/why-does-glibc-timezone-global-not-agree-with-system-time-on-dst

On Linux, do 'man timezone' for details.  'timezone' has the non-DST
offset from GMT, and 'daylight' is a boolean which indicates DST, but
not how much the offset differs during DST, and I am not sure it is
always an hour.  In fact, 'daylight' is documented as saying whether
there is ever a daylight savings time, not whether DST is active.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] WAL format and API changes (9.5)

2014-09-03 Thread Michael Paquier
On Tue, Sep 2, 2014 at 9:23 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 I committed the redo-routine refactoring patch. I kept the XLog prefix in
 the XLogReadBufferForRedo name; it's redundant, but all the other similar
 functions in xlogutils.c use the XLog prefix so it would seem inconsistent
 to not have it here.
Thanks! Even that will be helpful for a potential patch doing
consistency comparisons of FPW with current pages having WAL of a
record applied.

 I'll post a new version of the main patch shortly...
Looking forward to seeing it.

Regards,
-- 
Michael





  1   2   >