Re: [HACKERS] Typo fix in contrib/sepgsql/relation.c

2014-04-01 Thread Heikki Linnakangas

On 04/01/2014 08:58 AM, Amit Langote wrote:

Hi,

Just noticed the following in contrib/sepgsql/relation.c:

1 /* -
2  *
3  * contrib/sepgsql/label.c
4  *

Attached fixes this.


Thanks, fixed.

- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Negative Transition Aggregate Functions (WIP)

2014-04-01 Thread Dean Rasheed
On 31 March 2014 01:58, Florian Pflug f...@phlo.org wrote:
 Attached are updated patches that include the EXPLAIN changes mentioned
 above and updated docs.


These patches need re-basing --- they no longer apply to HEAD.

Regards,
Dean




Re: [HACKERS] gaussian distribution pgbench

2014-04-01 Thread Fabien COELHO


Please find attached an updated version v13 for this patch.

I have (I hope) significantly improved the documentation, including a 
(hopefully not too unhelpful) mathematical explanation of the actual meaning 
of the threshold value. If a native English speaker could check the 
documentation, that would be nice!


I have improved the implementation of the exponential distribution so as 
to avoid a loop, which allows the minimum threshold constraint to be 
lifted; the exponential pgbench summary now displays decile and 
first/last-percent drawing probabilities. However, the same simplification 
cannot be applied to the gaussian distribution, which must rely on a loop 
and thus needs a minimal threshold for performance. I have also checked 
(see the 4 attached scripts) the actual distribution against the computed 
probabilities.



I disagree with the suggestion to remove the included gaussian & 
exponential test variants, because (1) it would mean removing the 
specific summaries as well, which are essential to get a feel for how the 
feature works; (2) the corresponding code in the source is rather 
straightforward; (3) the tests correspond to the schema and data created 
with -i, so it makes sense that they are stored in pgbench; (4) for this 
feature to be used, it is best that it is available directly and simply 
from pgbench, rather than having to be sought elsewhere.



If this is a commit blocker, then the embedded scripts will have to be 
removed, but I really think they add significant value to pgbench and its 
non-uniform features because they make them easy to test.



If Mitsumasa-san agrees with these proposed changes, I would suggest
applying this patch.

--
Fabien
diff --git a/contrib/pgbench/pgbench.c b/contrib/pgbench/pgbench.c
index 7c1e59e..eb1ecb3 100644
--- a/contrib/pgbench/pgbench.c
+++ b/contrib/pgbench/pgbench.c
@@ -41,6 +41,7 @@
 #include <math.h>
 #include <signal.h>
 #include <sys/time.h>
+#include <assert.h>
 #ifdef HAVE_SYS_SELECT_H
 #include <sys/select.h>
 #endif
@@ -98,6 +99,8 @@ static int	pthread_join(pthread_t th, void **thread_return);
 #define LOG_STEP_SECONDS	5	/* seconds between log messages */
 #define DEFAULT_NXACTS	10		/* default nxacts */
 
+#define MIN_GAUSSIAN_THRESHOLD		2.0	/* minimum threshold for gauss */
+
 int			nxacts = 0;			/* number of transactions per client */
 int			duration = 0;		/* duration in seconds */
 
@@ -169,6 +172,14 @@ bool		is_connect;			/* establish connection for each transaction */
 bool		is_latencies;		/* report per-command latencies */
 int			main_pid;			/* main process id used in log filename */
 
+/* gaussian distribution tests: */
+double		stdev_threshold;   /* standard deviation threshold */
+bool		use_gaussian = false;
+
+/* exponential distribution tests: */
+double		exp_threshold;   /* threshold for exponential */
+bool		use_exponential = false;
+
 char	   *pghost = "";
 char	   *pgport = "";
 char	   *login = NULL;
@@ -330,6 +341,88 @@ static char *select_only = {
 	"SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n"
 };
 
+/* --exponential case */
+static char *exponential_tpc_b = {
+	"\\set nbranches " CppAsString2(nbranches) " * :scale\n"
+	"\\set ntellers " CppAsString2(ntellers) " * :scale\n"
+	"\\set naccounts " CppAsString2(naccounts) " * :scale\n"
+	"\\setrandom aid 1 :naccounts exponential :exp_threshold\n"
+	"\\setrandom bid 1 :nbranches\n"
+	"\\setrandom tid 1 :ntellers\n"
+	"\\setrandom delta -5000 5000\n"
+	"BEGIN;\n"
+	"UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\n"
+	"SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n"
+	"UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;\n"
+	"UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;\n"
+	"INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n"
+	"END;\n"
+};
+
+/* --exponential with -N case */
+static char *exponential_simple_update = {
+	"\\set nbranches " CppAsString2(nbranches) " * :scale\n"
+	"\\set ntellers " CppAsString2(ntellers) " * :scale\n"
+	"\\set naccounts " CppAsString2(naccounts) " * :scale\n"
+	"\\setrandom aid 1 :naccounts exponential :exp_threshold\n"
+	"\\setrandom bid 1 :nbranches\n"
+	"\\setrandom tid 1 :ntellers\n"
+	"\\setrandom delta -5000 5000\n"
+	"BEGIN;\n"
+	"UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\n"
+	"SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n"
+	"INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n"
+	"END;\n"
+};
+
+/* --exponential with -S case */
+static char *exponential_select_only = {
+	"\\set naccounts " CppAsString2(naccounts) " * :scale\n"
+	"\\setrandom aid 1 :naccounts exponential :exp_threshold\n"
+	"SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n"
+};
+
+/* --gaussian case */
+static char *gaussian_tpc_b = {
+	"\\set nbranches " CppAsString2(nbranches) " * :scale\n"
+	"\\set ntellers " CppAsString2(ntellers) " * :scale\n"
+	"\\set naccounts 

Re: [HACKERS] GSoC 2014 proposal

2014-04-01 Thread Heikki Linnakangas

On 03/30/2014 11:50 PM, Иван Парфилов wrote:

The implementation of this algorithm would be for the data type cube and
based on GiST.

The key concept of the BIRCH algorithm is the clustering feature. Given a
set of N d-dimensional data points, the clustering feature CF of the set is
defined as the triple CF = (N, LS, SS), where LS is the linear sum and SS is
the square sum of the data points. Clustering features are organized in a CF
tree, which is a height-balanced tree with two parameters: branching factor
B and threshold T.

Because the structure of a CF tree is similar to a B+-tree, we can use GiST
for the implementation [2]. GiST is a balanced tree structure like a B-tree,
containing (key, pointer) pairs. A GiST key is a member of a user-defined
class, and represents some property that is true of all data items reachable
from the pointer associated with the key. GiST makes it possible to create
custom data types with indexed access methods and an extensible set of
queries.


The BIRCH algorithm as described in the paper builds the tree in memory. 
If I understood correctly, you're suggesting to use a pre-built GiST 
index instead. Interesting idea!


There are a couple of significant differences between the CF tree 
described in the paper and GiST:


1. In GiST, a leaf item always represents one heap tuple. In the CF 
tree, a leaf item represents a cluster, which consists of one or more 
tuples. So the CF tree doesn't store an entry for every input tuple, 
which makes it possible to keep it in memory.


2. In the CF tree, all entries in a leaf node must satisfy a threshold 
requirement, with respect to a threshold value T: the diameter (or 
radius) has to be less than T. GiST imposes no such restrictions. An 
item can legally be placed anywhere in the tree; placing it badly will 
just lead to degraded search performance, but it's still a legal GiST tree.


3. A GiST index, like any other index in PostgreSQL, holds entries also 
for deleted tuples, until the index is vacuumed. So you cannot just use 
information from a non-leaf node and use it in the result, as the 
information summarized at a non-leaf level includes noise from the dead 
tuples.


Can you elaborate on how you are planning to use a GiST index to implement 
BIRCH? You might also want to take a look at SP-GiST; SP-GiST is more 
strict about where in the tree an item can be stored, and lets the operator 
class specify exactly when a node is split etc.



We need to implement this to create a GiST-based CF tree for use in the
BIRCH algorithm.


*Example of usage(approximate):*

create table cube_test (v cube);

insert into cube_test values (cube(array[1.2, 0.4])), (cube(array[0.5, -0.2])),
   (cube(array[0.6, 1.0])), (cube(array[1.0, 0.6]));

create index gist_cf on cube_test using gist(v);

--Prototype(approximate)

--birch(maxNodeEntries, distThreshold, distFunction)

SELECT birch(4.1, 0.2, 1) FROM cube_test;

 cluster | val1 | val2
---------+------+------
       1 |  1.2 |  0.4
       0 |  0.5 | -0.2
       1 |  0.6 |  1.0
       1 |  1.0 |  0.6

Accordingly, in this GSoC project BIRCH algorithm for data type cube would
be implemented.


From the example, it seems that birch(...) would be an aggregate 
function. Aggregates in PostgreSQL currently work by scanning all the 
input data. That would certainly be a pretty straightforward way to 
implement BIRCH too. Every input tuple would be passed to the so-called 
transition function (which you would write), which would construct a CF 
tree on-the-fly. At the end, the result would be constructed from the CF 
tree. With this approach, the CF tree would be kept in memory, and thrown 
away after the query.


That would be straightforward, but wouldn't involve GiST at all. To use 
an index to implement an aggregate would require planner/executor 
changes. That would be interesting, but offhand I have no idea what that 
would look like. We'll need more details on that.


- Heikki




Re: [HACKERS] GSoC 2014 proposal

2014-04-01 Thread Heikki Linnakangas

On 03/30/2014 11:50 PM, Иван Парфилов wrote:

*Quantifiable results*

  Adding support of BIRCH algorithm for data type cube


Aside from the details of *how* that would work, the other question is:

Do we want this in contrib/cube? There are currently no clustering 
functions, or any other statistical functions or similar, in 
contrib/cube. Just basic contains/contained/overlaps operators. And 
B-tree comparison operators which are pretty useless for cube.


Do we want to start adding such features to cube, in contrib? Or should 
that live outside the PostgreSQL source tree, in a separate extension, 
so that it could live on its own release schedule, etc.? If BIRCH goes 
into contrib/cube, that's an invitation to add all kinds of functions to it.


We received another GSoC application to add another clustering algorithm 
to the MADlib project. MADlib is an extension to PostgreSQL with a lot 
of different statistical tools, so MADlib would be a natural home for 
BIRCH too. But if it requires backend changes (ie. changes to GiST), 
then that needs to be discussed on pgsql-hackers, and it probably would 
be better to do a reference implementation in contrib/cube. MADlib could 
later copy it from there.


- Heikki




Re: [HACKERS] using arrays within structure in ECPG

2014-04-01 Thread Ashutosh Bapat
Hi Michael,
I tried to fix the offset problem. PFA the patch. It does solve the problem
of setting the wrong offset in the ECPGdo() call.

But then there is the problem of interpreting the result from the server as
an array within an array of structures. The problem is in ecpg_get_data().
This function cannot tell that the field is an array of integers (or, for
that matter, an array of anything) and store all the values in contiguous
memory at the given address.



On Thu, Mar 27, 2014 at 11:05 PM, Michael Meskes mes...@postgresql.orgwrote:

 On Mon, Mar 24, 2014 at 11:52:30AM +0530, Ashutosh Bapat wrote:
  For all the members of struct employee, except arr_col, the size of array
  is set to 14 and next member offset is set of sizeof (struct employee).
 But
  for arr_col they are set to 3 and sizeof(int) resp. So, for the next row
  onwards, the calculated offset of arr_col member would not coincide with
  the real arr_col member's address.
 
  Am I missing something here?

 No, this looks like a bug to me. I haven't had time to look into the
 source code but the offset definitely is off.

 Michael
 --
 Michael Meskes
 Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
 Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
 Jabber: michael.meskes at gmail dot com
 VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL




-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
diff --git a/src/interfaces/ecpg/preproc/type.c b/src/interfaces/ecpg/preproc/type.c
index 2982cb6..53983fe 100644
--- a/src/interfaces/ecpg/preproc/type.c
+++ b/src/interfaces/ecpg/preproc/type.c
@@ -296,21 +296,22 @@ ECPGdump_a_type(FILE *o, const char *name, struct ECPGtype * type, const int bra
 	  type->u.element,
 	  (ind_type == NULL) ? NULL : ((ind_type->type == ECPGt_NO_INDICATOR) ? ind_type : ind_type->u.element),
 	  prefix, ind_prefix);
 	break;
 default:
 	if (!IS_SIMPLE_TYPE(type->u.element->type))
 		base_yyerror("internal error: unknown datatype, please report this to pgsql-b...@postgresql.org");
 
 	ECPGdump_a_simple(o, name,
 	  type->u.element->type,
-	  type->u.element->size, type->size, NULL, prefix, type->u.element->counter);
+	  type->u.element->size, type->size, struct_sizeof ? struct_sizeof : NULL,
+	  prefix, type->u.element->counter);
 
 	if (ind_type != NULL)
 	{
 		if (ind_type->type == ECPGt_NO_INDICATOR)
 		{
 			char	   *str_neg_one = mm_strdup("-1");
 			ECPGdump_a_simple(o, ind_name, ind_type->type, ind_type->size, str_neg_one, NULL, ind_prefix, 0);
 			free(str_neg_one);
 		}
 		else



Re: [HACKERS] datistemplate of pg_database does not behave as per description in documentation

2014-04-01 Thread Rajeev rastogi
On 27 March 2014 17:16, Tom Lane Wrote:

 I agree we need to make the docs match the code, but changing behavior
 that's been like that for ten or fifteen years isn't the answer.

Sounds good.

Please find the attached patch with only documentation change.

Thanks and Regards,
Kumar Rajeev Rastogi




datistemplate_issueV2.patch
Description: datistemplate_issueV2.patch



Re: [HACKERS] WIP patch for Todo Item : Provide fallback_application_name in contrib/pgbench, oid2name, and dblink

2014-04-01 Thread Adrian Vondendriesch
Hi,

On Wed, 12 Feb 2014 13:47:41 -0500,
Robert Haas robertmh...@gmail.com wrote:
 On Mon, Feb 10, 2014 at 12:14 PM, Jeff Janes jeff.ja...@gmail.com
 wrote:
  Presumably whatever behavior difference you are seeing is somehow
  related to the use of PQconnectdbParams() rather than
  PQsetdbLogin(), but the fine manual does not appear to document a
  difference between those functions as regards password-prompting
  behavior or .pgpass usage.
 
  It looks like PQsetdbLogin() has either NULL or empty string passed
  to it match 5432 in pgpass, while PQconnectdbParams() only has NULL
  match 5432 and empty string does not.  pgbench uses empty string if
  no -p is specified.
 
  This make pgbench behave the way I think is correct, but it hardly
  seems like the right way to fix it.
 
  [ kludge ]
 
 Well, it seems to me that the threshold issue here is whether or not
 we should try to change the behavior of libpq.  If not, then your
 kludge might be the best option.   But if so then it isn't necessary.
 However, I don't feel confident to pass judgement on the what the
 libpq semantics should be.
 

I noticed that pgbench doesn't use all variables from my PGSERVICE
definition. Then I came across this thread.

 export PGSERVICE=test_db_head

 cat ~/.pg_service.conf
 [test_db_head]
 host=/tmp
 user=avo
 port=5496
 dbname=pgbench

 /usr/local/postgresql/head/bin/pgbench -s 1 -i
 Connection to database  failed:
 could not connect to server: No such file or directory
   Is the server running locally and accepting
   connections on Unix domain socket /tmp/.s.PGSQL.5432?

As you noticed before, pgbench initialises some of its connection
parameters with empty strings, which overwrite variables defined
in .pg_service.conf.

I patched the function conninfo_array_parse(), which is used by
PQconnectStartParams, to behave like PQsetdbLogin. The patch also
contains a documentation patch which clarifies the handling of
unspecified parameters.

Now, PQconnectStartParams will handle empty strings in exactly the same
way as it handles NULL, and pgbench runs as expected:

 usr/local/postgresql/head/bin/pgbench -s 1 -i
 NOTICE:  table pgbench_history does not exist, skipping
 NOTICE:  table pgbench_tellers does not exist, skipping
 NOTICE:  table pgbench_accounts does not exist, skipping
 NOTICE:  table pgbench_branches does not exist, skipping
 creating tables...
 10 of 10 tuples (100%) done (elapsed 0.21 s, remaining 0.00
 s). vacuum...
 set primary keys...
 done.

Kind Regards

- Adrian
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d53c41f..253615e 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -4321,7 +4321,7 @@ conninfo_array_parse(const char *const * keywords, const char *const * values,
 		const char *pname = keywords[i];
 		const char *pvalue = values[i];
 
-		if (pvalue != NULL)
+		if (pvalue != NULL && pvalue[0] != '\0')
 		{
 			/* Search for the param record */
 			for (option = options; option->keyword != NULL; option++)



Re: [HACKERS] Inheritance of foreign key constraints.

2014-04-01 Thread Andrzej Mazurkiewicz
Good Afternoon.

Enclosed please find a continuation of the discussion of accidental or 
malicious breaking of server consistency.

After reading, please comment on whether there are more objections to 
changing the dependency type of the trigger-to-constraint dependency from 
DEPENDENCY_INTERNAL to DEPENDENCY_AUTO.

That change is necessary to reduce the scope of the modifications needed to 
implement inheritance of foreign key constraints, particularly for the 
removal of objects.

Kind Regards
Andrzej Mazurkiewicz


On Saturday 22 of March 2014 11:13:56 you wrote:
 Andrzej Mazurkiewicz andr...@mazurkiewicz.org writes:
  So in other words, somebody could (accidentally or maliciously) break the
  constraint by dropping one of its implementation triggers.  I doubt
  that's
  acceptable.

I have done some more digging on the subject.

All following tests are performed on my patched 9.3 postgres server, where 
the dependency type of the trigger-to-constraint dependency has been 
changed to DEPENDENCY_AUTO.

It seems that if the trigger is internal (tgisinternal = true) it is not 
visible to the DROP TRIGGER command. So it cannot be deleted using the DROP 
TRIGGER command, although the dependency type is DEPENDENCY_AUTO (ref. the 
last SELECT).

Please have a look at the following actions.

Kind regards
Andrzej Mazurkiewicz



They are performed by the lipa user. The lipa user is not a superuser.

postgres=# CREATE USER lipa;
CREATE ROLE
postgres=# CREATE DATABASE lipa OWNER lipa;
CREATE DATABASE


postgres93@tata:~$ psql -W lipa lipa
Password for user lipa: 
psql (9.3.3)
Type "help" for help.

lipa=> SELECT CURRENT_USER;
 current_user 
--------------
 lipa
(1 row)

lipa=> CREATE TABLE master (master_a int, CONSTRAINT pk_master PRIMARY KEY 
(master_a));
CREATE TABLE
lipa=> CREATE TABLE detail (master_a int, detail_a int, CONSTRAINT fk0_detail 
FOREIGN KEY (master_a) REFERENCES master(master_a));
CREATE TABLE
lipa=> SELECT oid, tgrelid, tgname FROM pg_trigger ;
  oid  | tgrelid |tgname
---+-+--
 19322 |   19313 | RI_ConstraintTrigger_a_19322
 19323 |   19313 | RI_ConstraintTrigger_a_19323
 19324 |   19318 | RI_ConstraintTrigger_c_19324
 19325 |   19318 | RI_ConstraintTrigger_c_19325
(4 rows)

lipa=> DROP TRIGGER RI_ConstraintTrigger_c_19322 ON master;
ERROR:  trigger "ri_constrainttrigger_c_19322" for table "master" does not 
exist
lipa=> DROP TRIGGER RI_ConstraintTrigger_c_19322 ON detail;
ERROR:  trigger "ri_constrainttrigger_c_19322" for table "detail" does not 
exist

lipa=> SELECT oid, tgrelid, tgname, tgconstraint FROM pg_trigger ;
  oid  | tgrelid |tgname| tgconstraint 
---+-+--+--
 19322 |   19313 | RI_ConstraintTrigger_a_19322 |19321
 19323 |   19313 | RI_ConstraintTrigger_a_19323 |19321
 19324 |   19318 | RI_ConstraintTrigger_c_19324 |19321
 19325 |   19318 | RI_ConstraintTrigger_c_19325 |19321
(4 rows)

lipa=> SELECT * FROM pg_depend WHERE refobjid = 19321;
 classid | objid | objsubid | refclassid | refobjid | refobjsubid | deptype 
---------+-------+----------+------------+----------+-------------+---------
    2620 | 19322 |        0 |       2606 |    19321 |           0 | a
    2620 | 19323 |        0 |       2606 |    19321 |           0 | a
    2620 | 19324 |        0 |       2606 |    19321 |           0 | a
    2620 | 19325 |        0 |       2606 |    19321 |           0 | a
(4 rows)





[HACKERS] Including replication slot data in base backups

2014-04-01 Thread Michael Paquier
Hi all,

As of now, pg_basebackup creates an empty pg_replslot/ directory in a base
backup, forcing the user to recreate slots on other nodes of the cluster
with pg_create_*_replication_slot, or to copy pg_replslot from another
node. This is not really user-friendly, especially after a failover where
a given slave may not have the replication slot information of the master
that it is replacing.

The simple patch attached adds a new option to pg_basebackup, called
--replication-slot, which allows replication slot information to be
included in a base backup. This is done by extending the BASE_BACKUP
command in the replication protocol.

As it is too late for 9.4, I would like to add it to the first commitfest
of 9.5. Comments are welcome.

Regards,
-- 
Michael
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 36d16a5..4748d08 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1837,7 +1837,7 @@ The commands accepted in walsender mode are:
   </varlistentry>
 
   <varlistentry>
-    <term>BASE_BACKUP [<literal>LABEL</literal> <replaceable>'label'</replaceable>] [<literal>PROGRESS</literal>] [<literal>FAST</literal>] [<literal>WAL</literal>] [<literal>NOWAIT</literal>] [<literal>MAX_RATE</literal> <replaceable>rate</replaceable>]</term>
+    <term>BASE_BACKUP [<literal>LABEL</literal> <replaceable>'label'</replaceable>] [<literal>PROGRESS</literal>] [<literal>FAST</literal>] [<literal>WAL</literal>] [<literal>NOWAIT</literal>] [<literal>REPLICATION_SLOT</literal>] [<literal>MAX_RATE</literal> <replaceable>rate</replaceable>]</term>
     <listitem>
      <para>
       Instructs the server to start streaming a base backup.
@@ -1909,6 +1909,18 @@ The commands accepted in walsender mode are:
    </varlistentry>
 
    <varlistentry>
+    <term><literal>REPLICATION_SLOT</literal></term>
+    <listitem>
+     <para>
+      By default, the backup does not include the replication slot
+      information in <filename>pg_replslot</filename>, which is created
+      as an empty directory.  Specifying <literal>REPLICATION_SLOT</literal>
+      includes the replication slot information in the backup.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><literal>MAX_RATE</literal> <replaceable>rate</replaceable></term>
     <listitem>
      <para>
diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml
index 6ce0c8c..be77f7b 100644
--- a/doc/src/sgml/ref/pg_basebackup.sgml
+++ b/doc/src/sgml/ref/pg_basebackup.sgml
@@ -210,6 +210,17 @@ PostgreSQL documentation
  </varlistentry>
 
  <varlistentry>
+  <term><option>--replication-slot</option></term>
+  <listitem>
+   <para>
+    Include replication slot information in the base backup.  Without this
+    option, <filename>pg_replslot</filename> is created empty.
+   </para>
+  </listitem>
+ </varlistentry>
+
+ <varlistentry>
   <term><option>-R</option></term>
   <term><option>--write-recovery-conf</option></term>
   <listitem>
@@ -254,7 +265,7 @@ PostgreSQL documentation
   <term><option>--xlogdir=<replaceable class="parameter">xlogdir</replaceable></option></term>
   <listitem>
    <para>
-    Specifies the location for the transaction log directory. 
+    Specifies the location for the transaction log directory.
     <replaceable>xlogdir</replaceable> must be an absolute path.
     The transaction log directory can only be specified when
     the backup is in plain mode.
diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c
index f611f59..4a86117 100644
--- a/src/backend/replication/basebackup.c
+++ b/src/backend/replication/basebackup.c
@@ -45,6 +45,7 @@ typedef struct
 	bool		fastcheckpoint;
 	bool		nowait;
 	bool		includewal;
+	bool		replication_slot;
 	uint32		maxrate;
 } basebackup_options;
 
@@ -71,6 +72,9 @@ static bool backup_started_in_recovery = false;
 /* Relative path of temporary statistics directory */
 static char *statrelpath = NULL;
 
+/* Include replication slot data in base backup? */
+static bool include_replication_slots = false;
+
 /*
  * Size of each block sent into the tar stream for larger files.
  */
@@ -131,6 +135,7 @@ perform_base_backup(basebackup_options *opt, DIR *tblspcdir)
 	datadirpathlen = strlen(DataDir);
 
 	backup_started_in_recovery = RecoveryInProgress();
+	include_replication_slots = opt->replication_slot;
 
 	startptr = do_pg_start_backup(opt->label, opt->fastcheckpoint, &starttli,
 								  &labelfile);
@@ -548,6 +553,7 @@ parse_basebackup_options(List *options, basebackup_options *opt)
 	bool		o_nowait = false;
 	bool		o_wal = false;
 	bool		o_maxrate = false;
+	bool		o_replication_slot = false;
 
 	MemSet(opt, 0, sizeof(*opt));
 	foreach(lopt, options)
@@ -618,6 +624,15 @@ 

Re: [HACKERS] Patch to add support of IF NOT EXISTS to others CREATE statements

2014-04-01 Thread Tom Lane
Stephen Frost sfr...@snowman.net writes:
 * Michael Paquier (michael.paqu...@gmail.com) wrote:
 Except if I am missing something, the second query means that it is
 going to replace the existing user test with a new one, with the
 settings specified in the 2nd query, all being default values. As the
 default for login is NOLOGIN, the user test should not be able to log
 in the server.

 That's more-or-less the behavior we're trying to work out.  I've been
 meaning to go back and look at what we've been doing with the existing
 COR cases and just haven't gotten to it yet.  The pertinent question
 being if we assume the user intended for the values not specified to be
 reset to their defaults, or not.

Yes, it has to be that way.  The entire argument for COR hinges on the
assumption that if you execute the statement, and it succeeds, the
properties of the object are equivalent to what they'd be if there had
been no predecessor object.  Otherwise it's just the same as CINE,
which offers no guarantees worth mentioning about the object's
properties.

I'm willing to bend that to the extent of saying that COR leaves in place
subsidiary properties that you might add *with additional statements* ---
for example, foreign keys for a table, or privilege grants for a role.
But the properties of the role itself have to be predictable from the COR
statement, or it's useless.

 Where this is a bit more interesting is in the case of sequences, where
 resetting the sequence to zero may cause further inserts into an
 existing table to fail.

Yeah.  Sequences do have contained data, which makes COR harder to define
--- that's part of the reason why we have CINE not COR for tables, and
maybe we have to do the same for sequences.  The point being exactly
that if you use CINE, you're implicitly accepting that you don't know
the ensuing state fully.

regards, tom lane




Re: [HACKERS] Inheritance of foreign key constraints.

2014-04-01 Thread Fabrízio de Royes Mello
On Tue, Apr 1, 2014 at 9:13 AM, Andrzej Mazurkiewicz 
andr...@mazurkiewicz.org wrote:

 It seems that if the trigger is internal (tgisinternal = true) it is not
 visible to the DROP TRIGGER command. So it cannot be deleted using DROP
 TRIGGER command, although the dependency type is DEPENDENCY_AUTOMATIC
(ref. to
 the last SELECT).

 Please have a look at the following actions.

 They are performed by a lipa user. The lipa user is not a superuser;

 postgres=# CREATE USER lipa;
 CREATE ROLE
 postgres=# CREATE DATABASE lipa OWNER lipa;
 CREATE DATABASE


 postgres93@tata:~$ psql -W lipa lipa
 Password for user lipa:
 psql (9.3.3)
 Type "help" for help.

 lipa= SELECT CURRENT_USER;
  current_user
 --
  lipa
 (1 row)

 lipa= CREATE TABLE master (master_a int, CONSTRAINT pk_master PRIMARY KEY
 (master_a));
 CREATE TABLE
 lipa= CREATE TABLE detail (master_a int, detail_a int, CONSTRAINT
fk0_detail
 FOREIGN KEY (master_a) REFERENCES master(master_a));
 CREATE TABLE
 lipa= SELECT oid, tgrelid, tgname FROM pg_trigger ;
   oid  | tgrelid |tgname
 ---+-+--
  19322 |   19313 | RI_ConstraintTrigger_a_19322
  19323 |   19313 | RI_ConstraintTrigger_a_19323
  19324 |   19318 | RI_ConstraintTrigger_c_19324
  19325 |   19318 | RI_ConstraintTrigger_c_19325
 (4 rows)

 lipa=> DROP TRIGGER RI_ConstraintTrigger_c_19322 ON master;
 ERROR:  trigger "ri_constrainttrigger_c_19322" for table "master" does not
 exist
 lipa=> DROP TRIGGER RI_ConstraintTrigger_c_19322 ON detail;
 ERROR:  trigger "ri_constrainttrigger_c_19322" for table "detail" does not
 exist

 lipa= SELECT oid, tgrelid, tgname, tgconstraint FROM pg_trigger ;
   oid  | tgrelid |tgname| tgconstraint
 ---+-+--+--
  19322 |   19313 | RI_ConstraintTrigger_a_19322 |19321
  19323 |   19313 | RI_ConstraintTrigger_a_19323 |19321
  19324 |   19318 | RI_ConstraintTrigger_c_19324 |19321
  19325 |   19318 | RI_ConstraintTrigger_c_19325 |19321
 (4 rows)


Try using a quoted identifier:

DROP TRIGGER "RI_ConstraintTrigger_c_19322" ON master;

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


Re: [HACKERS] WIP patch for Todo Item : Provide fallback_application_name in contrib/pgbench, oid2name, and dblink

2014-04-01 Thread Tom Lane
Adrian Vondendriesch adrian.vondendrie...@credativ.de writes:
 I patched the function conninfo_array_parse() which is used by
 PQconnectStartParams to behave like PQsetdbLogin. The patch also
 contains a document patch which clarify unspecified parameters. 

I see no documentation update here.  I'm also fairly concerned about the
implication that no connection parameter, now or in future, can ever have
an empty string as the correct value.

regards, tom lane




Re: [HACKERS] Doing better at HINTing an appropriate column within errorMissingColumn()

2014-04-01 Thread Robert Haas
On Fri, Mar 28, 2014 at 4:47 AM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
 Peter Geoghegan wrote:
 With the addition of LATERAL subqueries, Tom fixed up the mechanism
 for keeping track of which relations are visible for column references
 while the FROM clause is being scanned. That allowed
 errorMissingColumn() to give a more useful error than the one produced
 by the prior coding of that mechanism, with an errhint sometimes
 proffering: 'There is a column named foo in table bar, but it
 cannot be referenced from this part of the query'.

 I wondered how much further this could be taken. Attached patch
 modifies contrib/fuzzystrmatch, moving its Levenshtein distance code
 into core without actually moving the relevant SQL functions too. That
 change allowed me to modify errorMissingColumn() to make more useful
 suggestions as to what might have been intended under other
 circumstances, like when someone fat-fingers a column name.

 [local]/postgres=# select * from orders o join orderlines ol on o.orderid = 
 ol.orderids limit 1;
 ERROR:  42703: column ol.orderids does not exist
 LINE 1: ...* from orders o join orderlines ol on o.orderid = ol.orderid...
  ^
 HINT:  Perhaps you meant to reference the column ol.orderid.

 This sounds like a mild version of DWIM:
 http://www.jargondb.org/glossary/dwim

 Maybe it is just me, but I get uncomfortable when a program tries
 to second-guess what I really want.

It's not really DWIM, because the backend is still throwing an error.
It's just trying to help you sort out the error, along the way.
Still, I share some of your discomfort.  I see Peter's patch as an
example of a broader class of things that we could do - but I'm not
altogether sure that we want to do them.  There's a risk of adding not
only CPU cycles but also clutter.  If we do things that encourage
people to crank the log verbosity down, I think that's going to be bad
more often than it's good.  It strains credulity to think that this
patch alone would have that effect, but there might be quite a few
similar improvements that are possible.  So I think it would be good
to consider how far we want to go in this direction and where we think
we might want to stop.  That's not to say, let's not ever do this,
just, let's think carefully about where we want to end up.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Inheritance of foregn key constraints.

2014-04-01 Thread Tom Lane
Andrzej Mazurkiewicz andr...@mazurkiewicz.org writes:
 After reading, please comment if there are more objections to changing the 
 dependency type for the trigger-to-constraint dependency from 
 DEPENDENCY_INTERNAL to DEPENDENCY_AUTO.

I'm not sure which part of "no" you didn't understand, but we're not
doing that.

 lipa= DROP TRIGGER RI_ConstraintTrigger_c_19322 ON master;
 ERROR:  trigger "ri_constrainttrigger_c_19322" for table "master" does not 
 exist

This has to do with case-folding and lack of double quotes, not anything
more subtle than that.  A correct test would've given results like this:

regression=# drop trigger RI_ConstraintTrigger_a_43528 on master;
ERROR:  cannot drop trigger "RI_ConstraintTrigger_a_43528" on table "master" 
because constraint "fk0_detail" on table "detail" requires it
HINT:  You can drop constraint "fk0_detail" on table "detail" instead.

which is the behavior we need.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] pg_stat_statements cluttered with DEALLOCATE dbdpg_p*

2014-04-01 Thread Fabien COELHO


Hello pgdevs,

I noticed that my pg_stat_statements is cluttered with hundreds of entries 
like DEALLOCATE dbdpg_p123456_7, each occurring only once.


It seems to me that it would be more helpful if these similar entries were 
aggregated together, that is, if the query normalization could ignore the 
name of the prepared statement.


Any thoughts about this?

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Including replication slot data in base backups

2014-04-01 Thread Magnus Hagander
On Tue, Apr 1, 2014 at 2:24 PM, Michael Paquier
michael.paqu...@gmail.com wrote:

 Hi all,

 As of now, pg_basebackup creates an empty directory for pg_replslot/
 in a base backup, forcing the user to recreate slots on other nodes of
 the cluster with pg_create_*_replication_slot, or copy pg_replslot
 from another node. This is not really user-friendly, especially after a
 failover where a given slave may not have the replication slot
 information of the master that it is replacing.

 The simple patch attached adds a new option in pg_basebackup, called
 --replication-slot, allowing the user to include replication slot information
 in a base backup. This is done by extending the command BASE_BACKUP in
 the replication protocol.


--replication-slots would be a better name (plural), or probably
--include-replication-slots. (and that comment also goes for the
BASE_BACKUP syntax and variables)


But. If you want to solve the failover case, doesn't that mean you need to
include it in the *replication* stream and not just the base backup?
Otherwise, what you're sending over might well be an out-of-date set of slots
once the failover happens? What if the set of replication slots changes
between the time of the basebackup and the failover?

As it is too late for 9.4, I would like to add it to the first commit
 fest of 9.5. Comments are welcome.


It's not too late to fix omissions in 9.4 features though, if we consider
it one. And I think this could probably be considered as such - if we think
the solution is the right one.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] pg_stat_statements cluttered with DEALLOCATE dbdpg_p*

2014-04-01 Thread Andrew Dunstan


On 04/01/2014 10:45 AM, Fabien COELHO wrote:


Hello pgdevs,

I noticed that my pg_stat_statements is cluttered with hundreds of 
entries like DEALLOCATE dbdpg_p123456_7, each occurring only once.


It seems to me that it would be more helpful if these similar entries 
were aggregated together, that is, if the query normalization could 
ignore the name of the prepared statement.


Any thoughts about this?



You might find this relevant: 
http://blog.endpoint.com/2014/02/perl-dbdpg-postgresql-prepared-statement.html


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Including replication slot data in base backups

2014-04-01 Thread Andres Freund
On 2014-04-01 16:45:46 +0200, Magnus Hagander wrote:
 On Tue, Apr 1, 2014 at 2:24 PM, Michael Paquier
 michael.paqu...@gmail.com wrote:
 
  Hi all,
 
  As of now, pg_basebackup creates an empty directory for pg_replslot/
  in a base backup, forcing the user to recreate slots on other nodes of
  the cluster with pg_create_*_replication_slot, or copy pg_replslot
  from another node. This is not really user-friendly especially after a
  failover where a given slave may not have the replication slot
  information of the master that it is replacing.

What exactly is your use case for copying the slots?

  The simple patch attached adds a new option in pg_basebackup, called
  --replication-slot, allowing the user to include replication slot information
  in a base backup. This is done by extending the command BASE_BACKUP in
  the replication protocol.

 --replication-slots would be a better name (plural), or probably
 --include-replication-slots. (and that comment also goes for the
 BASE_BACKUP syntax and variables)

I vote for --include-replication-slots.

 But. If you want to solve the failover case, doesn't that mean you need to
 include it in the *replication* stream and not just the base backup?

They pretty fundamentally can't be in the replication stream - there's
no way to make that work with cascading setups and such.

 Otherwise, what you're sending over might well be an out-of-date set of slots
 once the failover happens? What if the set of replication slots changes
 between the time of the basebackup and the failover?

An out of date slot doesn't seem really harmful for the failover
case. All that will happen is that it will reserve too many
resources. That should be fine.

 It's not too late to fix omissions in 9.4 features though, if we consider
 it that. Which I think this could probably be considered as - if we think
 the solution is the right one.

Yea, I think adding this would be fine if we deem it necessary.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP patch for Todo Item : Provide fallback_application_name in contrib/pgbench, oid2name, and dblink

2014-04-01 Thread Adrian Vondendriesch
On 01.04.2014 16:32, Tom Lane wrote:
 Adrian Vondendriesch adrian.vondendrie...@credativ.de writes:
 I patched the function conninfo_array_parse(), which is used by
 PQconnectStartParams, to behave like PQsetdbLogin. The patch also
 contains a documentation patch which clarifies unspecified parameters. 
 
 I see no documentation update here.  I'm also fairly concerned about the
 implication that no connection parameter, now or in future, can ever have
 an empty string as the correct value.

If we want to preserve the possibility of accepting an empty string as a
correct value, then pgbench should initialise some variables with
NULL instead of an empty string.

Moreover, it should be documented that unspecified means NULL and not
an empty string, as in PQsetdbLogin.

However, attached you will find the whole patch, including documentation.

Kind Regards

- Adrian
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index be0d602..0bac166 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -136,10 +136,11 @@ PGconn *PQconnectdbParams(const char * const *keywords,
   /para
 
   para
-   If  any  parameter is unspecified, then the corresponding
-   environment variable (see xref linkend=libpq-envars)
-   is checked. If the  environment  variable is not set either,
-   then the indicated built-in defaults are used.
+   If  any  parameter is unspecified, then the corresponding environment
+   variable (see xref linkend=libpq-envars) is checked. Parameters are
+   treated as unspecified if they are either NULL or contain an empty string
+   ("").  If the  environment  variable is not set either, then the
+   indicated built-in defaults are used.
   /para
 
   para
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d53c41f..253615e 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -4321,7 +4321,7 @@ conninfo_array_parse(const char *const * keywords, const char *const * values,
 		const char *pname = keywords[i];
 		const char *pvalue = values[i];
 
-		if (pvalue != NULL)
+		if (pvalue != NULL  pvalue[0] != '\0')
 		{
 			/* Search for the param record */
			for (option = options; option->keyword != NULL; option++)


signature.asc
Description: OpenPGP digital signature


Re: [HACKERS] Including replication slot data in base backups

2014-04-01 Thread Robert Haas
On Tue, Apr 1, 2014 at 10:45 AM, Magnus Hagander mag...@hagander.net wrote:
 On Tue, Apr 1, 2014 at 2:24 PM, Michael Paquier michael.paqu...@gmail.com
 wrote:

 As of now, pg_basebackup creates an empty directory for pg_replslot/
 in a base backup, forcing the user to recreate slots on other nodes of
 the cluster with pg_create_*_replication_slot, or copy pg_replslot
 from another node. This is not really user-friendly especially after a
 failover where a given slave may not have the replication slot
 information of the master that it is replacing.

 The simple patch attached adds a new option in pg_basebackup, called
 --replication-slot, allowing the user to include replication slot information
 in a base backup. This is done by extending the command BASE_BACKUP in
 the replication protocol.

 --replication-slots would be a better name (plural), or probably
 --include-replication-slots. (and that comment also goes for the BASE_BACKUP
 syntax and variables)

 But. If you want to solve the failover case, doesn't that mean you need to
 include it in the *replication* stream and not just the base backup?
 Otherwise, what you're sending over might well be an out-of-date set of slots
 once the failover happens? What if the set of replication slots changes
 between the time of the basebackup and the failover?

As a general comment, I think that replication slots, while a great
feature, have more than the usual potential for self-inflicted injury.
 A replication slot prevents the global xmin from advancing (so your
tables will bloat) and WAL from being removed (so your pg_xlog
directory will fill up and take down the server).  The very last thing
you want to do is to keep around a replication slot that should have
been dropped, and I suspect a decent number of users are going to make
that mistake, just as they do with prepared transactions and backends
left idle in transaction.

So I view this proposal with a bit of skepticism for that reason.  If
you end up copying the replication slots when you didn't really want
to, or when you only wanted some of them, you will be sad.  In
particular, suppose you have a master and 2 standbys, each of which
has a replication slot.  The master fails; a standby is promoted.  If
the standby has the master's replication slots, that's wrong: perhaps
the OTHER standby's slot should stick around for the standby to
connect to, but the standby's OWN slot on the master shouldn't be kept
around.

It's also part of the idea here that a cascading standby should be
able to have its own slots for its downstream standbys.  It should
retain WAL locally for those standbys, but it should NOT retain WAL
for the master's other standbys.  This definitely doesn't work yet for
logical slots; I'm not sure about physical slots.  But it's part of
the plan, for sure.  Here again, copying the slots from the master is
the wrong thing.

Now, it would be great to have some more technology in this area.  It
would be pretty nifty if we could set things up so that the promotion
process could optionally assume and activate a configurable subset of
the master's slots at failover/switchover time - but the administrator
would need to also make sure those machines were going to reconnect to
the new master.  Or maybe we could find a way to automate that, too,
but either way I think we're going to need something a *lot* more
sophisticated than just copying all the slots.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_stat_statements cluttered with DEALLOCATE dbdpg_p*

2014-04-01 Thread Fabien COELHO


I noticed that my pg_stat_statements is cluttered with hundreds of entries 
like DEALLOCATE dbdpg_p123456_7, each occurring only once.


It seems to me that it would be more helpful if these similar entries were 
aggregated together, that is, if the query normalization could ignore the 
name of the prepared statement.


Any thoughts about this?


You might find this relevant: 
http://blog.endpoint.com/2014/02/perl-dbdpg-postgresql-prepared-statement.html


Indeed. Thanks for the pointer. I had guessed who the culprit was, and the 
new behavior mentioned in the blog entry may help when the new driver 
version hits my debian box.


In the mean time, ISTM that progress can be achieved on pg_stat_statements 
normalization as well.


--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql \d+ and oid display

2014-04-01 Thread Robert Haas
On Sun, Mar 30, 2014 at 10:04 AM, Bruce Momjian br...@momjian.us wrote:
 On Sat, Mar 29, 2014 at 06:33:39PM -0400, Bruce Momjian wrote:
 On Sat, Mar 29, 2014 at 06:16:19PM -0400, Tom Lane wrote:
  Bruce Momjian br...@momjian.us writes:
  Are you saying most people like "Has OIDs: yes", or the idea of just
  displaying _a_ line if there are OIDs?  Based on default_with_oids,
  perhaps we should display "With OIDs".
 
  I agree it is not unanimous.  I am curious how large the majority has to
  be to change a psql display value.
 
  What I actually suggested was not *changing* the line when it's to be
  displayed, but suppressing it in the now-standard case where there's no
  OIDs.
 
  Personally I find the argument that backwards compatibility must be
  preserved to be pretty bogus; we have no hesitation in changing the
  output of \d anytime we add a new feature.  So I don't think there's
  a good compatibility reason why the line has to be spelled exactly
  "Has OIDs: yes" --- but there is a consistency reason, which is that
  everything else we print in this part of the \d output is of the form
  "label: info".

 Ah, now I understand it --- you can argue that the new Replica
 Identity follows the same pattern, showing only for non-defaults (or at
 least it will once I commit the pending patch to do that).

 OK, I have now applied the conditional display of Replica Identity
 patch (which is how it was originally coded anyway).  The attached patch
 matches Tom's suggestion of displaying the same OID text, just
 conditionally.

 Seeing psql \d+ will have a conditional display line in PG 9.4, making
 OIDs conditional seems to make sense.

Frankly, I think this is all completely wrong-headed.  \d+ should
display *everything*.  That's what the + means, isn't it?  Coming up
with complex rules for which things get shown and which things get
hidden just makes the output harder to understand, without any
compensating benefit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GSoC project suggestion: PIVOT ?

2014-04-01 Thread Robert Haas
On Mon, Mar 31, 2014 at 1:14 AM, Craig Ringer cr...@2ndquadrant.com wrote:
 On 03/31/2014 12:49 PM, Michael Paquier wrote:
 On Mon, Mar 31, 2014 at 1:36 PM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:
 It's a nice idea, but the deadline for students to send a proposal was the
 21st of April.
 21st of March. All the details are here:
 http://www.postgresql.org/developer/summerofcode/

 Ah, thanks.

 There's always next year. Still curious about whether anyone's
 investigated it / tried it out.

I bet this is more complex than a typical GSoC student can handle.
Unless it's Alexander Korotkov.  :-)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Archive recovery won't be completed on some situation.

2014-04-01 Thread Robert Haas
On Tue, Apr 1, 2014 at 12:39 AM, Jeff Janes jeff.ja...@gmail.com wrote:
 On Monday, March 31, 2014, Robert Haas robertmh...@gmail.com wrote:

 On Fri, Mar 28, 2014 at 1:06 AM, Kyotaro HORIGUCHI
 horiguchi.kyot...@lab.ntt.co.jp wrote:
  Mmm. I don't think it is relevant to this problem. The specific problem
  here is 'The database was running until just now, but after shutting down
  the master (by pacemaker) and restarting, it won't run
  anymore'. Deleting backup_label after an immediate shutdown is a
  radical measure, but an existing system would be saved by the option.

 I don't find that very radical at all.  The backup_label file is
 *supposed* to be removed on the master if it crashes during the
 backup; and it should never be removed from the backup itself.  At
 least that's how I understand it.  Unfortunately, people too often
 remove the file from the backup and, judging by your report, leave it
 there on the master.

 At first blush it seems pretty radical to me.  Just because the server was
 e-stopped doesn't mean the backup rsync/cp -r/scp etc. isn't still running,
 and it is not clear to me that yanking the backup label file out from under
 it wouldn't cause problems.  I mean, you already have problems if you are
 trying to restore from that backup, but the missing file might make those
 problems less obvious.

 Of course first blush is often wrong, but...

Well, I guess I was thinking mostly of the case where the whole
server's been restarted, in which case none of that stuff is still
running any more.  If there is only a database server crash, then I
agree it's murkier.  Still, you probably ought to kill off those
things if the database server crashes, and then restart the whole base
backup.  Otherwise I think you're going to be in trouble whether the
backup label file sticks around or not.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql \d+ and oid display

2014-04-01 Thread Bruce Momjian
On Tue, Apr  1, 2014 at 11:30:54AM -0400, Robert Haas wrote:
  OK, I have now applied the conditional display of Replica Identity
  patch (which is how it was originally coded anyway).  The attached patch
  matches Tom's suggestion of displaying the same OID text, just
  conditionally.
 
  Seeing psql \d+ will have a conditional display line in PG 9.4, making
  OIDs conditional seems to make sense.
 
 Frankly, I think this is all completely wrong-headed.  \d+ should
 display *everything*.  That's what the + means, isn't it?  Coming up
 with complex rules for which things get shown and which things get
 hidden just makes the output harder to understand, without any
 compensating benefit.

Well, there are a lot of _other_ things we could display about the table
that we don't.  Are you suggesting we add those too?  What about
Replica Identity?  Should that always display?

The bottom line is we already have complex rules to display only what is
_reasonable_.  If you want everything, you have to look at the system
tables.

-- 
  Bruce Momjian  br...@momjian.us    http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PQputCopyData dont signal error

2014-04-01 Thread Robert Haas
On Mon, Mar 31, 2014 at 4:21 PM, steve k steven.c.koh...@nasa.gov wrote:
 I'd love to see an actual working example where an executing C++ program was
 able to in fact determine that copy data containing bad data that was sent
 by the C++ program was rejected by the server and subsequently the C++
 program was able to raise/log/notify specifically which block of data failed
 and then log information about it.  However, all I ever got was
 PGRES_COMMAND_OK whether or not the data that was sent was rejected as
 containing bad data.  Effectively these were silent failures.

With all respect, you're doing something wrong.  There's plenty of
working C code that does just this, including some I have written.
You made the comment upthread that you found it amazing that an rdbms
engine as robust as PostGres seems to have this gaping hole in its
capabilities - and you're right, that would be remarkable.  It would
mean, for example, that users wouldn't be able to know whether their
backups restored OK.  But it turns out that psql and pg_restore can
detect this kind of problem just fine, which means your code should be
able to do the same, if it's written correctly.  So the problem is not
that PostgreSQL doesn't have this capability; it's that you have a bug
in your code.  I can't tell you what the bug is, because I haven't
seen or tried to analyze your code, but I *can* tell you that when
things work for other people and not for you, that's a sign that
you've goofed, not a sign that the feature is missing.

Admittedly, the libpq interface is somewhat confusing, and I often
find it necessary to refer to existing examples of code when trying to
figure out how to do things correctly.  We've been maintaining
backward compatibility for a really long time, and have accumulated a
number of warts along the way, and I'm not sure how much like the
current design things would look if we started over from scratch.  So
if you want to say, hey, this interface is confusing, and it's too
hard to figure out how to use it, I'd have a hard time disagreeing
with that.  But if you want to say, COPY error detection is impossible
under all circumstances because my code for COPY error detection
doesn't work, well, no.  Because psql and other utilities do the same
task just fine using the exact same interfaces.
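For anyone landing on this thread with the same question, here is a rough sketch of the detection pattern being described (essentially what psql does). It assumes an open PGconn and a table t, so it is illustrative rather than compilable in isolation; the key point is that the server's verdict on bad COPY data only arrives from PQgetResult() after PQputCopyEnd():

```c
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/*
 * Sketch of COPY FROM STDIN error detection with libpq.  Assumes
 * "conn" is an open connection and table "t" exists; error handling
 * is abbreviated.  Returns 0 on success, -1 on failure.
 */
static int
copy_rows(PGconn *conn)
{
	PGresult   *res;
	int			ok = 0;
	const char *row = "1\tfoo\n";

	res = PQexec(conn, "COPY t FROM STDIN");
	if (PQresultStatus(res) != PGRES_COPY_IN)
	{
		fprintf(stderr, "COPY start failed: %s", PQerrorMessage(conn));
		PQclear(res);
		return -1;
	}
	PQclear(res);

	if (PQputCopyData(conn, row, (int) strlen(row)) != 1 ||
		PQputCopyEnd(conn, NULL) != 1)
		ok = -1;

	/*
	 * The crucial part: the server's verdict arrives only via
	 * PQgetResult() *after* PQputCopyEnd().  Bad data shows up here
	 * as PGRES_FATAL_ERROR, not in the earlier calls.
	 */
	while ((res = PQgetResult(conn)) != NULL)
	{
		if (PQresultStatus(res) != PGRES_COMMAND_OK)
		{
			fprintf(stderr, "COPY failed: %s", PQresultErrorMessage(res));
			ok = -1;
		}
		PQclear(res);
	}
	return ok;
}
```

Checking only the return value of PQputCopyData() is not enough, because that call can succeed while the server later rejects the row; the PGRES_COMMAND_OK-only symptom described upthread is consistent with never draining PQgetResult() after the copy ends.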

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql \d+ and oid display

2014-04-01 Thread Robert Haas
On Tue, Apr 1, 2014 at 11:42 AM, Bruce Momjian br...@momjian.us wrote:
 On Tue, Apr  1, 2014 at 11:30:54AM -0400, Robert Haas wrote:
  OK, I have now applied the conditional display of Replica Identity
  patch (which is how it was originally coded anyway).  The attached patch
  matches Tom's suggestion of displaying the same OID text, just
  conditionally.
 
  Seeing psql \d+ will have a conditional display line in PG 9.4, making
  OIDs conditional seems to make sense.

 Frankly, I think this is all completely wrong-headed.  \d+ should
 display *everything*.  That's what the + means, isn't it?  Coming up
 with complex rules for which things get shown and which things get
 hidden just makes the output harder to understand, without any
 compensating benefit.

 Well, there are a lot of _other_ things we could display about the table
 that we don't.  Are you suggesting we add those too?  What about
 Replica Identity?  Should that always display?

In \d+, I think it absolutely should.

 The bottom line is we already have complex rules to display only what is
 _reasonable_.  If you want everything, you have to look at the system
 tables.

I don't really agree with that.  I understand that there's some
information (like dependencies) that you can't get through psql
because we don't really have a principled idea for what an interface
to that would look like, but I don't think that's a good thing.  Every
time I have to write a query by hand to get some information instead
of being able to get it through a backslash command, that slows me
down considerably.  But I'm lucky in that I actually know enough to do
that, which most users don't.  Information that you can't get through
\d+ just isn't available to a large percentage of our user base
without huge effort.  We shouldn't be stingy about putting stuff in
there that people may need to see.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql \d+ and oid display

2014-04-01 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 On Tue, Apr  1, 2014 at 11:30:54AM -0400, Robert Haas wrote:
 Frankly, I think this is all completely wrong-headed.  \d+ should
 display *everything*.  That's what the + means, isn't it?  Coming up
 with complex rules for which things get shown and which things get
 hidden just makes the output harder to understand, without any
 compensating benefit.

 Well, there are a lot of _other_ things we could display about the table
 that we don't.  Are you suggesting we add those too?  What about
 Replica Identity?  Should that always display?

 The bottom line is we already have complex rules to display only what is
 _reasonable_.  If you want everything, you have to look at the system
 tables.

Yeah.  All of the \d commands are compromises between verbosity and
displaying all needful information, so I don't think that Robert's
proposed approach is particularly helpful.  It would only lead to
requests for \d-plus-one-half mode once people realized that everything
is too much.  (I'd rather go in the direction of \d++ yielding extra
info, if we ever decide to have more than two verbosity levels.)

I do think there's some merit to the argument of "it's been like
this for years, why change it?".  But if you reject backwards
compatibility as an overriding factor here, the currently-proposed
patch seems the sanest design to me.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql \d+ and oid display

2014-04-01 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Apr 1, 2014 at 11:42 AM, Bruce Momjian br...@momjian.us wrote:
 The bottom line is we already have complex rules to display only what is
 _reasonable_.  If you want everything, you have to look at the system
 tables.

 I don't really agree with that.  I understand that there's some
 information (like dependencies) that you can't get through psql
 because we don't really have a principled idea for what an interface
 to that would look like, but I don't think that's a good thing.  Every
 time I have to write a query by hand to get some information instead
 of being able to get it through a backslash command, that slows me
 down considerably.  But I'm lucky in that I actually know enough to do
 that, which most users don't.  Information that you can't get through
 \d+ just isn't available to a large percentage of our user base
 without huge effort.  We shouldn't be stingy about putting stuff in
 there that people may need to see.

At least in this particular case, that's an uninteresting argument.
We aren't being stingy with information, because the proposed new display
approach provides *exactly the same information* as before.  (If you see
the Has OIDs line, it's got OIDs, otherwise it doesn't.)  What we are
being stingy about is display clutter, and I believe that's a good thing.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PQputCopyData dont signal error

2014-04-01 Thread steve k
Thanks Robert, 

I'm already there.  Obviously I'm the only one in the room that didn't get
the memo.  I've had some time to reflect on what might be done differently,
just not any time to try it.  If I get it to work I'll let everyone know. 
The code I was working with went away when the Network admins pushed
something that forced me to reboot and close all my temp file windows last
Friday.  Sorry for any troubles I've caused you all and I didn't mean to put
everyone on the defensive.  

It has occurred to me that I may have been examining the wrong result set. 
One of the things you mentioned is I often find it necessary to refer to
existing examples of code when trying to figure out how to do things
correctly.  I couldn't agree more.  Haven't seen one yet, but found plenty
of discussion that tap danced around one or more of the components of the
copy, put, end paradigm.  Maybe I should have just asked for a sample code
snippet but didn't after a day or so of frustration and trying to piece
together other people's incomplete samples.  It seems that none of the
discussion threads I looked at (which doesn't mean there aren't any - before
everyone gets worked up) where people were having similar questions
ever offered a working solution.  So I don't know if those folks gave up or
figured it out on their own.   In the end it comes down to how much time do
you have to google, read through a thread, find out that discussion thread
really has nothing to do with your topic of interest, repeat, finally try
something different, repeat?  Again, my apologies for lighting a fire under
everyone.  

Regards, 
Steve K. 



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/PQputCopyData-dont-signal-error-tp4302340p5798202.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.




Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-04-01 Thread Tomas Vondra
On 31.3.2014 21:04, Robert Haas wrote:
 On Thu, Mar 27, 2014 at 10:00 PM, Tomas Vondra t...@fuzzy.cz wrote:
 The patch also does one more thing - it changes how the arrays (in the
 aggregate state) grow. Originally it worked like this

 /* initial size */
 astate->alen = 64;

 /* when full, grow exponentially */
 if (astate->nelems >= astate->alen)
 astate->alen *= 2;

 so the array length would grow like this 64 -> 128 -> 256 -> 512 ...
 (note we're talking about elements, not bytes, so with 32-bit
 integers it's actually 256B -> 512B -> 1024B -> ...).

 While I do understand the point of this (minimizing palloc overhead), I
 find this pretty dangerous, especially in case of array_agg(). I've
 modified the growth like this:

 /* initial size */
 astate->alen = 4;

 /* when full, grow exponentially */
 if (astate->nelems >= astate->alen)
 astate->alen += 4;

 I admit that might be a bit too aggressive, and maybe there's a better
 way to do this - with better balance between safety and speed. I was
 thinking about something like this:


 /* initial size */
 astate->alen = 4;

 /* when full, grow exponentially */
 if (astate->nelems >= astate->alen)
 if (astate->alen < 128)
 astate->alen *= 2;
 else
 astate->alen += 128;

 i.e. initial size with exponential growth, but capped at 128B.
 
 So I think this kind of thing is very sensible, but the last time I
 suggested something similar, I got told no:
 
 http://www.postgresql.org/message-id/caeylb_wlght7yjlare9ppert5rkd5zjbb15te+kpgejgqko...@mail.gmail.com
 
 But I think you're right and the objections previously raised are 
 wrong. I suspect that the point at which we should stop doubling is 
 higher than 128 elements, because that's only 8kB, which really
 isn't that big - and the idea that the resizing overhead takes only 
 amortized constant time is surely appealing. But I still think that 
 doubling *forever* is a bad idea, here and there. The fact that
 we've written the code that way in lots of places doesn't make it the
 right algorithm.

I've been thinking about it a bit more and maybe the doubling is not
that bad idea, after all. What I'd like to see is a solution that does
wastes less than some known fraction of the allocated memory, and
apparently that's what doubling does ...

Let's assume we have many buffers (arrays in array_agg), allocated in
this manner. Let's assume the buffers are independent, i.e. the doubling
is not somehow synchronized for the buffers.

Now, at arbitrary time the buffers should be ~75% full on average. There
will be buffers that were just doubled (50% full), buffers that will be
doubled soon (100% full) and buffers somewhere in between. But on
average the buffers should be 75%. That means we're wasting 25% memory
on average, which seems quite acceptable to me. We could probably use a
different growth rate (say 1.5x, resulting in 12.5% memory being
wasted), but I don't see this as the main problem (and I won't fight
for this part of array_agg patch).
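The ~75% estimate above is easy to sanity-check with a quick simulation (a sketch only; it assumes element counts are uniformly distributed, which real workloads need not be):

```c
#include <assert.h>

/* Average fill fraction n/capacity for n = 1..N when the capacity grows
 * by doubling (capacity = next power of two >= n).  Illustrates the
 * ~75%-full / ~25%-wasted back-of-the-envelope figure from the
 * discussion; assumes sizes are uniformly distributed. */
double avg_utilization(int N)
{
    double total = 0.0;
    int cap = 1;
    int n;

    for (n = 1; n <= N; n++)
    {
        while (cap < n)
            cap *= 2;           /* doubling growth policy */
        total += (double) n / cap;
    }
    return total / N;
}
```

For N = 2^20 the result comes out just above 0.75, i.e. about a quarter of the allocated memory wasted on average.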

The current array_agg however violates some of the assumptions
mentioned above, because it

(1) pre-allocates quite large number of items (64) at the beginning,
resulting in ~98% of memory being wasted initially

(2) allocates one memory context per group, with 8kB initial size, so
you're actually wasting ~99.999% of the memory

(3) thanks to the dedicated memory contexts, the doubling is pretty
much pointless up until you cross the 8kB boundary

IMNSHO these are the issues we really should fix - by lowering the
initial element count (64 -> 4) and using a single memory context.

regards
Tomas




Re: [HACKERS] GSoC proposal - make an unlogged table logged

2014-04-01 Thread Fabrízio de Royes Mello
On Fri, Mar 7, 2014 at 12:36 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Fabrízio de Royes Mello fabriziome...@gmail.com
 writes:
  Do you think is difficult to implement ALTER TABLE ... SET UNLOGGED
 too?
  Thinking in a scope of one GSoC, of course.

 I think it's basically the same thing.  You might hope to optimize it;
 but you have to create (rather than remove) an init fork, and there's
 no way to do that in exact sync with the commit.  So for safety I think
 you have to copy the data into a new relfilenode.


Hi all,

On the GSoC proposal page [1] I received some suggestions for stretch goals:

* ALTER TABLE name SET UNLOGGED. This is essentially the reverse of the
core proposal, which is ALTER TABLE name SET LOGGED. Yes, I think that
should definitely be included. It would be weird to have SET LOGGED but not
SET UNLOGGED.

* Allow unlogged indexes on logged tables.

* Implement ALTER TABLE name SET LOGGED without rewriting the whole
table, when wal_level = minimal.

* Allow unlogged materialized views.

Comments?


[1]
http://www.google-melange.com/gsoc/proposal/review/student/google/gsoc2014/fabriziomello/5629499534213120

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


Re: [HACKERS] GSoC proposal - make an unlogged table logged

2014-04-01 Thread Andres Freund
On 2014-04-01 13:37:57 -0300, Fabrízio de Royes Mello wrote:
 On the GSoC proposal page [1] I received some suggestions for stretch goals:
 
 * ALTER TABLE name SET UNLOGGED. This is essentially the reverse of the
 core proposal, which is ALTER TABLE name SET LOGGED. Yes, I think that
 should definitely be included. It would be weird to have SET LOGGED but not
 SET UNLOGGED.

Yes, that makes sense.

 * Allow unlogged indexes on logged tables.

I don't think it's realistic to build the infrastructure necessary for
that as part of gsoc. The reasons have been explained somewhere in this
thread.

 * Implement ALTER TABLE name SET LOGGED without rewriting the whole
 table, when wal_level = minimal.

Yea, maybe.

 * Allow unlogged materialized views.

I don't think that's realistic either.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] using arrays within structure in ECPG

2014-04-01 Thread Michael Meskes
Hi Ashutosh,

 I tried to fix the offset problem. PFA the patch. It does solve the
 problem of setting wrong offset in ECPGdo() call.

Thanks, looks correct to me.

 But then there is problem of interpreting the result from server as an
 array within array of structure. The problem is there is in
 ecpg_get_data(). This function can not understand that the field is an
 array of integers (or for that matter array of anything) and store all
 the values in contiguous memory at the given address.

I guess I know where that comes from, without actually looking at the
code, though. Nested arrays are not supported by ecpg and the
precompiler spits out an error message, just check preproc/type.c.
However, in your example you have the struct essentially sandwiched
between the arrays, and the (too) simple check in that file doesn't
notice, even though the implementation is nevertheless lacking.

I'm sorry, but this sounds like a missing-feature bug.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at gmail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL




[HACKERS] get_fn_expr_variadic considered harmful

2014-04-01 Thread Tom Lane
In bug #9817 there's a complaint that the planner fails to consider
these expressions equivalent:
  foo('a'::text, 'b'::text)
  foo(variadic array['a'::text, 'b'::text])
when foo() is declared as taking variadic text[].

Such cases worked okay before 9.3, the reason being that the use of
VARIADIC disappeared after parsing, so that the two calls were in fact
identical.  However, once we added FuncExpr.funcvariadic, they're not
identical anymore.

I thought for a bit that we could fix this easily by having equal()
disregard funcvariadic; there is precedent for that, since it ignores
other fields that are just for display purposes and have no semantic
impact, such as CoercionForm and location.

Unfortunately, funcvariadic *does* have semantic impact on a few
functions that use get_fn_expr_variadic, such as format().  Since
the planner has no good way to know which ones those are, it cannot
safely ignore funcvariadic while matching expressions.

In short, commit 75b39e790 broke this rather badly, and I don't see
any easy way out.

We could possibly salvage something by redefining funcvariadic as only
being true if VARIADIC was used *and* the function is VARIADIC ANY,
so that it returns to not being different for semantically-equivalent
cases.  This would be a bit messy, since it would not un-break the
behavior for any already stored rules or indexes in 9.3 databases.
But I'm not sure there is any good way to make the problem magically
go away in 9.3 databases.

Thoughts?

regards, tom lane




Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-04-01 Thread Alvaro Herrera
How much of this problem can be attributed to the fact that repalloc has
to copy the data from the old array into the new one?  If it's large,
perhaps we could solve it by replicating the trick we use for
InvalidationChunk.  It'd be a bit messy, but the mess would be pretty
well contained, I think.
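For context, the InvalidationChunk trick is essentially a linked list of fixed-size chunks: growing appends a fresh chunk, so existing elements are never moved the way repalloc moves them. A much-simplified sketch of the idea (not the actual sinval code; names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

#define CHUNK_NELEMS 32

/* A chunked append-only list in the style of InvalidationChunk:
 * when the tail chunk fills up we allocate a new one and link it in,
 * so no already-stored element is ever copied. */
typedef struct Chunk
{
    struct Chunk *next;
    int nused;
    int items[CHUNK_NELEMS];
} Chunk;

typedef struct
{
    Chunk *head;
    Chunk *tail;
    int count;
} ChunkList;

void chunklist_append(ChunkList *list, int value)
{
    if (list->tail == NULL || list->tail->nused == CHUNK_NELEMS)
    {
        /* current chunk full (or list empty): link in a new chunk */
        Chunk *c = calloc(1, sizeof(Chunk));

        if (list->tail)
            list->tail->next = c;
        else
            list->head = c;
        list->tail = c;
    }
    list->tail->items[list->tail->nused++] = value;
    list->count++;
}
```

The trade-off is that random access now requires walking the chunk chain, which is why it suits append-then-scan uses best.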

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-04-01 Thread Tom Lane
Tomas Vondra t...@fuzzy.cz writes:
 I've been thinking about it a bit more and maybe the doubling is not
 that bad idea, after all.

It is not.  There's a reason why that's our standard behavior.

 The current array_agg however violates some of the assumptions
 mentioned above, because it
 (1) pre-allocates quite large number of items (64) at the beginning,
 resulting in ~98% of memory being wasted initially
 (2) allocates one memory context per group, with 8kB initial size, so
 you're actually wasting ~99.999% of the memory
 (3) thanks to the dedicated memory contexts, the doubling is pretty
 much pointless up until you cross the 8kB boundary

 IMNSHO these are the issues we really should fix - by lowering the
 initial element count (64 -> 4) and using a single memory context.

The real issue here is that all those decisions are perfectly reasonable
if you expect that a large number of values will get aggregated --- and
even if you don't expect that, they're cheap insurance in simple cases.
It only gets to be a problem if you have a lot of concurrent executions
of array_agg, such as in a grouped-aggregate query.  You're essentially
arguing that in the grouped-aggregate case, it's better to optimize on
the assumption that only a very small number of values will get aggregated
(per hash table entry) --- which is possibly reasonable, but the argument
that it's okay to pessimize the behavior for other cases seems pretty
flimsy from here.

Actually, though, the patch as given outright breaks things for both the
grouped and ungrouped cases, because the aggregate no longer releases
memory when it's done.  That's going to result in memory bloat not
savings, in any situation where the aggregate is executed repeatedly.

I think a patch that stood a chance of getting committed would need to
detect whether the aggregate was being called in simple or grouped
contexts, and apply different behaviors in the two cases.  And you
can't just remove the sub-context without providing some substitute
cleanup mechanism.  Possibly you could keep the context but give it
some much-more-miserly allocation parameters in the grouped case.

regards, tom lane




Re: [HACKERS] psql \d+ and oid display

2014-04-01 Thread Robert Haas
On Tue, Apr 1, 2014 at 12:09 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Apr 1, 2014 at 11:42 AM, Bruce Momjian br...@momjian.us wrote:
 The bottom line is we already have complex rules to display only what is
 _reasonable_.  If you want everything, you have to look at the system
 tables.

 I don't really agree with that.  I understand that there's some
 information (like dependencies) that you can't get through psql
 because we don't really have a principled idea for what an interface
 to that would look like, but I don't think that's a good thing.  Every
 time I have to write a query by hand to get some information instead
 of being able to get it through a backslash command, that slows me
 down considerably.  But I'm lucky in that I actually know enough to do
 that, which most users don't.  Information that you can't get through
 \d+ just isn't available to a large percentage of our user base
 without huge effort.  We shouldn't be stingy about putting stuff in
 there that people may need to see.

 At least in this particular case, that's an uninteresting argument.
 We aren't being stingy with information, because the proposed new display
 approach provides *exactly the same information* as before.  (If you see
  the "Has OIDs" line, it's got OIDs, otherwise it doesn't.)  What we are
 being stingy about is display clutter, and I believe that's a good thing.

Although I agree with the general principle, I'm skeptical in this
case.  There are a bunch of table-level options, and I don't think
it's very reasonable to expect that users are going to remember which
ones are going to be displayed under which conditions, especially if
we change it from release to release.  If somebody doesn't see the
"Has OIDs" line, are they going to conclude that the table doesn't
have OIDs, or are they going to conclude that psql doesn't ever
display that information and they need to query pg_class manually?
I'm sure at least some people will guess wrong.

Now, admittedly, this is not the hill I want to die on.  The future of
PostgreSQL doesn't rest on whatever ends up happening here.  But I
think what's going on on this thread is a lot of tinkering with stuff
that's not really broken.  I'm not saying don't ever change psql
output.  What I'm saying is that changing psql output that is
absolutely fine the way it is does not represent meaningful progress.
The "Replica Identity" and "Has OIDs" lines are a negligible
percentage of what \d+ spits out - in a test I just did, 2 out of 37
lines on \d+ pg_class.  I can't accept that tinkering with that is
reducing clutter in any meaningful way; it's just change for the sake
of change.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] GSoC proposal - make an unlogged table logged

2014-04-01 Thread Heikki Linnakangas

On 03/07/2014 05:36 AM, Tom Lane wrote:

Fabrízio de Royes Mello fabriziome...@gmail.com writes:

Do you think is difficult to implement ALTER TABLE ... SET UNLOGGED too?
Thinking in a scope of one GSoC, of course.


I think it's basically the same thing.  You might hope to optimize it;
but you have to create (rather than remove) an init fork, and there's
no way to do that in exact sync with the commit.


You just have to include that information with the commit WAL record, no?

- Heikki




Re: [HACKERS] psql \d+ and oid display

2014-04-01 Thread Andres Freund
On 2014-04-01 13:36:02 -0400, Robert Haas wrote:
 I can't accept that tinkering with that is
 reducing clutter in any meaningful way; it's just change for the sake
 of change.

+1

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] psql \d+ and oid display

2014-04-01 Thread Jeff Janes
On Tue, Apr 1, 2014 at 8:42 AM, Bruce Momjian br...@momjian.us wrote:

 On Tue, Apr  1, 2014 at 11:30:54AM -0400, Robert Haas wrote:
   OK, I have now applied the conditional display of Replica Identity
   patch (which is how it was originally coded anyway).  The attached
 patch
   matches Tom's suggestion of displaying the same OID text, just
   conditionally.
  
   Seeing psql \d+ will have a conditional display line in PG 9.4, making
   OIDs conditional seems to make sense.
 
  Frankly, I think this is all completely wrong-headed.  \d+ should
  display *everything*.  That's what the + means, isn't it?  Coming up
  with complex rules for which things get shown and which things get
  hidden just makes the output harder to understand, without any
  compensating benefit.

 Well, there are lot of _other_ things we could display about the table
 that we don't.  Are you suggesting we add those too?


I am suggesting it for at least some other things.  I'm rather aggrieved
that \d+ without argument shows you the size and the description/comment
for every table, but \d+ foo does not show you the size and
description/comment of the specific table you just asked for.

Not so aggrieved that I wrote and submitted a patch, you might notice; but
I'll get to it eventually if no one beats me to it.

Cheers,

Jeff


Re: [HACKERS] Patch to add support of IF NOT EXISTS to others CREATE statements

2014-04-01 Thread Robert Haas
On Tue, Apr 1, 2014 at 10:03 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 I'm willing to bend that to the extent of saying that COR leaves in place
 subsidiary properties that you might add *with additional statements* ---
 for example, foreign keys for a table, or privilege grants for a role.
 But the properties of the role itself have to be predictable from the COR
 statement, or it's useless.

+1.

 Where this is a bit more interesting is in the case of sequences, where
 resetting the sequence to zero may cause further inserts into an
 existing table to fail.

 Yeah.  Sequences do have contained data, which makes COR harder to define
 --- that's part of the reason why we have CINE not COR for tables, and
 maybe we have to do the same for sequences.  The point being exactly
 that if you use CINE, you're implicitly accepting that you don't know
 the ensuing state fully.

Yeah.  I think CINE is more sensible than COR for sequences, for
precisely the reason that they do have contained data (even if it's
basically only one value).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PQputCopyData dont signal error

2014-04-01 Thread Robert Haas
On Tue, Apr 1, 2014 at 12:14 PM, steve k steven.c.koh...@nasa.gov wrote:
 I'm already there.  Obviously I'm the only one in the room that didn't get
 the memo.  I've had some time to reflect on what might be done differently,
 just not any time to try it.  If I get it to work I'll let everyone know.
 The code I was working with went away when the Network admins pushed
 something that forced me to reboot and close all my temp file windows last
 Friday.  Sorry for any troubles I've caused you all and I didn't mean to put
 everyone on the defensive.

No problem.

 It has occurred to me that I may have been examining the wrong results set.

That definitely seems possible.  It is easier than it might be to mess
that up; there's really nothing in the API to warn you if you've made
that mistake.  And I've been there myself.

 One of the things you mentioned is I often find it necessary to refer to
 existing examples of code when trying to figure out how to do things
 correctly.  I couldn't agree more.  Haven't seen one yet, but found plenty
 of discussion that tap danced around one or more of the components of the
 copy, put, end paradigm.  Maybe I should have just asked for a sample code
 snippet but didn't after a day or so of frustration and trying to piece
 together other people's incomplete samples.

FWIW, I've generally found that the best examples are what's in the
core distribution.  I'd go and look at a tool like psql or pg_restore
and find the code that handles this, and then copy it and cut it down
to what you need.  You could go around looking for other snippets on
the Internet that are more self-contained, but there's too much chance
that they're actually wrong.  The code that implements the existing
core tools is more likely to be good code - not that it can never have
any bugs, but it gets a lot of exercise in real-world deployments, so
if something really obvious like error detection is broken then we can
be pretty sure a user will complain.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Patch to add support of IF NOT EXISTS to others CREATE statements

2014-04-01 Thread Fabrízio de Royes Mello
On Tue, Apr 1, 2014 at 2:46 PM, Robert Haas robertmh...@gmail.com wrote:

 On Tue, Apr 1, 2014 at 10:03 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  I'm willing to bend that to the extent of saying that COR leaves in place
  subsidiary properties that you might add *with additional statements* ---
  for example, foreign keys for a table, or privilege grants for a role.
  But the properties of the role itself have to be predictable from the COR
  statement, or it's useless.

 +1.

  Where this is a bit more interesting is in the case of sequences, where
  resetting the sequence to zero may cause further inserts into an
  existing table to fail.
 
  Yeah.  Sequences do have contained data, which makes COR harder to define
  --- that's part of the reason why we have CINE not COR for tables, and
  maybe we have to do the same for sequences.  The point being exactly
  that if you use CINE, you're implicitly accepting that you don't know
  the ensuing state fully.

 Yeah.  I think CINE is more sensible than COR for sequences, for
 precisely the reason that they do have contained data (even if it's
 basically only one value).


Well then, I'll separate CINE for sequences from the previously rejected
patch... is this material for 9.5?

Regards,

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


Re: [HACKERS] GSoC proposal - make an unlogged table logged

2014-04-01 Thread Jim Nasby

On 3/4/14, 8:50 AM, Andres Freund wrote:

Can't that be solved by just creating the permanent relation in a new
relfilenode? That's equivalent to a rewrite, yes, but we need to do that
for anything but wal_level=minimal anyway.


Maybe I'm missing something, but doesn't this actually involve writing the data 
twice? Once into WAL and again into the relation itself?
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net




Re: [HACKERS] Inheritance of foreign key constraints.

2014-04-01 Thread Robert Haas
On Tue, Apr 1, 2014 at 8:13 AM, Andrzej Mazurkiewicz
andr...@mazurkiewicz.org wrote:
 That change is necessary to reduce the scope of modifications needed to
 implement the inheritance of foreign key constraints, particularly for the
 removal of objects.

Nobody here is going to accept that goal as a valid reason to set the
dependency type to the wrong value.  The value we assign for the
dependency type has important user-visible semantics which we are not
going to break for the purpose of making some feature simpler to
implement.  Of course, PostgreSQL is open source, so you can change
your own copy however you like.  But such modifications won't be
accepted here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] GSoC proposal - make an unlogged table logged

2014-04-01 Thread Andres Freund
On 2014-04-01 12:56:04 -0500, Jim Nasby wrote:
 On 3/4/14, 8:50 AM, Andres Freund wrote:
 Can't that be solved by just creating the permanent relation in a new
 relfilenode? That's equivalent to a rewrite, yes, but we need to do that
 for anything but wal_level=minimal anyway.
 
 Maybe I'm missing something, but doesn't this actually involve writing the 
 data twice? Once into WAL and again into the relation itself?

Yes. But as I said, that's unavoidable for anything but
wal_level=minimal. If somebody wants to put in the additional nontrivial
work to make it work faster with wal_level=minimal, they can do so. But
the other case is more general and needs to be done anyway.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] get_fn_expr_variadic considered harmful

2014-04-01 Thread Robert Haas
On Tue, Apr 1, 2014 at 12:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 In bug #9817 there's a complaint that the planner fails to consider
 these expressions equivalent:
   foo('a'::text, 'b'::text)
   foo(variadic array['a'::text, 'b'::text])
 when foo() is declared as taking variadic text[].

My first reaction to this was "who cares?" - after all, the user should
just write the expression the same way both times and then they won't
have this problem.  But after going and looking at the bug report I
see that the user wrote it the first way consistently, but pg_dump
blithely rewrote it to the second way.  I'm disinclined to view that
as a planner problem; it seems to me to be a pg_dump or ruleutils bug.
 If those two things don't have the same parse representation, then
pg_dump has no business treating them as equivalent - even if we were
to put enough smarts into the planner to paper over that
non-equivalence.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Doing better at HINTing an appropriate column within errorMissingColumn()

2014-04-01 Thread Peter Geoghegan
On Tue, Apr 1, 2014 at 7:25 AM, Robert Haas robertmh...@gmail.com wrote:
 There's a risk of adding not
 only CPU cycles but also clutter.  If we do things that encourage
 people to crank the log verbosity down, I think that's going to be bad
 more often than it's good.

While I share your concern here, I think that this is something that
is only likely to be seen in an interactive psql session, where it is
seen quite frequently. I am reasonably confident that it's highly
unusual to see ERRCODE_UNDEFINED_COLUMN in other settings. Not having
to do a mental context switch when writing an ad-hoc query has
considerable value. Even C compilers like Clang have this kind of
feedback.   This is a patch that was written out of personal
frustration with the experience of interacting with many different
databases. Things like the Python REPL don't do so much of this kind
of thing, but presumably that's because of Python's dynamic typing.
This is a HINT that can be given with fairly high confidence that
it'll be helpful - there just won't be that many things that the user
could have meant to choose from. I think it's even useful when the
suggested column is distant from the original suggestion (i.e.
errorMissingColumn() offers only what is clearly a wild guess),
because then the user knows that he or she has got it quite wrong.
Frequently, this will be because the wrong synonym for what should
have been written was used.
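For illustration, the usual way such a "did you mean" HINT ranks candidate column names is an edit distance such as Levenshtein's. A self-contained sketch of that scoring idea (hypothetical; not the patch's actual code):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Classic dynamic-programming Levenshtein distance: the number of
 * single-character insertions, deletions and substitutions needed to
 * turn a into b.  A candidate column with the smallest distance to the
 * misspelled name would be offered in the HINT. */
int edit_distance(const char *a, const char *b)
{
    size_t la = strlen(a);
    size_t lb = strlen(b);
    int *prev = malloc((lb + 1) * sizeof(int));
    int *cur = malloc((lb + 1) * sizeof(int));

    /* distance from empty prefix of a to each prefix of b */
    for (size_t j = 0; j <= lb; j++)
        prev[j] = (int) j;

    for (size_t i = 1; i <= la; i++)
    {
        cur[0] = (int) i;
        for (size_t j = 1; j <= lb; j++)
        {
            int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            int del = prev[j] + 1;
            int ins = cur[j - 1] + 1;
            int sub = prev[j - 1] + cost;
            int m = del < ins ? del : ins;

            cur[j] = m < sub ? m : sub;
        }
        int *tmp = prev;
        prev = cur;
        cur = tmp;
    }

    int d = prev[lb];
    free(prev);
    free(cur);
    return d;
}
```

With a small cutoff on the distance, a suggestion is only offered when it is plausibly what the user meant, which keeps wild guesses out of the HINT.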

 It strains credulity to think that this
 patch alone would have that effect, but there might be quite a few
 similar improvements that are possible.  So I think it would be good
 to consider how far we want to go in this direction and where we think
 we might want to stop.  That's not to say, let's not ever do this,
 just, let's think carefully about where we want to end up.

Fair enough.

-- 
Peter Geoghegan




Re: [HACKERS] four minor proposals for 9.5

2014-04-01 Thread Pavel Stehule
2014-03-27 17:56 GMT+01:00 Pavel Stehule pavel.steh...@gmail.com:

 Hello

 After a week, I can evaluate the community's reflections:


 2014-03-19 16:34 GMT+01:00 Pavel Stehule pavel.steh...@gmail.com:

 Hello

 I wrote a few patches that we use in our production. These patches are
 small, but I hope they can be interesting for upstream:

 1. cancel time - we log the execution time of cancelled statements


 there is interest



 2. fatal verbose - this patch ensures a verbose log for fatal errors. It
 simplifies the investigation of the reasons for an error.


 not too much



 3. relation limit - the possibility to set a session limit for the maximum
 size of relations. No relation can be extended over this limit in a session
 when this value is higher than zero. Motivation - we use lots of queries
 like CREATE TABLE AS SELECT ..., and some very big results reduced free
 disk space too much. It was a high risk in our multi-user environment. The
 motivation is similar to temp_files_limit.


 there is no interest



 4. track statement lock - we are able to track the locking time for a query
 and print this data in the slow-query log and the auto_explain log. It helps
 us with later slow-query-log analysis.


 there is interest

 So I'll prepare a some prototypes in next month for

 1. log a execution time for cancelled queries,
 2. track a query lock time


When I thought about this proposal, I realized that our implementation is
more or less a hack that works well at GoodData but can be inconsistent with
native PostgreSQL. So in this proposal I plan something different from what we
use, but I hope it is more consistent with upstream.

So I miss the execution time for cancelled queries. The same information can
be interesting for queries that were stopped for other reasons - temp file
limits can stop queries after 5 minutes, out of memory, etc.

It is not hard to implement printing the duration for cancelled queries, but
it is impossible to do it for every kind of exception. But there is a way: we
can use the log_line_prefix space. Currently there is no way to print the
duration there - so we can introduce a new symbol %D for duration. I would use
the same technique for printing the lock time - it can be printed with the
symbol %L.

so a log entry would look like

timestamp, duration, locktime ERROR, query was cancelled 

Anybody can activate or deactivate these features by using %D or %L in
log_line_prefix
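
A hedged sketch of what the proposal would look like in practice - the %D
(duration) and %L (lock time) escapes do not exist in any released version;
they are the hypothetical additions being proposed here:

```
# hypothetical postgresql.conf fragment, assuming the proposed escapes existed:
log_line_prefix = '%t %D %L '

# a query cancelled after five minutes might then be logged roughly as:
#   2014-04-01 12:00:00 CEST 300000.000 ms 1250.000 ms ERROR:  canceling statement due to user request
```

The point of putting this in log_line_prefix rather than in the message text
is that each site can opt in or out per escape, exactly as with the existing
%t, %d, %u escapes.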

Comments, notes?

Regards

Pavel





Regards

 Pavel



 Do you thinking so  these patches can be generally useful?

 Regards

 Pavel





Re: [HACKERS] get_fn_expr_variadic considered harmful

2014-04-01 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Apr 1, 2014 at 12:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 In bug #9817 there's a complaint that the planner fails to consider
 these expressions equivalent:
 foo('a'::text, 'b'::text)
 foo(variadic array['a'::text, 'b'::text])
 when foo() is declared as taking variadic text[].

 My first reaction to this was who cares? after all, the user should
 just write the expression the same way both times and then they won't
 have this problem.  But after going and looking at the bug report I
 see that the user wrote it the first way consistently, but pg_dump
 blithely rewrote it to the second way.  I'm disinclined to view that
 as a planner problem; it seems to me to be a pg_dump or ruleutils bug.
  If those two things don't have the same parse representation, then
 pg_dump has no business treating them as equivalent - even if we were
 to put enough smarts into the planner to paper over that
 non-equivalence.

The point is that they *were* equivalent before 9.3, and so ruleutils
was entirely within its rights to not worry about which way it dumped
the expression; indeed, it couldn't, because the information was not
there as to which way the call had been written originally.  I do not
think it's appropriate to blame ruleutils for taking advantage of this
equivalence, because more than likely user applications have too.
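
For concreteness, a sketch of the two call forms in question (the function
itself is hypothetical, defined here only to make the example self-contained):

```sql
CREATE FUNCTION foo(VARIADIC args text[]) RETURNS text
    LANGUAGE sql AS $$ SELECT array_to_string($1, ',') $$;

-- Both calls return the same result.  Before 9.3 they also produced the
-- same parse representation, which is the equivalence ruleutils relied on:
SELECT foo('a'::text, 'b'::text);                  -- direct variadic call
SELECT foo(VARIADIC ARRAY['a'::text, 'b'::text]);  -- explicit VARIADIC array
```
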

Or in other words, what I wrote above is a more general statement of the
problem than what was complained of in bug #9817 ... but if we just hack
ruleutils to dump the cases differently, we will fail to fix the more
general problem.  So we can still expect future bug reports about that,
because it worked as-expected for years before 9.3.

There's also the point that even if we changed ruleutils' behavior
now, this would not fix existing dump files that have considered the
two forms interchangeable ever since VARIADIC existed.  And we
generally try hard to not break existing dump files.  To be even
more to the point: what you propose is incapable of fixing the precise
problem stated in the bug report, because it's complaining about a
dump taken from 9.1, and there is *no* way to make 9.1 produce a
dump that only uses VARIADIC if the original call did.  It hasn't
got the information.  Even using a newer version of pg_dump wouldn't
help that.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] get_fn_expr_variadic considered harmful

2014-04-01 Thread Andres Freund
On 2014-04-01 12:52:54 -0400, Tom Lane wrote:
 We could possibly salvage something by redefining funcvariadic as only
 being true if VARIADIC was used *and* the function is VARIADIC ANY,
 so that it returns to not being different for semantically-equivalent
 cases.  This would be a bit messy, since it would not un-break the
 behavior for any already stored rules or indexes in 9.3 databases.
 But I'm not sure there is any good way to make the problem magically
 go away in 9.3 databases.

It's pretty damn ugly, but if we're going for magic in around those
edges, we could just force the new behaviour in readfuncs.c. IIUC all
the neccessary data for it is there.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] get_fn_expr_variadic considered harmful

2014-04-01 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2014-04-01 12:52:54 -0400, Tom Lane wrote:
 We could possibly salvage something by redefining funcvariadic as only
 being true if VARIADIC was used *and* the function is VARIADIC ANY,
 so that it returns to not being different for semantically-equivalent
 cases.  This would be a bit messy, since it would not un-break the
 behavior for any already stored rules or indexes in 9.3 databases.
 But I'm not sure there is any good way to make the problem magically
 go away in 9.3 databases.

 It's pretty damn ugly, but if we're going for magic in around those
 edges, we could just force the new behaviour in readfuncs.c. IIUC all
 the neccessary data for it is there.

I don't want either readfuncs or equalfuncs going in for catalog lookups,
which is what they'd have to do to fix it at that level (the key point
being they'd have to find out whether the called function is declared as
VARIADIC ANY).  Too much risk of unpleasant side effects if we do that.
The parser, on the other hand, has ready access to the function's
parameter list when building a FuncExpr.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-04-01 Thread Tomas Vondra
On 1.4.2014 19:08, Tom Lane wrote:
 Tomas Vondra t...@fuzzy.cz writes:
 I've been thinking about it a bit more and maybe the doubling is not
 that bad idea, after all.
 
 It is not.  There's a reason why that's our standard behavior.
 
 The current array_agg however violates some of the assumptions
 mentioned above, because it
 (1) pre-allocates quite large number of items (64) at the beginning,
 resulting in ~98% of memory being wasted initially
 (2) allocates one memory context per group, with 8kB initial size, so
 you're actually wasting ~99.999% of the memory
 (3) thanks to the dedicated memory contexts, the doubling is pretty
 much pointless up until you cross the 8kB boundary
 
 IMNSHO these are the issues we really should fix - by lowering the
  initial element count (64->4) and using a single memory context.
 
 The real issue here is that all those decisions are perfectly
 reasonable if you expect that a large number of values will get
 aggregated --- and even if you don't expect that, they're cheap
 insurance in simple cases.

Yes, if you expect a large number of values it's perfectly valid. But
what if those assumptions are faulty? Is it OK to fail because of OOM
even for trivial queries breaking those assumptions?

I'd like to improve that and make this work without impacting the
queries that match the assumptions.

 It only gets to be a problem if you have a lot of concurrent
 executions of array_agg, such as in a grouped-aggregate query. You're
 essentially arguing that in the grouped-aggregate case, it's better
 to optimize on the assumption that only a very small number of values
 will get aggregated (per hash table entry) --- which is possibly
 reasonable, but the argument that it's okay to pessimize the behavior
 for other cases seems pretty flimsy from here.

I'm not saying it's okay to pessimize the behavior of other cases. I
admit decreasing the initial size from 64 to only 4 items may be too
aggressive - let's measure the difference and tweak the number
accordingly. Heck, even 64 items is way lower than the 8kB utilized by
each per-group memory context right now.
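
As an illustration of the case being discussed - a grouped aggregate with
many small groups, where the per-group context overhead dominates (the
numbers are illustrative, not measured):

```sql
-- ~1M hash table entries, each aggregating only ~3 values.  With a dedicated
-- 8kB memory context per group, the per-group overhead alone is on the order
-- of gigabytes, far more than the aggregated data itself.
SELECT v % 1000000 AS k, array_agg(v)
FROM generate_series(1, 3000000) AS v
GROUP BY 1;
```
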


 Actually, though, the patch as given outright breaks things for both
 the grouped and ungrouped cases, because the aggregate no longer
 releases memory when it's done. That's going to result in memory
 bloat not savings, in any situation where the aggregate is executed
 repeatedly.

Really? Can you provide query for which the current and patched code
behave differently?

Looking at array_agg_finalfn (which is the final function for
array_agg), I see it does this:

/*
 * Make the result.  We cannot release the ArrayBuildState because
 * sometimes aggregate final functions are re-executed.  Rather, it
 * is nodeAgg.c's responsibility to reset the aggcontext when it's
 * safe to do so.
 */
result = makeMdArrayResult(state, 1, dims, lbs,
   CurrentMemoryContext,
   false);

i.e. it sets release=false. So I fail to see how the current code
behaves differently from the patch? If it wasn't releasing the memory
before, it's not releasing it now either.

In both cases the memory gets released when the aggcontext gets released
in nodeAgg.c (as explained by the comment in the code).

However, after looking at the code now, I think it's actually wrong to
remove the MemoryContextDelete from makeMdArrayResult(). It does not
make any difference to the array_agg (which sets release=false anyway),
but it makes difference to functions calling makeArrayResult() as that
uses release=true. That however is not called by aggregate functions,
but from regexp_split_to_array, xpath and subplans.

 I think a patch that stood a chance of getting committed would need to
 detect whether the aggregate was being called in simple or grouped
 contexts, and apply different behaviors in the two cases.  And you
 can't just remove the sub-context without providing some substitute
 cleanup mechanism.  Possibly you could keep the context but give it
 some much-more-miserly allocation parameters in the grouped case.

I don't think the patch removes any cleanup mechanism (see above), but
maybe I'm wrong.

Yes, tweaking the parameters depending on the aggregate - whether it's
simple or grouped, or maybe an estimate number of elements in a group -
seems like a good idea.

regards
Tomas


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-04-01 Thread Tom Lane
Tomas Vondra t...@fuzzy.cz writes:
 On 1.4.2014 19:08, Tom Lane wrote:
 Actually, though, the patch as given outright breaks things for both
 the grouped and ungrouped cases, because the aggregate no longer
 releases memory when it's done. That's going to result in memory
 bloat not savings, in any situation where the aggregate is executed
 repeatedly.

 Looking at array_agg_finalfn (which is the final function for
 array_agg), I see it does this:

 /*
  * Make the result.  We cannot release the ArrayBuildState because
  * sometimes aggregate final functions are re-executed.  Rather, it
  * is nodeAgg.c's responsibility to reset the aggcontext when it's
  * safe to do so.
  */
 result = makeMdArrayResult(state, 1, dims, lbs,
CurrentMemoryContext,
false);

 i.e. it sets release=false. So I fail to see how the current code
 behaves differently from the patch?

You're conveniently ignoring the callers that set release=true.
Reverse engineering a query that exhibits memory bloat is left
as an exercise for the reader (but in a quick look, I'll bet
ARRAY_SUBLINK subplans are one locus for problems).

It's possible that it'd work to use a subcontext only if release=true;
I've not dug through the code enough to convince myself of that.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-04-01 Thread Tomas Vondra
On 1.4.2014 20:56, Tom Lane wrote:
 Tomas Vondra t...@fuzzy.cz writes:
 On 1.4.2014 19:08, Tom Lane wrote:
 You're conveniently ignoring the callers that set release=true.
 Reverse engineering a query that exhibits memory bloat is left
 as an exercise for the reader (but in a quick look, I'll bet
 ARRAY_SUBLINK subplans are one locus for problems).

No, I'm not. I explicitly mentioned those cases (although you're right I
concentrated mostly on cases with release=false, because of array_agg).

 It's possible that it'd work to use a subcontext only if
 release=true; I've not dug through the code enough to convince myself
 of that.

Maybe, though 'release' is not available in makeArrayResult() which is
where the memory context needs to be decided. So all the callers would
need to be modified to supply this parameter. But there only ~15 places
where makeArrayResult is called.

regards
Tomas


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] json/jsonb/hstore operator precedence

2014-04-01 Thread Jim Nasby

On 3/18/14, 12:13 PM, Greg Stark wrote:

Fwiw I'm finding myself repeatedly caught up by the operator
precedence rules when experimenting with jsonb:

stark=***# select  segment->'id' as id from flight_segments where
segment->>'marketing_airline_code' <>
segment->>'operating_airline_code' ;
ERROR:  42883: operator does not exist: text <> jsonb
LINE 2: ...segments where segment->>'marketing_airline_code' <> segment...
  ^
HINT:  No operator matches the given name and argument type(s). You
might need to add explicit type casts.
LOCATION:  op_error, parse_oper.c:722
Time: 0.407 ms
stark=***# select  segment->'id' as id from flight_segments where
(segment->>'marketing_airline_code') <>
(segment->>'operating_airline_code') ;
  id
-
  45866185
  95575359


I don't think this is related to the jsonb patch -- json and hstore
have the same behaviour so jsonb is obviously going to follow suit.
The only option right now would be to use a higher precedence operator
like % or ^ for all of these data types which I'm not for. I suspect
it's a pipe dream to think we might be able to override the '.' and
changing the precedence of -> and ->> would be fraught...

I think the best we can do is to highlight it in the docs.

Incidentally it's a good thing there wasn't an implicit cast
text->jsonb. In this case it would have resulted in just a confusing
error of jsonb->boolean not existing.


Wow, that really sucks. :(

What are cases where things would break if we changed the precedence of -> and 
->>? ISTM that's what we really should do if there's some way to manage the 
backwards compatibility...
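
For reference, a server-independent sketch of the trap. The parse shown in
the comments follows from -> , ->> and <> all lexing as generic operators
with the same precedence and left associativity in the current grammar:

```sql
-- Unparenthesized:
--   j ->> 'a' <> j ->> 'b'
-- parses left to right as:
--   ((j ->> 'a') <> j) ->> 'b'
-- and fails with: operator does not exist: text <> jsonb
SELECT (j->>'a') <> (j->>'b') AS differ    -- parentheses restore the intent
FROM (SELECT '{"a": 1, "b": 2}'::jsonb AS j) s;
```
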
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] json/jsonb/hstore operator precedence

2014-04-01 Thread Andrew Dunstan


On 04/01/2014 03:40 PM, Jim Nasby wrote:

On 3/18/14, 12:13 PM, Greg Stark wrote:

Fwiw I'm finding myself repeatedly caught up by the operator
precedence rules when experimenting with jsonb:

stark=***# select  segment->'id' as id from flight_segments where
segment->>'marketing_airline_code' <>
segment->>'operating_airline_code' ;
ERROR:  42883: operator does not exist: text <> jsonb
LINE 2: ...segments where segment->>'marketing_airline_code' <> 
segment...

  ^
HINT:  No operator matches the given name and argument type(s). You
might need to add explicit type casts.
LOCATION:  op_error, parse_oper.c:722
Time: 0.407 ms
stark=***# select  segment->'id' as id from flight_segments where
(segment->>'marketing_airline_code') <>
(segment->>'operating_airline_code') ;
  id
-
  45866185
  95575359


I don't think this is related to the jsonb patch -- json and hstore
have the same behaviour so jsonb is obviously going to follow suit.
The only option right now would be to use a higher precedence operator
like % or ^ for all of these data types which I'm not for. I suspect
it's a pipe dream to think we might be able to override the '.' and
changing the precedence of -> and ->> would be fraught...

I think the best we can do is to highlight it in the docs.

Incidentally it's a good thing there wasn't an implicit cast
text->jsonb. In this case it would have resulted in just a confusing
error of jsonb->boolean not existing.


Wow, that really sucks. :(

What are cases where things would break if we changed the precedence 
of -> and ->>? ISTM that's what we really should do if there's some 
way to manage the backwards compatibility...



There is no provision for setting the precedence of any operators. The 
precedence is set in the grammar, and these all have the same 
precedence. What you're suggesting would be a cure far worse than the 
disease, I strongly suspect. You just need to learn to live with this.


What really bugs me about the example is that <> has a different 
precedence from =, which seems more than odd. The example works just 
fine if you use = instead of <>. But I guess it's been that way for a 
very long time and there's not much to be done about it.


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Ok to flip pg_constraint.condeferrable on 9.1?

2014-04-01 Thread Jerry Sievers
Hackers;  as per $subject...

We have an FK defined on a table large enough, and busy 24x7, as to make
redefining the same constraint a painful solution.

Ran into a case where defining as deferrable initially immediate and
just running one batch job with deferred firing would solve a
concurrency problem that we discovered.

Grabbing a quick exclusive lock on the 2 related tables would not be a
problem if same might help avoid bad side-effects.

Developers are already working on a chunking solution to avoid the
long-running transaction that gave rise to this, but I'd like to
consider this if it's not risky.
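
A hedged sketch of the intended usage, with hypothetical table and
constraint names. (ALTER TABLE ... ALTER CONSTRAINT only exists from 9.4
on; on 9.1 the catalog flip asked about here is the only alternative to
redefining the constraint.)

```sql
-- Once the FK is marked DEFERRABLE INITIALLY IMMEDIATE, ordinary sessions
-- are unaffected; only the batch job opts in to deferred checking:
BEGIN;
SET CONSTRAINTS child_parent_fk DEFERRED;
-- ... long-running batch updates on the two related tables ...
COMMIT;  -- the FK is checked once here, at commit, not per statement
```
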

Comments?

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] get_fn_expr_variadic considered harmful

2014-04-01 Thread Robert Haas
On Tue, Apr 1, 2014 at 2:23 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Apr 1, 2014 at 12:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 In bug #9817 there's a complaint that the planner fails to consider
 these expressions equivalent:
 foo('a'::text, 'b'::text)
 foo(variadic array['a'::text, 'b'::text])
 when foo() is declared as taking variadic text[].

 My first reaction to this was who cares? after all, the user should
 just write the expression the same way both times and then they won't
 have this problem.  But after going and looking at the bug report I
 see that the user wrote it the first way consistently, but pg_dump
 blithely rewrote it to the second way.  I'm disinclined to view that
 as a planner problem; it seems to me to be a pg_dump or ruleutils bug.
  If those two things don't have the same parse representation, then
 pg_dump has no business treating them as equivalent - even if we were
 to put enough smarts into the planner to paper over that
 non-equivalence.

 The point is that they *were* equivalent before 9.3, and so ruleutils
 was entirely within its rights to not worry about which way it dumped
 the expression; indeed, it couldn't, because the information was not
 there as to which way the call had been written originally.  I do not
 think it's appropriate to blame ruleutils for taking advantage of this
 equivalence, because more than likely user applications have too.

Sure, it was reasonable at the time, certainly.  But they're not
equivalent any more.  Either they've got to become equivalent again,
or you can't dump one as the other.  I'm happy with either one, but
the first rule of correct dumping is that what you dump out must, on
reload, produce an equivalent database.

 Or in other words, what I wrote above is a more general statement of the
 problem than what was complained of in bug #9817 ... but if we just hack
 ruleutils to dump the cases differently, we will fail to fix the more
 general problem.  So we can still expect future bug reports about that,
 because it worked as-expected for years before 9.3.

 There's also the point that even if we changed ruleutils' behavior
 now, this would not fix existing dump files that have considered the
 two forms interchangeable ever since VARIADIC existed.  And we
 generally try hard to not break existing dump files.  To be even
 more to the point: what you propose is incapable of fixing the precise
 problem stated in the bug report, because it's complaining about a
 dump taken from 9.1, and there is *no* way to make 9.1 produce a
 dump that only uses VARIADIC if the original call did.  It hasn't
 got the information.  Even using a newer version of pg_dump wouldn't
 help that.

Well, that argues for the choice of trying to make them equivalent
again, I suppose, but it sounds like there are some nasty edge cases
that won't easily be filed down.  I think your idea of redefining
funcvariadic to be true only for VARIADIC ANY is probably a promising
approach to that solution, but as you say it leaves some problems
unsolved.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] json/jsonb/hstore operator precedence

2014-04-01 Thread Josh Berkus
On 04/01/2014 01:07 PM, Andrew Dunstan wrote:
 What really bugs me about the example is that <> has a different
 precedence from =, which seems more than odd. The example works just
 fine if you use = instead of <>. But I guess it's been that way for a
 very long time and there's not much to be done about it.

... and it would probably break umpty-zillion existing apps if we
changed precedence.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Doing better at HINTing an appropriate column within errorMissingColumn()

2014-04-01 Thread Jim Nasby

On 4/1/14, 1:04 PM, Peter Geoghegan wrote:

It strains credulity to think that this
patch alone would have that effect, but there might be quite a few
similar improvements that are possible.  So I think it would be good
to consider how far we want to go in this direction and where we think
we might want to stop.  That's not to say, let's not ever do this,
just, let's think carefully about where we want to end up.

Fair enough.


I agree with the concern, but also have to say that I can't count how many 
times I could have used this. A big +1, at least in this case.
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] json/jsonb/hstore operator precedence

2014-04-01 Thread Jim Nasby

On 4/1/14, 3:07 PM, Andrew Dunstan wrote:

What are cases where things would break if we changed the precedence of -> and 
->>? ISTM that's what we really should do if there's some way to manage the 
backwards compatibility...



There is no provision for setting the precedence of any operators. The 
precedence is set in the grammar, and these all have the same precedence. What 
you're suggesting would be a cure far worse than the disease, I strongly suspect. 
You just need to learn to live with this.

What really bugs me about the example is that <> has a different precedence from =, 
which seems more than odd. The example works just fine if you use = instead of <>. 
But I guess it's been that way for a very long time and there's not much to be done about 
it.


I'm confused... first you say there's no precedence and then you're saying that 
there is? Which is it?

ISTM that most languages set the priority of de-referencing operators to be 
quite high, so I don't see how that would be a disaster?

Of course, changing the precedence of = and <> certainly would be a disaster; 
I'm not suggesting that.
--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] json/jsonb/hstore operator precedence

2014-04-01 Thread Andrew Dunstan


On 04/01/2014 05:42 PM, Jim Nasby wrote:

On 4/1/14, 3:07 PM, Andrew Dunstan wrote:
What are cases where things would break if we changed the precedence 
of -> and ->>? ISTM that's what we really should do if there's some 
way to manage the backwards compatibility...



There is no provision for setting the precedence of any operators. 
The precedence is set in the grammar, and these all have the same 
precedence. What you're suggesting would be a cure far worse than the 
disease, I strongly suspect. You just need to learn to live with this.


What really bugs me about the example is that <> has a different 
precedence from =, which seems more than odd. The example works just 
fine if you use = instead of <>. But I guess it's been that way for a 
very long time and there's not much to be done about it.


I'm confused... first you say there's no precedence and then you're 
saying that there is? Which is it?


No I didn't say there was no precedence. Please reread what I said. I 
said there was no provision for setting the precedence. There is 
precedence of course, but it's hardcoded.




ISTM that most languages set the priority of de-referencing operators 
to be quite high, so I don't see how that would be a disaster?


The way the grammar works ALL the composite operators have the same 
precedence. It has no notion that -> is a dereference operator. You're 
suggesting something without actually looking at the code. Look at 
gram.y and scan.l and you might understand.




Of course, changing the precedence of = and <> certainly would be a 
disaster; I'm not suggesting that.


There is arguably nothing wrong with the precedence of -> and ->>. The 
reason for the problem Greg reported is that <> probably has its 
precedence set too low. And no, we can't change it.


cheers

andrew





--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] using arrays within structure in ECPG

2014-04-01 Thread Merlin Moncure
On Tue, Apr 1, 2014 at 11:50 AM, Michael Meskes mes...@postgresql.org wrote:
 Hi Ashutosh,

 I tried to fix the offset problem. PFA the patch. It does solve the
 problem of setting wrong offset in ECPGdo() call.

 Thanks, looks correct to me.

 But then there is problem of interpreting the result from server as an
 array within array of structure. The problem is there is in
 ecpg_get_data(). This function can not understand that the field is an
 array of integers (or for that matter array of anything) and store all
 the values in contiguous memory at the given address.

 I guess I know where that comes from, without actually looking at the
 code, though. Nested arrays are not supported by ecpg and the
 precompiler spits out an error message, just check preproc/type.c.
 However, in your example you have the struct essentially sandwiched
 between the arrays, so the (too) simple check in that file doesn't
 notice it; the implementation is nevertheless lacking.

 I'm sorry, but this sounds like a missing feature bug.

Small aside:

I've often wondered if the right long term approach is to abstract
backend type code into a shared library that both the server and
(optionally) the client would link with.  That would make extending
support to exotic types in ecpg much easier.

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Including replication slot data in base backups

2014-04-01 Thread Josh Berkus
On 04/01/2014 05:24 AM, Michael Paquier wrote:
 Hi all,
 
 As of now, pg_basebackup creates an empty repository for pg_replslot/
 in a base backup, forcing the user to recreate slots on other nodes of
 the cluster with pg_create_*_replication_slot, or copy pg_replslot
 from another node. This is not really user-friendly especially after a
 failover where a given slave may not have the replication slot
 information of the master that it is replacing.
 
 The simple patch attached adds a new option in pg_basebackup, called
 --replication-slot, allowing to include replication slot information
 in a base backup. This is done by extending the command BASE_BACKUP in
 the replication protocol.
 
 As it is too late for 9.4, I would like to add it to the first commit
 fest of 9.5. Comments are welcome.

Assuming it works, this seems like something we need to fix for 9.4.  No?


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] It seems no Windows buildfarm members are running find_typedefs

2014-04-01 Thread Tom Lane
The current typedefs list seems to be lacking any Windows-only typedefs.
Noticed while trying to pgindent postmaster.c.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Including replication slot data in base backups

2014-04-01 Thread Michael Paquier
On Tue, Apr 1, 2014 at 11:59 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2014-04-01 16:45:46 +0200, Magnus Hagander wrote:
 On Tue, Apr 1, 2014 at 2:24 PM, Michael Paquier
 michael.paqu...@gmail.comwrote:
  As of now, pg_basebackup creates an empty repository for pg_replslot/
  in a base backup, forcing the user to recreate slots on other nodes of
  the cluster with pg_create_*_replication_slot, or copy pg_replslot
  from another node. This is not really user-friendly especially after a
  failover where a given slave may not have the replication slot
  information of the master that it is replacing.

 What exactly is your use case for copying the slots?
I had in mind users who want to keep around base backups that could
be used for recovery operations like PITR with a base backup and
archives. It does not apply directly to a live standby, as it would
mean that this standby would be configured to retain WAL for other slaves
connected to the master.
Regards
-- 
Michael




Re: [HACKERS] It seems no Windows buildfarm members are running find_typedefs

2014-04-01 Thread Andrew Dunstan


On 04/01/2014 08:53 PM, Tom Lane wrote:

The current typedefs list seems to be lacking any Windows-only typedefs.
Noticed while trying to pgindent postmaster.c.






Hmm. odd. will check.

cheers

andrew





Re: [HACKERS] [BUGS] BUG #9518: temporary login failure - missing pg_hba entry

2014-04-01 Thread Tom Lane
I wrote:
 IOW, it looks to me like intermittent failures in the reverse DNS lookup
 could disable matching by hostname, and nothing would be said in the
 postmaster log.  Why is there no complaint if check_hostname's call to
 pg_getnameinfo_all (line 600 in HEAD) fails?

 After sleeping on it, I think probably the reason it is like that is a
 desire to not clutter the postmaster log if there are some legitimate
 clients without rDNS entries.  That is, suppose pg_hba.conf has

   host foo.bar.com ...
   host 192.168.168.1 ...

 and you've not bothered to create a reverse-DNS entry for 192.168.168.1.
 We will try (and fail) to look up the rDNS entry while considering the
 foo.bar.com line.  We certainly don't want a failure there to prevent us
 from reaching the 192.168.168.1 line, and we don't really want to clutter
 the postmaster log with a bleat about it, either.  Hence the lack of any
 error logging in the existing code.  (The later cross-check on whether
 the forward DNS matches does have an error report, which maybe isn't such
 a great thing either from this standpoint.)

 The problem of course is that if the rDNS failure prevents us from
 matching to *any* line, we exit with no error more helpful than
 "missing pg_hba entry", which is not very desirable in this case.

 I guess we could do something like remember the fact that we tried and
 failed to do an rDNS lookup, and report it as DETAIL in the eventual
 "missing pg_hba entry" report.  Not quite sure if it's worth the trouble
 --- any thoughts?

 Another objection to the code as it stands is that if there are multiple
 pg_hba lines containing hostnames, we'll repeat the failing rDNS lookup
 at each one.  This is at best a huge waste of cycles (multiple network
 roundtrips, if the DNS server isn't local), and at worst inconsistent
 if things actually are intermittent and a later lookup attempt succeeds.
 I think we want to fix it to be sure that there's exactly one rDNS lookup
 attempt, occurring at the first line with a hostname.

Attached is a draft patch to deal with these issues.  While testing it,
I realized that the existing code is wrong in a couple more ways than
I'd thought.  In the first place, it does not specify the NI_NAMEREQD
flag to getnameinfo, which means that if getnameinfo can't resolve a
hostname for the client IP address, it'll just quietly return the numeric
address instead of complaining.  That is certainly not what we want here.
In the second place, postmaster.c happily forced the result of its own
getnameinfo call into port->remote_hostname, despite the lack of
NI_NAMEREQD in that call, not to mention the fact that it would do so even
if that call had failed outright (not that that's too likely without
NI_NAMEREQD, I guess).

Unfortunately, I don't think we want to add NI_NAMEREQD to postmaster.c's
call, since then we'd get nothing usable at all for port->remote_host for
a client without an rDNS record.  This means that we can't realistically
piggyback on that call to set port->remote_hostname.  I thought about
trying to detect whether the returned string was a numeric IP address or
not, but that doesn't look so easy, because IPv6 addresses can contain
hex characters.  So for instance a simple strspn character-set check
wouldn't be able to tell that "abc123.de" wasn't numeric.  So I just
stripped out that code in the attached patch.

I'm tempted to back-patch this, because AFAICS the misbehaviors mentioned
here are flat-out bugs, independently of whether the error logging is
improved or not.  Changing struct Port in back branches is a bit risky,
but we could put the added field at the end in the back branches.

Thoughts?

regards, tom lane

diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 31ade0b..eb4c4ca 100644
*** a/src/backend/libpq/auth.c
--- b/src/backend/libpq/auth.c
*** ClientAuthentication(Port *port)
*** 425,439 
     NI_NUMERICHOST);
  
  #define HOSTNAME_LOOKUP_DETAIL(port) \
! (port->remote_hostname  \
!  ? (port->remote_hostname_resolv == +1	\
! 	? errdetail_log("Client IP address resolved to \"%s\", forward lookup matches.", port->remote_hostname) \
! 	: (port->remote_hostname_resolv == 0\
! 	   ? errdetail_log("Client IP address resolved to \"%s\", forward lookup not checked.", port->remote_hostname) \
! 	   : (port->remote_hostname_resolv == -1			\
! 		  ? errdetail_log("Client IP address resolved to \"%s\", forward lookup does not match.", port->remote_hostname) \
! 		  : 0)))		\
!  : 0)
  
  if (am_walsender)
  {
--- 425,449 
     NI_NUMERICHOST);
  
  #define HOSTNAME_LOOKUP_DETAIL(port) \
! (port->remote_hostname ? \
!  (port->remote_hostname_resolv == +1 ? \
!   errdetail_log("Client IP address resolved to \"%s\", forward lookup matches.", \
! port->remote_hostname) : \
!   port->remote_hostname_resolv == 0 ? \
!   errdetail_log("Client IP address resolved 

Re: [HACKERS] using arrays within structure in ECPG

2014-04-01 Thread Ashutosh Bapat
So, you are saying that we should try to catch such errors and report
them at precompile time. That's better than silently corrupting the data.


On Tue, Apr 1, 2014 at 10:20 PM, Michael Meskes mes...@postgresql.orgwrote:

 Hi Ashutosh,

  I tried to fix the offset problem. PFA the patch. It does solve the
  problem of setting wrong offset in ECPGdo() call.

 Thanks, looks correct to me.

  But then there is the problem of interpreting the result from the server
  as an array within an array of structures. The problem is in
  ecpg_get_data(). This function cannot understand that the field is an
  array of integers (or for that matter an array of anything) and store all
  the values in contiguous memory at the given address.

 I guess I know where that comes from, without actually looking at the
 code, though. Nested arrays are not supported by ecpg and the
 precompiler spits out an error message, just check preproc/type.c.
 However, in your example you have the struct essentially sandwiched
 between the arrays, and the (too) simple check in that file doesn't
 notice, even though the implementation is nevertheless lacking.

 I'm sorry, but this sounds like a missing feature bug.

 Michael
 --
 Michael Meskes
 Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
 Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
 Jabber: michael.meskes at gmail dot com
 VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL




-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] It seems no Windows buildfarm members are running find_typedefs

2014-04-01 Thread Andrew Dunstan


On 04/01/2014 09:22 PM, Andrew Dunstan wrote:


On 04/01/2014 08:53 PM, Tom Lane wrote:

The current typedefs list seems to be lacking any Windows-only typedefs.
Noticed while trying to pgindent postmaster.c.






Hmm. odd. will check.




It's apparently causing the buildfarm to crash, which is why I must have 
disabled it. I'll chase that down tomorrow.


cheers

andrew






Re: Fwd: [HACKERS] Proposal: variant of regclass

2014-04-01 Thread Amit Kapila
On Mon, Mar 31, 2014 at 7:08 PM, Yugo Nagata nag...@sraoss.co.jp wrote:
 Hi Amit Kapila,

 Thank you for your reviewing. I updated the patch to v5.

I have checked the latest version and found a few minor improvements that
are required:

1.
! if (!missing_ok)
! ereport(ERROR,
! (errcode(ERRCODE_UNDEFINED_OBJECT),
! errmsg("type \"%s\" does not exist",
! TypeNameToString(typeName)),
! parser_errposition(NULL, typeName->location)));

pfree(buf.data); seems to be missing in the error cases.
Do you see any problem if we call it before calling LookupTypeName()
instead of calling it at multiple places?

2.
+raising an error. In addition, neither <function>to_regproc</function> nor
+<function>to_regoper</function> doesn't raise an error when more than one
+object are found.

No need to use the word *doesn't* in the above sentence.


3.
+  * If the type name is not found, return InvalidOid if missing_ok
+  * = true, otherwise raise an error.

I can understand the above comment, but I think it is better to improve it
by referring to other such instances. I like the explanation of missing_ok
in the function header of relation_openrv_extended(). Could you please check
other places and improve this comment.

4. How is the user going to distinguish between the object-not-found
   and more-than-one-object cases?
   Do you think such a distinction is not required for users of this API?

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

