Re: [HACKERS] tracking commit timestamps

2014-12-15 Thread Michael Paquier
On Wed, Dec 10, 2014 at 6:50 PM, Noah Misch n...@leadboat.com wrote:
 On Mon, Dec 08, 2014 at 02:23:39AM +0100, Petr Jelinek wrote:
 On 08/12/14 00:56, Noah Misch wrote:
 The commit_ts test suite gives me the attached diff on a 32-bit MinGW build
 running on 64-bit Windows Server 2003.  I have not checked other Windows
 configurations; the suite does pass on GNU/Linux.

 Hmm, I wonder if < now() needs to be changed to <= now() in those queries
 to make them work correctly on that platform. I don't have a machine with
 that environment handy right now, so I would appreciate it if you could try
 that; in case you don't have time, I will try to set something up later...

 I will try that, though perhaps not until next week.

FWIW, I just tried that with MinGW-32 and I can see the error on Win7.
I also checked that changing < now() to <= now() fixed the
problem, so your assumption was right, Petr.
Regards,
-- 
Michael


20141215_committs_mingw_fix.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: ctidscan as an example of custom-scan (Re: [HACKERS] [v9.5] Custom Plan API)

2014-12-15 Thread Michael Paquier
On Mon, Dec 15, 2014 at 4:55 PM, Kouhei Kaigai kai...@ak.jp.nec.com wrote:
 I'm not certain whether we should have this functionality in contrib
 from the perspective of the workloads it can help, but its major worth is
 as an example of the custom-scan interface.
worker_spi is now in src/test/modules. We may add it there as well, no?
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Join push-down support for foreign tables

2014-12-15 Thread Shigeru Hanada
Hi hackers,

I'm working on $SUBJECT and would like to get comments about the
design.  Attached patch is for the design below.  Note that the patch
requires Kaigai-san's custom foreign join patch [1].

[1] 
http://www.postgresql.org/message-id/9a28c8860f777e439aa12e8aea7694f80108c...@bpxm15gp.gisp.nec.co.jp

Joins to be pushed down
=======================
We have two levels of decision about join push-down, core and FDW.  I
think core should allow FDWs to push down joins as much as possible, as
long as doing so doesn't break the semantics of the join.  In any case,
each FDW should decide whether the join can be pushed down on the basis
of its own capabilities.

Here is the list of checks which should be done in core:

1. Join source relations
All foreign tables used in a join should be managed by one foreign
data wrapper.  I once proposed that all source tables should belong to
one server, because at that time I assumed that FDWs use SERVER to
express the physical location of the data source.  But Robert's comment
gave me the idea that SERVER is not important for some FDWs, so now I
think the check for server matching should be done by each FDW.

USER MAPPING is another important attribute of a foreign scan/join, and
IMO it should be checked by the FDW, because some FDWs don't require a
USER MAPPING.  If an FDW wants to check user mappings, all tables in the
join should belong to the same server and have the same
RangeTblEntry#checkAsUser to ensure that only one user mapping is
derived.

2. Join type
The join type can be any type except JOIN_UNIQUE_OUTER and
JOIN_UNIQUE_INNER, though most FDWs would support only INNER and OUTER.
Pushing down CROSS joins might seem inefficient, because obviously a
CROSS JOIN always produces more results than retrieving all rows from
each foreign table separately.  However, some FDWs might be able to
accelerate such joins with a cache or something similar, so I think we
should leave that decision to the FDWs.

Here is the list of checks which should be done in postgres_fdw:

1. Join source relations
As described above, postgres_fdw (and most SQL-based FDWs) needs to
check that 1) all foreign tables in the join belong to the same server,
and 2) all foreign tables have the same checkAsUser (a sketch of this
check appears right after this list).
In addition to that, I add the extra limitation that both inner and outer
relations should be plain foreign tables, not the result of a foreign
join.  This limitation keeps the SQL generator simple.  Fundamentally it's
possible to join even join relations, so N-way joins are listed as an
enhancement item below.

2. Join type
In the first proposal, postgres_fdw allows INNER and OUTER joins to be
pushed down.  CROSS, SEMI and ANTI joins would have far fewer use cases.

3. Join conditions and WHERE clauses
Join conditions should consist of semantically safe expressions, where
"semantically safe" means the same as for WHERE clause push-down.
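
For illustration, here is a hedged sketch of the server/user check described
in item 1 above; the helper name is invented and the exact API usage is an
assumption, not the attached patch:

#include "postgres.h"
#include "foreign/foreign.h"
#include "nodes/parsenodes.h"
#include "nodes/relation.h"

/*
 * Sketch: return true if the two base foreign tables of a proposed join
 * come from the same server and are checked as the same user, so that
 * exactly one user mapping can be derived for the pushed-down join.
 */
static bool
foreign_join_has_single_owner(PlannerInfo *root,
							  Index outer_rti, Index inner_rti)
{
	RangeTblEntry *outer_rte = planner_rt_fetch(outer_rti, root);
	RangeTblEntry *inner_rte = planner_rt_fetch(inner_rti, root);
	ForeignTable *outer_ft = GetForeignTable(outer_rte->relid);
	ForeignTable *inner_ft = GetForeignTable(inner_rte->relid);

	return outer_ft->serverid == inner_ft->serverid &&
		outer_rte->checkAsUser == inner_rte->checkAsUser;
}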

Planned enhancements for 9.5

These features will be proposed as enhancements, hopefully in the 9.5
development cycle, but probably in 9.6.

1. Remove unnecessary columns from the SELECT clause
Columns which are used only in join conditions can be removed from
the target list, as postgres_fdw does in simple foreign scans.

2. Support N-way joins
Mostly because of the difficulty of SQL generation, I didn't add support
for N-way joins.

3. Proper cost estimation
Currently postgres_fdw always gives 0 as the cost of a foreign join,
as a compromise.  This is because estimating the cost of each join
without a round-trip (EXPLAIN) is not easy.  A rough idea I currently
have is to use local statistics, but determining the join method used
at the remote side might require running the whole planner for the join
subtree.

Regards,
-- 
Shigeru HANADA


join_pushdown.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: ctidscan as an example of custom-scan (Re: [HACKERS] [v9.5] Custom Plan API)

2014-12-15 Thread Kouhei Kaigai
 On Mon, Dec 15, 2014 at 4:55 PM, Kouhei Kaigai kai...@ak.jp.nec.com wrote:
  I'm not certain whether we should have this functionality in contrib
  from the perspective of the workloads it can help, but its major worth is
  as an example of the custom-scan interface.
 worker_spi is now in src/test/modules. We may add it there as well, no?

Hmm, that makes sense to me. Does it also make sense to add a test case to
the core regression tests?

Thanks,
--
NEC OSS Promotion Center / PG-Strom Project
KaiGai Kohei kai...@ak.jp.nec.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] tracking commit timestamps

2014-12-15 Thread Petr Jelinek

On 15/12/14 09:12, Michael Paquier wrote:

On Wed, Dec 10, 2014 at 6:50 PM, Noah Misch n...@leadboat.com wrote:

On Mon, Dec 08, 2014 at 02:23:39AM +0100, Petr Jelinek wrote:

On 08/12/14 00:56, Noah Misch wrote:

The commit_ts test suite gives me the attached diff on a 32-bit MinGW build
running on 64-bit Windows Server 2003.  I have not checked other Windows
configurations; the suite does pass on GNU/Linux.


Hmm, I wonder if < now() needs to be changed to <= now() in those queries
to make them work correctly on that platform. I don't have a machine with
that environment handy right now, so I would appreciate it if you could try
that; in case you don't have time, I will try to set something up later...


I will try that, though perhaps not until next week.


FWIW, I just tried that with MinGW-32 and I can see the error on Win7.
I also checked that changing < now() to <= now() fixed the
problem, so your assumption was right, Petr.
Regards,



Cool, thanks, I think it was the time granularity problem in Windows.

--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Async execution of postgres_fdw.

2014-12-15 Thread Kyotaro HORIGUCHI
Hello, this is the 2nd session of 'introducing parallelism using
postgres_fdw'.

The two patches attached are as follows:

- 0001-Async-exec-of-postgres_fdw.patch
  Main patch, which includes all functions.

- 0002-rename-PGConn-variable.patch
  Renaming the variable conn for readability. No functional
  effect.


* Outline of this patch

After some consideration of the previous discussion and comments from
others, I judged that the original (WIP) patch was overdone as a first
step, so I reduced the patch to minimal functionality. The new patch does
the following:

- Wrapping PGconn by PgFdwConn in order to handle multiple scans
  on one connection.

- The core async logic was added in fetch_more_data().

- Invoking remote commands asynchronously in ExecInitForeignScan.

- Canceling the async invocation if any other foreign scan, modify, or
  delete uses the same connection.

Cancellation is done by immediately fetching the result of the
already-invoked async command.
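
For reference, a minimal sketch of the connection wrapper as it appears in
the attached patch; the field names are taken from the diff, while the
element type of async_scan is an assumption:

#include "libpq-fe.h"			/* PGconn */

/* Possible asynchronous states of a wrapped connection */
typedef enum PgFdwConnAsyncState
{
	PGFDW_CONN_IDLE,			/* no asynchronous command in flight */
	PGFDW_CONN_ASYNC_RUNNING	/* async query sent, result not yet fetched */
} PgFdwConnAsyncState;

/* Wrapper that lets multiple scans share one libpq connection */
typedef struct PgFdwConn
{
	PGconn	   *conn;			/* the underlying libpq connection */
	int			nscans;			/* number of scans using this connection */
	PgFdwConnAsyncState async_state;	/* state of the async command */
	void	   *async_scan;		/* scan that owns the running async command;
								 * presumably a ForeignScanState pointer */
} PgFdwConn;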


* Where this patch will be effective.

With the upcoming inheritance-partitioning feature, this patch enables
starting and running foreign scans asynchronously. It will be more
effective for longer turnaround or remote startup times, and for larger
numbers of foreign servers. There should be no negative performance
effect in other situations.


* Concerns about this patch.

- This breaks the assumption that a scan starts at ExecForeignScan,
  not ExecInitForeignScan, which might cause some problems.

- The error reporting code in do_sql_command is quite ugly..



regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
From 4b56fcd0687172e3cccb329bc17e78935657f58f Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi horiguchi.kyot...@lab.ntt.co.jp
Date: Fri, 28 Nov 2014 10:52:41 +0900
Subject: [PATCH 1/2] Async exec of postgres_fdw.

---
 contrib/postgres_fdw/connection.c   | 102 ---
 contrib/postgres_fdw/postgres_fdw.c | 191 
 contrib/postgres_fdw/postgres_fdw.h |  28 +-
 3 files changed, 242 insertions(+), 79 deletions(-)

diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 116be7d..8b1c738 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -44,7 +44,7 @@ typedef struct ConnCacheKey
 typedef struct ConnCacheEntry
 {
 	ConnCacheKey key;			/* hash key (must be first) */
-	PGconn	   *conn;			/* connection to foreign server, or NULL */
+	PgFdwConn	*conn;			/* connection to foreign server, or NULL */
 	int			xact_depth;		/* 0 = no xact open, 1 = main xact open, 2 =
 								 * one level of subxact open, etc */
 	bool		have_prep_stmt; /* have we prepared any stmts in this xact? */
@@ -93,7 +93,7 @@ static void pgfdw_subxact_callback(SubXactEvent event,
  * be useful and not mere pedantry.  We could not flush any active connections
  * mid-transaction anyway.
  */
-PGconn *
+PgFdwConn *
 GetConnection(ForeignServer *server, UserMapping *user,
 			  bool will_prep_stmt)
 {
@@ -160,16 +160,36 @@ GetConnection(ForeignServer *server, UserMapping *user,
 		entry->xact_depth = 0;	/* just to be sure */
 		entry->have_prep_stmt = false;
 		entry->have_error = false;
-		entry->conn = connect_pg_server(server, user);
+
+		/* This should be in the same memory context as the hashtable */
+		entry->conn = 
+			(PgFdwConn *) MemoryContextAllocZero(CacheMemoryContext,
+												 sizeof(PgFdwConn));
+		
 		elog(DEBUG3, "new postgres_fdw connection %p for server \"%s\"",
-			 entry->conn, server->servername);
+			 entry->conn->conn, server->servername);
 	}
 
+	if (entry->conn->conn == NULL)
+	{
+		entry->conn->conn = connect_pg_server(server, user);
+		entry->conn->nscans = 0;
+		entry->conn->async_state = PGFDW_CONN_IDLE;
+		entry->conn->async_scan = NULL;
+	}
 	/*
 	 * Start a new transaction or subtransaction if needed.
 	 */
 	begin_remote_xact(entry);
 
+	/*
+	 * Cancel async query if there's another foreign scan node sharing this
+	 * connection.
+	 */
+	if (++entry->conn->nscans > 1 &&
+		entry->conn->async_state == PGFDW_CONN_ASYNC_RUNNING)
+		fetch_more_data(entry->conn->async_scan);
+
 	/* Remember if caller will prepare statements */
 	entry->have_prep_stmt |= will_prep_stmt;
 
@@ -182,7 +202,7 @@ GetConnection(ForeignServer *server, UserMapping *user,
 static PGconn *
 connect_pg_server(ForeignServer *server, UserMapping *user)
 {
-	PGconn	   *volatile conn = NULL;
+	PGconn   *volatile conn = NULL;
 
 	/*
 	 * Use PG_TRY block to ensure closing connection on error.
@@ -355,7 +375,12 @@ do_sql_command(PGconn *conn, const char *sql)
 
 	res = PQexec(conn, sql);
 	if (PQresultStatus(res) != PGRES_COMMAND_OK)
-		pgfdw_report_error(ERROR, res, conn, true, sql);
+	{
+		PgFdwConn tmpfdwconn;
+
+		tmpfdwconn.conn = conn;
+		pgfdw_report_error(ERROR, res, &tmpfdwconn, true, sql);
+	}
 	PQclear(res);
 }
 
@@ -380,13 +405,13 @@ begin_remote_xact(ConnCacheEntry *entry)
 		const char *sql;
 
 		elog(DEBUG3, "starting remote transaction on connection %p",
-			 entry->conn);
+			 entry->conn->conn);
 
 		if 

Re: [HACKERS] pg_rewind in contrib

2014-12-15 Thread Samrat Revagade
On Sat, Dec 13, 2014 at 10:49 PM, David Fetter da...@fetter.org wrote:

 On Fri, Dec 12, 2014 at 10:06:32AM -0500, Tom Lane wrote:
  Heikki Linnakangas hlinnakan...@vmware.com writes:
   I'd like to include pg_rewind in contrib.
 
  I don't object to adding the tool as such, but let's wait to see
  what happens with Peter's proposal to move contrib command-line
  tools into src/bin/.  If it should be there it'd be less code churn
  if it went into there in the first place.

 +1 for putting it directly in src/bin.



Yeah, +1 for putting it under src/bin.


Re: [HACKERS] Escaping from blocked send() reprised.

2014-12-15 Thread Kyotaro HORIGUCHI
Since I don't have a clear idea how to promote this, I will rework it
and come back with a new patch based on Andres' four patches.

 Hmm.. Sorry for my stupidity.
 
  Why is that necessary? It seems really rather wrong to make
  BIO_set_retry_write() dependent on ProcDiePending? Especially as, at
  least in my testing, it's not even required because the be_tls_write()
  can just check the error properly?
 
 I mistook the previous conversation as saying that it doesn't work as
 expected. I confirmed that it works fine.
 
 After all, it works as I expected. The parameter for
 ProcessClientWriteInterrupt() looks somewhat uneasy, but patch
 4 looks fine as a whole. Do you have anything to worry about in
 the patch?

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] alter user/role CURRENT_USER

2014-12-15 Thread Kyotaro HORIGUCHI
Thank you.

 A new patch has been sent here but no review occurred, hence moving
 this item to CF 2014-12.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Escaping from blocked send() reprised.

2014-12-15 Thread Andres Freund
On 2014-12-15 18:19:26 +0900, Kyotaro HORIGUCHI wrote:
 Since I don't have clear idea how to promote this, I will remake
 and be back with new patch based on Andres' for patches.

Do my patches miss any functionality you want?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] On partitioning

2014-12-15 Thread Amit Langote

Claudio Freire wrote:
 On Sun, Dec 14, 2014 at 11:12 PM, Amit Langote
 langote_amit...@lab.ntt.co.jp wrote:
  On egress you need some direct way to compare the scan quals with the
  partitioning values.  I would imagine this to be similar to how scan
  quals are compared to the values stored in a BRIN index: each scan qual
  has a corresponding operator strategy and a scan key, and you can say
  aye or nay based on a small set of operations that can be run
  cheaply, again without any proof or running arbitrary expressions.
 
 
  My knowledge of this is far from perfect, though, so to clear up any
 confusion -
 
  As far as planning is concerned, I could not imagine how the index access
 method way of pruning partitions could be made to work. Of course, I may
 be missing something.
 
 Let me be overly verbose, don't take it as patronizing, just answering
 in lots of detail why this could be a good idea to try.
 

Thanks for explaining. It helps.

 Normal indexes store a pointer for each key value of sorts. So B-Tree
 gets you a set of tids for each key, and so does GIN and hash.
 
 But BRIN is different in that it does the mapping differently. BRIN
 stores a compact, approximate representation of the set of keys within
 a page range. It can tell with some degree of (in)accuracy whether a
 key or key range could be part of that page range or not. The way it
 does this is abstracted out, but at its core, it stores a compressed
 representation of the key set that takes a constant amount of bits to
 store, and no more, no matter how many elements. What changes as the
 set of elements it represents grows is its accuracy.
 
 Currently, BRIN only supports min-max representations. It will store,
 for each page range, the minimum and maximum of some columns, and when
 you query it, you can compare range vs range, and discard whole page
 ranges.
 
 Now, what are partitions, if not page ranges?

Yes, I can see a partition as a page range. The fixed summary info in BRIN's 
terms would be the range bounds in case this is a range partition, the list of 
values in case this is a list partition, and the hash value in case this is a 
hash partition.

There is debate on the topic, but each of these partitions also happens to be a 
separate relation. IIUC, BRIN is an access method for a relation (say, the 
top-level partitioned relation) that comes into play in the executor if the 
planner picks it as the preferred access method. I cannot see a way to 
generalize it further and make it support each block range as a separate 
relation and then use it for partition pruning in the planner. This is assuming 
a partitioned relation is planned as an Append node which contains a list of 
plans for the surviving partition relations based on, say, restriction quals.

I may be thinking of BRIN as a whole as not being generalized enough, but I may 
be wrong. Could you point it out if so?

 A BRIN tuple is a min-max pair. But BRIN is more general; it could use
 other data structures to hold that compressed representation, if
 someone implemented them. Like bloom filters [0].
 
 A BRIN index is a complex data structure because it has to account for
 physically growing tables, but all the complexities vanish when you
 fix a block range (the BR in BRIN) to a partition. Then, a mere
 array of BRIN tuples would suffice.
 
 BRIN already contains the machinery to turn quals into something that
 filters out entire partitions, if you provide the BRIN tuples.
 

IIUC, that machinery comes into play when, say, a Bitmap Heap scan starts, 
right?

 And you could even effectively matain a BRIN index for the partitions
 (just a BRIN tuple per partition, dynamically updated with every
 insertion).
 
 If you do that, you start with empty partitions, and each insert
 updates the BRIN tuple. Avoiding concurrency loss in this case would
 be tricky, but in theory this could allow very general partition
 exclusion. In fact it could even work with constraint exclusion right
 now: you'd have a single-tuple BRIN index for each partition and
 benefit from it.
 
 But you don't need to pay the price of updating BRIN indexes, as
 min-max tuples for each partition can be produced while creating the
 partitions if the syntax already provides the information. Then, it's
 just a matter of querying this meta-data which just happens to have
 the form of a BRIN tuple for each partition.
 

Thanks,
Amit




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sequence Access Method WIP

2014-12-15 Thread Petr Jelinek

On 10/12/14 03:33, Petr Jelinek wrote:

On 24/11/14 12:16, Heikki Linnakangas wrote:

About the rough edges:
- The AlterSequence code is not the prettiest around, as we now have to
create a new relation when the sequence AM is changed, and I don't know
how to do that nicely
- I am not sure if I did the locking, command order and dependency
handling in AlterSequence correctly


This version does AlterSequence differently and better. Didn't attach 
the gapless sequence again as that one is unchanged.



--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
diff --git a/src/backend/access/Makefile b/src/backend/access/Makefile
index 21721b4..818da15 100644
--- a/src/backend/access/Makefile
+++ b/src/backend/access/Makefile
@@ -8,6 +8,7 @@ subdir = src/backend/access
 top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
-SUBDIRS	= brin common gin gist hash heap index nbtree rmgrdesc spgist transam
+SUBDIRS	= brin common gin gist hash heap index nbtree rmgrdesc spgist \
+			  transam sequence
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index c16b38e..35818c0 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -321,6 +321,7 @@ static bool need_initialization = true;
 static void initialize_reloptions(void);
 static void parse_one_reloption(relopt_value *option, char *text_str,
 	int text_len, bool validate);
+static bytea *common_am_reloptions(RegProcedure amoptions, Datum reloptions, bool validate);
 
 /*
  * initialize_reloptions
@@ -821,7 +822,8 @@ untransformRelOptions(Datum options)
  * instead.
  *
  * tupdesc is pg_class' tuple descriptor.  amoptions is the amoptions regproc
- * in the case of the tuple corresponding to an index, or InvalidOid otherwise.
+ * in the case of the tuple corresponding to an index or sequence, InvalidOid
+ * otherwise.
  */
 bytea *
 extractRelOptions(HeapTuple tuple, TupleDesc tupdesc, Oid amoptions)
@@ -854,6 +856,9 @@ extractRelOptions(HeapTuple tuple, TupleDesc tupdesc, Oid amoptions)
 		case RELKIND_INDEX:
 			options = index_reloptions(amoptions, datum, false);
 			break;
+		case RELKIND_SEQUENCE:
+			options = sequence_reloptions(amoptions, datum, false);
+			break;
 		case RELKIND_FOREIGN_TABLE:
 			options = NULL;
 			break;
@@ -1299,13 +1304,31 @@ heap_reloptions(char relkind, Datum reloptions, bool validate)
 
 /*
  * Parse options for indexes.
+ */
+bytea *
+index_reloptions(RegProcedure amoptions, Datum reloptions, bool validate)
+{
+	return common_am_reloptions(amoptions, reloptions, validate);
+}
+
+/*
+ * Parse options for sequences.
+ */
+bytea *
+sequence_reloptions(RegProcedure amoptions, Datum reloptions, bool validate)
+{
+	return common_am_reloptions(amoptions, reloptions, validate);
+}
+
+/*
+ * Parse options for indexes or sequences.
  *
  *	amoptions	Oid of option parser
  *	reloptions	options as text[] datum
  *	validate	error flag
  */
-bytea *
-index_reloptions(RegProcedure amoptions, Datum reloptions, bool validate)
+static bytea *
+common_am_reloptions(RegProcedure amoptions, Datum reloptions, bool validate)
 {
 	FmgrInfo	flinfo;
 	FunctionCallInfoData fcinfo;
diff --git a/src/backend/access/sequence/Makefile b/src/backend/access/sequence/Makefile
new file mode 100644
index 0000000..01a0dc8
--- /dev/null
+++ b/src/backend/access/sequence/Makefile
@@ -0,0 +1,17 @@
+#-------------------------------------------------------------------------
+#
+# Makefile--
+#    Makefile for access/sequence
+#
+# IDENTIFICATION
+#    src/backend/access/sequence/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/backend/access/sequence
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+
+OBJS = seqam.o seqlocal.o
+
+include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/access/sequence/seqam.c b/src/backend/access/sequence/seqam.c
new file mode 100644
index 0000000..ce57f16
--- /dev/null
+++ b/src/backend/access/sequence/seqam.c
@@ -0,0 +1,428 @@
+/*-------------------------------------------------------------------------
+ *
+ * seqam.c
+ *	  sequence access method routines
+ *
+ * Portions Copyright (c) 1996-2014, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/access/sequence/seqam.c
+ *
+ *-------------------------------------------------------------------------
+ * Sequence access method allows the SQL Standard Sequence objects to be
+ * managed according to either the default access method or a pluggable
+ * replacement. Each sequence can only use one access method at a time,
+ * though different sequence access methods can be in use by different
+ * sequences at the same time.
+ *
+ * The SQL Standard assumes that each Sequence object is completely controlled
+ * from the current database node, preventing any form 

Re: [HACKERS] Support UPDATE table SET(*)=...

2014-12-15 Thread Atri Sharma
On Mon, Dec 8, 2014 at 6:06 AM, Michael Paquier michael.paqu...@gmail.com
wrote:

 On Wed, Nov 26, 2014 at 4:26 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  I think what's likely missing here is a clear design for the raw parse
  tree representation (what's returned by the bison grammar).  The patch
  seems to be trying to skate by without creating any new parse node types
  or fields, but that may well be a bad idea.  At the very least there
  needs to be some commentary added to parsenodes.h explaining what the
  representation actually is (cf commentary there for MultiAssignRef).
 
  Also, I think it's a mistake not to be following the MultiAssignRef
  model for the case of (*) = ctext_row.  We optimize the ROW-source
  case at the grammar stage when there's a fixed number of target columns,
  because that's a very easy transformation --- but you can't do it like
  that when there's not.  It's possible that that optimization should be
  delayed till later even in the existing case; in general, optimizing
  in gram.y is a bad habit that's better avoided ...
 Marking as returned with feedback based on those comments.


Thank you everybody for your comments.

I am indeed trying to avoid a new node, since my intuitive feeling is that
we do not need a new node for functionality which is present in other
commands, albeit in a different form. However, I agree that documentation
explaining the parse representation is lacking, and I shall improve that.
Also, in accordance with Tom's comments, I shall look at not touching the
target list directly and instead follow the path of MultiAssignRef.

I have moved the patch to the current CF and marked it as Waiting on Author,
since I plan to resubmit after addressing the concerns.

Regards,

Atri


-- 
Regards,

Atri
*l'apprenant*


Re: [HACKERS] On partitioning

2014-12-15 Thread José Luis Tallón

On 12/15/2014 07:42 AM, Claudio Freire wrote:

[snip]


If you do that, you start with empty partitions, and each insert 
updates the BRIN tuple. Avoiding concurrency loss in this case would 
be tricky, but in theory this could allow very general partition 
exclusion. In fact it could even work with constraint exclusion right 
now: you'd have a single-tuple BRIN index for each partition and 
benefit from it. But you don't need to pay the price of updating BRIN 
indexes, as min-max tuples for each partition can be produced while 
creating the partitions if the syntax already provides the 
information. Then, it's just a matter of querying this meta-data which 
just happens to have the form of a BRIN tuple for each partition.


Yup. Indeed this is the way I outlined in my previous e-mail.

The only point being: Why bother with BRIN when we already have the 
range machinery, and it's trivial to add pointers to partitions from 
each range?


I suggested that BRIN would solve the situation where the number of 
partitions is huge (say, thousands) and we might need to be able to 
locate the appropriate partition efficiently. In this situation, a 
linear search might become prohibitive, or the data structure (a simple 
B-Tree, maybe) might become too big to be worth keeping in memory. This is 
where being able to store the partition index on disk would be 
interesting.


Moreover, I guess that, by using this approach 
(B-Tree[range]->partition_id and/or BRIN), we could efficiently answer 
the question "do we have any tuple with this key in some partition?", 
which AFAICS is pretty close to us having global indexes.




Regards,

/ J.L.



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] TODO : Allow parallel cores to be used by vacuumdb [ WIP ]

2014-12-15 Thread Dilip kumar
On December 2014 17:31, Amit Kapila wrote:


I suggest that rather than removing it, you edit the comment to indicate
the idea behind the code at that place.

Done

Okay, I think this part of the code is somewhat similar to what
is done in pg_dump/parallel.c, with some differences related
to the handling of inAbort.  One thing I have noticed here which
could lead to a problem is that the caller of the select_loop() function
assumes that the return value is less than zero only if there is a
cancel request, which I think is wrong, because the select system
call could also return -1 in case of error.  I am referring to the below
code in the above context:
+ if (i < 0)
+ {
+ /*
+  * This can only happen if user has sent the cancel request using
+  * Ctrl+C, Cancel is handled by 0th slot, so fetch the error result.
+  */
+
+ GetQueryResult(pSlot[0].connection, dbname, progname,
+                &completedb);

Now for the abort case I am using a special error code, and in any other
case we will assert; this behavior is the same as in pg_dump.
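
A minimal sketch of the distinction being discussed, loosely modeled on
pg_dump/parallel.c; the structure and names here are assumptions, not the
posted patch:

#include <errno.h>
#include <sys/select.h>

/*
 * Wait for any worker socket to become readable.  Returns the number of
 * ready sockets, or -1 on a genuine select() failure.  An EINTR (e.g. a
 * signal handler ran) just retries the wait; the caller must not treat
 * every negative return value as a cancel request.
 */
static int
select_loop(int maxFd, fd_set *workerset)
{
	fd_set		saveSet = *workerset;

	for (;;)
	{
		int			i;

		*workerset = saveSet;
		i = select(maxFd + 1, workerset, NULL, NULL, NULL);

		if (i < 0 && errno == EINTR)
			continue;			/* interrupted: just retry */
		return i;				/* ready count, or -1 on a real error */
	}
}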


Hmm, theoretically I think the new behaviour could lead to more I/O in
certain cases as compared to the existing behaviour.  The reason for more I/O
is that in the new behaviour, while doing Analyze for a particular table at
different targets, in-between it runs Analyze of different tables as well,
so the pages in shared buffers or the OS cache for a particular table need to
be reloaded again for a new target, whereas currently it will do all stages
of Analyze for a particular table in one go, which means that each stage
of Analyze could benefit from the pages of the table loaded by the previous
stage.  If you agree, then we should try to avoid this change in the new
behaviour.

I will work on this comment and post the updated patch..

I will move this patch to the latest commitfest.

Regards,
Dilip




vacuumdb_parallel_v20.patch
Description: vacuumdb_parallel_v20.patch

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Minor binary-search int overflow in timezone code

2014-12-15 Thread Christoph Berg
Hi,

a fellow Debian Developer found a minor glitch in
src/timezone/localtime.c, where binary search is used. Now I don't
think there is an actual problem (unless there are > 2^30 timezones),
but it would at least make sense to mark the code as okayish so that
people running code scanners won't stumble over the issue again.

The attached patch adds comments to address this.

Date: Sun, 30 Nov 2014 22:06:42 +0100
From: Niels Thykier ni...@thykier.net
Reply-To: Niels Thykier ni...@thykier.net, 771...@bugs.debian.org
To: Debian Bug Tracking System sub...@bugs.debian.org
Subject: [Pkg-postgresql-public] Bug#771580: postgresql-9.4: Minor binary-search
int overflow

Source: postgresql-9.4
Version: 9.4~rc1-1
Severity: minor


Hi,

I stumbled on the following snippet from src/timezone/localtime.c,
function pg_interpret_timezone_abbrev:

   {
       int lo = 0;
       int hi = sp->timecnt;

       while (lo < hi)
       {
           int mid = (lo + hi) >> 1;
                     ^^^^^^^^^^^^^^

This looks like it is subject to a known int overflow, when the (original)
hi is close to INT_MAX and the item being searched for is close to the end
of the array.
~Niels

[The original report had a link here to the Google Research blog, but
the PG list servers think it is spam :(]
diff --git a/src/timezone/localtime.c b/src/timezone/localtime.c
new file mode 100644
index 19a24e1..878e471
*** a/src/timezone/localtime.c
--- b/src/timezone/localtime.c
*** localsub(const pg_time_t *timep, long of
*** 1070,1076 ****
  
  		while (lo < hi)
  		{
! 			int			mid = (lo + hi) >> 1;
  
  			if (t < sp->ats[mid])
  				hi = mid;
--- 1070,1076 ----
  
  		while (lo < hi)
  		{
! 			int			mid = (lo + hi) >> 1; /* overflow unlikely */
  
  			if (t < sp->ats[mid])
  				hi = mid;
*** pg_next_dst_boundary(const pg_time_t *ti
*** 1423,1429 ****
  
  		while (lo < hi)
  		{
! 			int			mid = (lo + hi) >> 1;
  
  			if (t < sp->ats[mid])
  				hi = mid;
--- 1423,1429 ----
  
  		while (lo < hi)
  		{
! 			int			mid = (lo + hi) >> 1; /* overflow unlikely */
  
  			if (t < sp->ats[mid])
  				hi = mid;
*** pg_interpret_timezone_abbrev(const char
*** 1506,1512 ****
  
  		while (lo < hi)
  		{
! 			int			mid = (lo + hi) >> 1;
  
  			if (t < sp->ats[mid])
  				hi = mid;
--- 1506,1512 ----
  
  		while (lo < hi)
  		{
! 			int			mid = (lo + hi) >> 1; /* overflow unlikely */
  
  			if (t < sp->ats[mid])
  				hi = mid;
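
For reference, the conventional overflow-proof formulation is sketched below;
the attached patch deliberately keeps the existing expression and only
annotates it, and the types here are stand-ins for the pg_time_t array in
localtime.c:

/*
 * Sketch of the same lower-bound search with the textbook midpoint that
 * cannot overflow even when hi approaches INT_MAX.
 */
static int
lower_bound(const long *ats, int n, long t)
{
	int			lo = 0;
	int			hi = n;

	while (lo < hi)
	{
		int			mid = lo + (hi - lo) / 2;	/* no int overflow */

		if (t < ats[mid])
			hi = mid;
		else
			lo = mid + 1;
	}
	return lo;
}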

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] replicating DROP commands across servers

2014-12-15 Thread Alvaro Herrera
Michael Paquier wrote:

 This patch has had no activity for the last two months, is in Needs
 Review state and has marked as reviewer Dimitri. As there is no
 activity from the reviewer, I am moving that to CF 2014-12 and
 removing Dimitri as reviewer. If someone wants to have a look at this
 patch, feel free to do so. Dimitri, if you are still planning to look
 at it, please re-add your name.

FWIW I intend to commit the first patch more or less as-is, and then add
a SQL function accessor to get_object_address to the second part and
commit that one also.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] On partitioning

2014-12-15 Thread Claudio Freire
On Mon, Dec 15, 2014 at 8:09 AM, José Luis Tallón
jltal...@adv-solutions.net wrote:
 On 12/15/2014 07:42 AM, Claudio Freire wrote:

 [snip]


 If you do that, you start with empty partitions, and each insert updates
 the BRIN tuple. Avoiding concurrency loss in this case would be tricky, but
 in theory this could allow very general partition exclusion. In fact it
 could even work with constraint exclusion right now: you'd have a
 single-tuple BRIN index for each partition and benefit from it. But you
 don't need to pay the price of updating BRIN indexes, as min-max tuples for
 each partition can be produced while creating the partitions if the syntax
 already provides the information. Then, it's just a matter of querying this
 meta-data which just happens to have the form of a BRIN tuple for each
 partition.


 Yup. Indeed this is the way I outlined in my previous e-mail.

 The only point being: Why bother with BRIN when we already have the range
 machinery, and it's trivial to add pointers to partitions from each range?

The part of BRIN I find useful is not its on-disk structure, but all
the execution machinery that checks quals against BRIN tuples. It's
not a trivial part of the code, and it is especially useful since it's
generalizable. New BRIN operator classes can be created, and that's an
interesting power to have in partitioning as well.

Casting ranges into min-max BRIN tuples seems quite doable, so
both range and list notation should work fine. But BRIN also works for
the generic routing expression some people seem to really want, and
dynamically updated BRIN meta-indexes seem to be the only efficient
solution for that.

BRIN lacks some features, as you noted, so it does need some love
before it's usable for this. But they're features BRIN itself would
find useful so you take out two ducks in one shot.

 I suggested that BRIN would solve a situation when the amount of partitions
 is huge (say, thousands) and we might need to be able to efficiently locate
 the appropriate partition. In this situation, a linear search might become
 prohibitive, or the data structure (a simple B-Tree, maybe) become too big
 to be worth keeping in memory. This is where being able to store the
 partition index on disk would be interesting.

BRIN also does a linear search, so it doesn't solve that. BRIN's only
power is that it can answer very fast whether some quals rule out a
partition.
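
A toy sketch of that qual-vs-summary check, with invented types, to make the
idea concrete:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-partition summary: the analogue of a one-tuple BRIN index */
typedef struct PartSummary
{
	int32_t		min;			/* smallest key value in the partition */
	int32_t		max;			/* largest key value in the partition */
} PartSummary;

/*
 * Can a qual "key BETWEEN lo AND hi" possibly match this partition?
 * A false answer rules the whole partition out, BRIN-style.
 */
static bool
partition_might_match(const PartSummary *s, int32_t lo, int32_t hi)
{
	return hi >= s->min && lo <= s->max;
}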


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GiST kNN search queue (Re: KNN-GiST with recheck)

2014-12-15 Thread Heikki Linnakangas

On 12/15/2014 03:49 AM, Michael Paquier wrote:

On Thu, Dec 11, 2014 at 12:50 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:

On 01/28/2014 04:12 PM, Alexander Korotkov wrote:



3. A binary heap would be a better data structure to buffer the
rechecked
values. A Red-Black tree allows random insertions and deletions, but in
this case you need to insert arbitrary values but only remove the
minimum
item. That's exactly what a binary heap excels at. We have a nice binary
heap implementation in the backend that you can use, see
src/backend/lib/binaryheap.c.



Hmm. For me binary heap would be a better data structure for KNN-GiST at
all :-)



I decided to give this a shot, replacing the red-black tree in GiST with the
binary heap we have in lib/binaryheap.c. It made the GiST code somewhat
simpler, as the binaryheap interface is simpler than the red-black tree one.
Unfortunately, performance was somewhat worse. That was quite surprising, as
insertions and deletions are both O(log N) in both data structures, but the
red-black tree implementation is more complicated.

I implemented another data structure called a Pairing Heap. It's also a
fairly simple data structure, but insertions are O(1) instead of O(log N).
It also performs fairly well in practice.
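
To make it concrete why insertion is O(1), here is a self-contained
min-heap sketch of the pairing heap operations; this is illustrative only,
not the code in the attached patch:

#include <stdio.h>

/* A node embeds its heap links; keys here stand in for KNN distances. */
typedef struct PHNode
{
	double		key;
	struct PHNode *child;		/* leftmost child */
	struct PHNode *sibling;		/* next sibling */
} PHNode;

/* Meld two heaps: the root with the larger key becomes a child.  O(1). */
static PHNode *
ph_merge(PHNode *a, PHNode *b)
{
	if (a == NULL)
		return b;
	if (b == NULL)
		return a;
	if (b->key < a->key)
	{
		PHNode	   *t = a;

		a = b;
		b = t;
	}
	b->sibling = a->child;
	a->child = b;
	return a;
}

/* Insertion is just a meld with a singleton: O(1), vs O(log n) for an rb-tree. */
static PHNode *
ph_insert(PHNode *root, PHNode *node)
{
	node->child = node->sibling = NULL;
	return ph_merge(root, node);
}

/* Delete-min: the classic two-pass pairwise meld, amortized O(log n). */
static PHNode *
ph_delete_min(PHNode *root)
{
	PHNode	   *pairs = NULL;
	PHNode	   *cur = root->child;
	PHNode	   *result = NULL;

	while (cur)					/* pass 1: meld children pairwise */
	{
		PHNode	   *a = cur;
		PHNode	   *b = a->sibling;

		cur = b ? b->sibling : NULL;
		a->sibling = NULL;
		if (b)
			b->sibling = NULL;
		a = ph_merge(a, b);
		a->sibling = pairs;		/* push onto a stack of melded pairs */
		pairs = a;
	}
	while (pairs)				/* pass 2: meld the pairs in stack order */
	{
		PHNode	   *next = pairs->sibling;

		pairs->sibling = NULL;
		result = ph_merge(result, pairs);
		pairs = next;
	}
	return result;
}

int
main(void)
{
	PHNode		nodes[5] = {{3.0}, {1.0}, {4.0}, {1.5}, {0.5}};
	PHNode	   *root = NULL;
	int			i;

	for (i = 0; i < 5; i++)
		root = ph_insert(root, &nodes[i]);
	while (root)				/* prints the keys in ascending order */
	{
		printf("%g\n", root->key);
		root = ph_delete_min(root);
	}
	return 0;
}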

With that, I got a small but measurable improvement. To test, I created a
table like this:

create table gisttest (id integer, p point);
insert into gisttest select id, point(random(), random()) from
generate_series(1, 100) id;
create index i_gisttest on gisttest using gist (p);

And I ran this query with pgbench:

select id from gisttest order by p <-> '(0,0)' limit 1000;

With unpatched master, I got about 650 TPS, and with the patch 720 TPS.
That's a nice little improvement, but perhaps more importantly, the pairing
heap implementation consumes less memory. To measure that, I put a
MemoryContextStats(so->queueCtx) call into gistendscan. With the above
query, but without the limit clause, on master I got:

GiST scan context: 2109752 total in 10 blocks; 2088456 free (24998 chunks);
21296 used

And with the patch:

GiST scan context: 1061160 total in 9 blocks; 1040088 free (12502 chunks);
21072 used

That's 2MB vs 1MB. While that's not much in absolute terms, it'd be nice to
reduce that memory consumption, as there is no hard upper bound on how much
might be needed. If the GiST tree is really disorganized for some reason, a
query might need a lot more.


So all in all, I quite like this patch, even though it doesn't do anything
too phenomenal. It adds a some code, in the form of the new pairing heap
implementation, but it makes the GiST code a little bit simpler. And it
gives a small performance gain, and reduces memory usage a bit.

Hum. It looks like this patch using a binary heap is intended to be a
replacement for the red-black tree method.


Right.

Here's a new version of the patch. It now uses the same pairing heap 
code that I posted in the other thread ("advance local xmin more 
aggressively", 
http://www.postgresql.org/message-id/5488acf0.8050...@vmware.com). The 
pairingheap_remove() function is unused in this patch, but it is needed 
by that other patch.



Any reason why it isn't added to the CF to track it?


No. Will add.

- Heikki

diff --git a/src/backend/access/gist/gistget.c b/src/backend/access/gist/gistget.c
index 7a8692b..e5eb6f6 100644
--- a/src/backend/access/gist/gistget.c
+++ b/src/backend/access/gist/gistget.c
@@ -18,6 +18,7 @@
 #include "access/relscan.h"
 #include "miscadmin.h"
 #include "pgstat.h"
+#include "lib/pairingheap.h"
 #include "utils/builtins.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
@@ -243,8 +244,6 @@ gistScanPage(IndexScanDesc scan, GISTSearchItem *pageItem, double *myDistances,
 	GISTPageOpaque opaque;
 	OffsetNumber maxoff;
 	OffsetNumber i;
-	GISTSearchTreeItem *tmpItem = so->tmpTreeItem;
-	bool		isNew;
 	MemoryContext oldcxt;
 
 	Assert(!GISTSearchItemIsHeap(*pageItem));
@@ -275,18 +274,15 @@ gistScanPage(IndexScanDesc scan, GISTSearchItem *pageItem, double *myDistances,
 		oldcxt = MemoryContextSwitchTo(so->queueCxt);
 
 		/* Create new GISTSearchItem for the right sibling index page */
-		item = palloc(sizeof(GISTSearchItem));
-		item->next = NULL;
+		item = palloc(SizeOfGISTSearchItem(scan->numberOfOrderBys));
 		item->blkno = opaque->rightlink;
 		item->data.parentlsn = pageItem->data.parentlsn;
 
 		/* Insert it into the queue using same distances as for this page */
-		tmpItem->head = item;
-		tmpItem->lastHeap = NULL;
-		memcpy(tmpItem->distances, myDistances,
+		memcpy(item->distances, myDistances,
 			   sizeof(double) * scan->numberOfOrderBys);
 
-		(void) rb_insert(so->queue, (RBNode *) tmpItem, &isNew);
+		pairingheap_add(so->queue, &item->phNode);
 
 		MemoryContextSwitchTo(oldcxt);
 	}
@@ -348,8 +344,7 @@ gistScanPage(IndexScanDesc scan, GISTSearchItem *pageItem, double *myDistances,
 			oldcxt = MemoryContextSwitchTo(so->queueCxt);
 
 			/* Create new GISTSearchItem for this item */
-			item = 

Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Heikki Linnakangas

On 12/15/2014 08:37 AM, Michael Paquier wrote:

On Mon, Dec 15, 2014 at 3:09 PM, Tom Lane t...@sss.pgh.pa.us wrote:

Michael Paquier michael.paqu...@gmail.com writes:

- Point to polygon distance operator

I looked at that briefly during the last fest, but was unsure whether it
was too entangled with the GiST patches that Heikki was looking at.

Recalling my memories of this morning, things are rather independent.


Right. I also looked at it briefly, but I wasn't sure if we really want 
it. AFAICT, no-one has actually asked for that operator, it was written 
only to be an example of an operator that would benefit from the 
knn-gist with recheck patch. If there is some other, real, use for the 
knn-gist with recheck patch, then I'm OK with that, but otherwise it's 
dubious to add an operator just so that it can then be made faster by 
another patch. That said, it seems quite harmless, so might as well 
commit it.


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GiST kNN search queue (Re: KNN-GiST with recheck)

2014-12-15 Thread Andres Freund
On 2014-12-15 15:08:28 +0200, Heikki Linnakangas wrote:
 +/*-------------------------------------------------------------------------
 + *
 + * pairingheap.c
 + * A Pairing Heap implementation
 + *
 + * Portions Copyright (c) 2012-2014, PostgreSQL Global Development Group
 + *
 + * IDENTIFICATION
 + * src/backend/lib/pairingheap.c
 + *
 + *-------------------------------------------------------------------------
 + */

 diff --git a/src/include/lib/pairingheap.h b/src/include/lib/pairingheap.h
 new file mode 100644
 index 0000000..e78196d
 --- /dev/null
 +++ b/src/include/lib/pairingheap.h
 @@ -0,0 +1,67 @@
 +/*
 + * pairingheap.h
 + *
 + * A Pairing Heap implementation
 + *
 + * Portions Copyright (c) 2012-2014, PostgreSQL Global Development Group
 + *
 + * src/include/lib/pairingheap.h
 + */
 +

If we add another heap implementation we probably should at least hint
at the different advantages somewhere.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] speedup tidbitmap patch: hash BlockNumber

2014-12-15 Thread Heikki Linnakangas

On 10/22/2014 04:14 PM, Teodor Sigaev wrote:

Just replacing tag_hash in tidbitmap, which uses hash_any, with a direct call
of hash_uint32 saves ~5% of execution time.

An example:
# create extension btree_gin;
# select (v / 10)::int4 as i into t from generate_series(1, 500) as v;
# create index idx on t using gin (i);
# set enable_seqscan = off;

# explain analyze select * from t where i = 0;
without patch: Execution time: 2427.692 ms
with patch:Execution time: 2319.376 ms

# explain analyze select * from t where i >= 100 and i <= 100;
without patch: Execution time: 524.441 ms
with patch:Execution time: 504.478 ms


Nice little speedup, for such a simple patch. On my laptop, perf 
confirms that with the patch, about 5% of the CPU time is spent in 
tag_blocknumber, and without it, about 8% is spent in tag_hash. That 
gives a 3% speed up, which is in the same ballpark that you saw.


I'd suggest putting the new function in hashfn.c, where we already have 
similar functions, string_hash, tag_hash, oid_hash and bitmap_hash. And 
name it blocknumber_hash, for consistency.
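
A hedged sketch of what such a function could look like, following the
signature of tag_hash; this is an illustration, not the posted patch:

#include "postgres.h"
#include "access/hash.h"
#include "storage/block.h"

/*
 * blocknumber_hash: hash a BlockNumber key by calling hash_uint32 directly,
 * skipping the generic hash_any() path that tag_hash goes through.
 */
uint32
blocknumber_hash(const void *key, Size keysize)
{
	Assert(keysize == sizeof(BlockNumber));
	return DatumGetUInt32(hash_uint32(*((const BlockNumber *) key)));
}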


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Alexander Korotkov
On Mon, Dec 15, 2014 at 4:12 PM, Heikki Linnakangas hlinnakan...@vmware.com
 wrote:

 On 12/15/2014 08:37 AM, Michael Paquier wrote:

 On Mon, Dec 15, 2014 at 3:09 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Michael Paquier michael.paqu...@gmail.com writes:

 - Point to polygon distance operator

 I looked at that briefly during the last fest, but was unsure whether it
 was too entangled with the GiST patches that Heikki was looking at.

 Recalling my memories of this morning, things are rather independent.


 Right. I also looked at it briefly, but I wasn't sure if we really want
 it. AFAICT, no-one has actually asked for that operator, it was written
 only to be an example of an operator that would benefit from the knn-gist
 with recheck patch. If there is some other, real, use for the knn-gist with
 recheck patch, then I'm OK with that, but otherwise it's dubious to add an
 operator just so that it can then be made faster by another patch. That
 said, it seems quite harmless, so might as well commit it.


Lack of recheck is a major limitation of KNN-GiST now. People are not asking
for it because they don't know it is what's needed to implement exact KNN for
PostGIS. Now they have to invent kluges like this:

WITH closest_candidates AS (
  SELECT
streets.gid,
streets.name,
streets.geom
  FROM
nyc_streets streets
  ORDER BY
streets.geom <->
'SRID=26918;POINT(583571.905921312 4506714.34119218)'::geometry
  LIMIT 100
)
SELECT gid, name
FROM closest_candidates
ORDER BY
  ST_Distance(
geom,
'SRID=26918;POINT(583571.905921312 4506714.34119218)'::geometry
)
LIMIT 1;

See blog posts:
http://blog.light42.com/wordpress/?p=102
http://workshops.boundlessgeo.com/postgis-intro/knn.html

--
With best regards,
Alexander Korotkov.


Re: [HACKERS] Fractions in GUC variables

2014-12-15 Thread Heikki Linnakangas

On 12/07/2014 09:48 PM, John Gorman wrote:

This patch implements the first wiki/Todo Configuration Files item
Consider normalizing fractions in postgresql.conf, perhaps using '%'.

The Fractions in GUC variables discussion is here.

http://www.postgresql.org/message-id/467132cf.9020...@enterprisedb.com


Ah, flash from the past :-). Thanks for looking into this.

The bgwriter_lru_percent and bgwriter_all_percent settings are gone now, 
so the original reason to do this is gone. We consistently use fractions 
in the GUCs now.



This patch implements expressing GUC variables as percents in
postgresql.conf.

autovacuum_vacuum_scale_factor = 20%   # percent of table size before vacuum
autovacuum_analyze_scale_factor = 10%  # percent of table size before analyze

As you can see, the postgresql.conf file and the documentation read more
naturally.  I added a regression test to guc.sql.  The SQL interface also
accepts both numeric and percent forms, although the percent form must be
quoted because '%' is an operator.

show cursor_tuple_fraction; -- 10%
set cursor_tuple_fraction = .15; -- 15%
set cursor_tuple_fraction = '33%'; -- 33%

I tagged four configuration variables to display as percents.
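
A minimal sketch of the accepted input forms, using a hypothetical helper
rather than the patch's actual GUC hooks:

#include <stdbool.h>
#include <stdlib.h>

/*
 * Parse a real-valued setting given either as a plain number ("0.15") or
 * as a percentage ("15%"), normalizing the latter to a fraction.
 * Returns false on trailing junk or if no number is present.
 */
static bool
parse_real_or_percent(const char *value, double *result)
{
	char	   *end;
	double		val = strtod(value, &end);

	if (end == value)
		return false;			/* no number at all */
	if (end[0] == '%' && end[1] == '\0')
	{
		*result = val / 100.0;	/* "15%" becomes 0.15 */
		return true;
	}
	if (end[0] == '\0')
	{
		*result = val;
		return true;
	}
	return false;
}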


I'm not sure I agree that percentages are better than fractions. I'd 
vote a -0.5 for changing the default display format. There isn't much 
harm in accepting them as a secondary format, though.


We should only accept percentages for settings where that makes sense. 
It is sensible for most of the PGC_REAL settings, but not so much for 
geqo_selection_bias or seed, for example.


Overall, I feel that this isn't really worth the trouble. We use 
fractions consistently now, so there isn't much room for confusion over 
what the current values mean. Using a percentage might be more familiar 
for some people, but OTOH you'll have to get used to the fractions 
anyway, unless we change the default output format too, and I'm not in 
favour of doing that. I suggest that we just drop this, and remove the 
TODO item.


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Heikki Linnakangas

On 12/15/2014 03:45 PM, Alexander Korotkov wrote:

On Mon, Dec 15, 2014 at 4:12 PM, Heikki Linnakangas hlinnakan...@vmware.com

wrote:

On 12/15/2014 08:37 AM, Michael Paquier wrote:


On Mon, Dec 15, 2014 at 3:09 PM, Tom Lane t...@sss.pgh.pa.us wrote:


Michael Paquier michael.paqu...@gmail.com writes:


- Point to polygon distance operator


I looked at that briefly during the last fest, but was unsure whether it
was too entangled with the GiST patches that Heikki was looking at.


Recalling my memories of this morning, things are rather independent.


Right. I also looked at it briefly, but I wasn't sure if we really want
it. AFAICT, no-one has actually asked for that operator, it was written
only to be an example of an operator that would benefit from the knn-gist
with recheck patch. If there is some other, real, use for the knn-gist with
recheck patch, then I'm OK with that, but otherwise it's dubious to add an
operator just so that it can then be made faster by another patch. That
said, it seems quite harmless, so might as well commit it.


Lack of recheck is a major limitation of KNN-GiST now. People are not asking
for it because they don't know it is what's needed to implement exact KNN for
PostGIS. Now they have to invent kluges like this:

WITH closest_candidates AS (
   SELECT
 streets.gid,
 streets.name,
 streets.geom
   FROM
 nyc_streets streets
   ORDER BY
 streets.geom <->
 'SRID=26918;POINT(583571.905921312 4506714.34119218)'::geometry
   LIMIT 100
)
SELECT gid, name
FROM closest_candidates
ORDER BY
   ST_Distance(
 geom,
 'SRID=26918;POINT(583571.905921312 4506714.34119218)'::geometry
 )
LIMIT 1;

See blog posts:
http://blog.light42.com/wordpress/?p=102
http://workshops.boundlessgeo.com/postgis-intro/knn.html


Ugh. Ok, sold :-). I'll review...

- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] add ssl_protocols configuration option

2014-12-15 Thread Alex Shulgin
Michael Paquier michael.paqu...@gmail.com writes:

 Perhaps ssloptions.[ch], unless you plan to add non-option-related code
 there later?

 I don't think anything else than common options-related code fits in
 there, so ssloptions.c makes sense to me.

 BTW, there is no Regent code in your openssl.c, so the copyright
 statement is incorrect.

 Good catch, I just blindly copied that from some other file.
 There have been arguments in favor and against this patch... But
 marking it as returned with feedback because of a lack of activity in
 the last couple of weeks. Feel free to update if you think that's not
 correct, and please move this patch to commit fest 2014-12.

Attached is a new version that addresses the earlier feedback: renamed
the added *.[ch] files and removed the incorrect copyright line.

I'm moving this to the current CF.

--
Alex

From 18388300f9ed34fa47c66b8a2da098aeb19a7f71 Mon Sep 17 00:00:00 2001
From: Alex Shulgin a...@commandprompt.com
Date: Wed, 19 Nov 2014 19:49:29 +0300
Subject: [PATCH] DRAFT: ssl_protocols GUC

---
 doc/src/sgml/config.sgml  |  29 ++
 doc/src/sgml/libpq.sgml   |  25 ++
 src/backend/libpq/Makefile|   2 +-
 src/backend/libpq/be-secure-openssl.c |  29 --
 src/backend/libpq/be-secure.c |   3 +-
 src/backend/libpq/ssloptions.c| 123 ++
 src/backend/utils/misc/guc.c  |  15 
 src/backend/utils/misc/postgresql.conf.sample |   1 +
 src/include/libpq/libpq.h |   1 +
 src/include/libpq/ssloptions.h|  48 ++
 src/interfaces/libpq/Makefile |   8 +-
 src/interfaces/libpq/fe-connect.c |   4 +
 src/interfaces/libpq/fe-secure-openssl.c  |  18 +++-
 src/interfaces/libpq/libpq-int.h  |   1 +
 14 files changed, 297 insertions(+), 10 deletions(-)
 create mode 100644 src/backend/libpq/ssloptions.c
 create mode 100644 src/include/libpq/ssloptions.h

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
new file mode 100644
index 48ae3e4..699f0f9
*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
*** include_dir 'conf.d'
*** 1055,1060 ****
--- 1055,1089 ----
     </listitem>
    </varlistentry>
  
+  <varlistentry id="guc-ssl-protocols" xreflabel="ssl_protocols">
+   <term><varname>ssl_protocols</varname> (<type>string</type>)
+   <indexterm>
+    <primary><varname>ssl_protocols</> configuration parameter</primary>
+   </indexterm>
+   </term>
+   <listitem>
+    <para>
+     Specifies a colon-separated list of <acronym>SSL</> protocols that are
+     allowed to be used on secure connections. Each entry in the list can
+     be prefixed by a <literal>+</> (add to the current list)
+     or <literal>-</> (remove from the current list). If no prefix is used,
+     the list is replaced with the specified protocol.
+    </para>
+    <para>
+     The full list of supported protocols can be found in
+     the <application>openssl</> manual page.  In addition to the names of
+     individual protocols, the following keywords can be
+     used: <literal>ALL</> (all protocols supported by the underlying
+     crypto library), <literal>SSL</> (all supported versions
+     of <acronym>SSL</>) and <literal>TLS</> (all supported versions
+     of <acronym>TLS</>).
+    </para>
+    <para>
+     The default is <literal>ALL:-SSL</literal>.
+    </para>
+   </listitem>
+  </varlistentry>
+ 
   <varlistentry id="guc-ssl-ciphers" xreflabel="ssl_ciphers">
    <term><varname>ssl_ciphers</varname> (<type>string</type>)
    <indexterm>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
new file mode 100644
index d829a4b..62ee0b4
*** a/doc/src/sgml/libpq.sgml
--- b/doc/src/sgml/libpq.sgml
*** postgresql://%2Fvar%2Flib%2Fpostgresql/d
*** 1230,1235 ****
--- 1230,1250 ----
     </listitem>
    </varlistentry>
  
+  <varlistentry id="libpq-connect-sslprotocols" xreflabel="sslprotocols">
+   <term><literal>sslprotocols</literal></term>
+   <listitem>
+    <para>
+     Specifies a colon-separated list of <acronym>SSL</> protocols that are
+     allowed to be used on secure connections.
+     See <xref linkend="guc-ssl-protocols"> server configuration parameter
+     for format.
+    </para>
+    <para>
+     Defaults to <literal>ALL:-SSL</>.
+    </para>
+   </listitem>
+  </varlistentry>
+ 
   <varlistentry id="libpq-connect-sslcompression" xreflabel="sslcompression">
    <term><literal>sslcompression</literal></term>
    <listitem>
*** myEventProc(PGEventId evtId, void *evtIn
*** 6693,6698 ****
--- 6708,6723 ----
    </para>
   </listitem>
  
+ <listitem>
+  <para>
+   <indexterm>
+    <primary><envar>PGSSLPROTOCOLS</envar></primary>
+   </indexterm>
+   <envar>PGSSLPROTOCOLS</envar> behaves the same as the <xref
+   linkend="libpq-connect-sslprotocols"> connection 

Re: [HACKERS] Custom timestamp format in logs

2014-12-15 Thread Heikki Linnakangas

On 12/14/2014 06:36 PM, Magnus Hagander wrote:

A separate GUC seems kind of weird. Wouldn't it be better with something
like %(format)t or such in the log_line_prefix itself in that case? That
could also be expanded to other parameters, should we need them?


%t isn't the only thing that prints timestamps in the logs, e.g:

LOG:  database system was shut down at 2014-12-15 15:30:15 EET

- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Minor binary-search int overflow in timezone code

2014-12-15 Thread Tom Lane
Christoph Berg c...@df7cb.de writes:
 a fellow Debian Developer found a minor glitch in
 src/timezone/localtime.c, where binary search is used. Now I don't
 think there is an actual problem (unless there's > 2^30 timezones),
 but it would at least make sense to mark the code as okayish so that
 people running code scanners won't stumble over the issue again.

 The attached patch added comments to address this.

This is totally silly.  The timecnt couldn't be anywhere near INT_MAX (in
fact, it is not allowed to exceed TZ_MAX_TIMES, which is currently just
1200).  And there are bunches of other instances of similar code in PG;
shall we put equally wishy-washy comments on them all?
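
A generic C sketch (not the localtime.c code itself) of the pattern such scanners flag, and its standard overflow-proof rewrite:

static int
binary_search_midpoint(int lo, int hi)
{
	/*
	 * (lo + hi) / 2 can overflow int when lo + hi exceeds INT_MAX; the
	 * form below cannot, which is the idiom code scanners expect.  As
	 * noted above, timecnt is capped at TZ_MAX_TIMES (1200), so the
	 * overflow is unreachable in localtime.c.
	 */
	return lo + (hi - lo) / 2;
}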

regards, tom lane




Re: [HACKERS] speedup tidbitmap patch: hash BlockNumber

2014-12-15 Thread Tom Lane
Heikki Linnakangas hlinnakan...@vmware.com writes:
 On 10/22/2014 04:14 PM, Teodor Sigaev wrote:
 Just replacing tag_hash in tidbitmap, which uses hash_any, with a direct
 call of hash_uint32 saves ~5% of execution time.

 I'd suggest putting the new function in hashfn.c, where we already have 
 similar functions, string_hash, tag_hash, oid_hash and bitmap_hash. And 
 name it blocknumber_hash, for consistency.

I think this suggestion is misguided, and the patch itself needs
rethinking.  Instead of doing this, let's hack dynahash.c itself
to substitute a shim like this when it's told function == tag_hash and
keysize == sizeof(uint32).  Then we can remove any similar shims that
already exist, and possibly end up with a net savings of code rather than
adding more.
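
A sketch of the kind of shim dynahash could substitute internally (hypothetical code, not from any committed patch):

#include "postgres.h"
#include "access/hash.h"
#include "utils/hsearch.h"

/*
 * Installed by hash_create() in place of tag_hash when it sees
 * hash == tag_hash && keysize == sizeof(uint32), skipping the
 * general-purpose hash_any() path for 4-byte keys.
 */
static uint32
uint32_hash_shim(const void *key, Size keysize)
{
	Assert(keysize == sizeof(uint32));
	return DatumGetUInt32(hash_uint32(*((const uint32 *) key)));
}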

regards, tom lane




[HACKERS] Something is broken in logical decoding with CLOBBER_CACHE_ALWAYS

2014-12-15 Thread Tom Lane
The CLOBBER_CACHE_ALWAYS buildfarm members occasionally fail in
contrib/test_decoding with

TRAP: FailedAssertion(!(((bool)((relation)->rd_refcnt == 0))), File: "relcache.c", Line: 1981)

for example here:
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=markhor&dt=2014-12-14%2005%3A50%3A09
and here:
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=markhor&dt=2014-12-13%2021%3A26%3A05

regards, tom lane




Re: [HACKERS] KNN-GiST with recheck

2014-12-15 Thread Heikki Linnakangas

On 08/03/2014 04:48 PM, Emre Hasegeli wrote:

1. This patch introduces a new polygon <-> point operator. That seems
useful on its own, with or without this patch.


Yeah, but exact-knn can't come without at least one implementation. But it
would be better for it to come in a separate patch.


I tried to split them.  Separated patches are attached.  I changed
the order of the arguments as point <-> polygon, because point was
the first one on all the others.  Its commutator was required for
the index, so I added it on the second patch.  I also added tests
for the operator.  I think it is ready for committer as a separate
patch.  We can add it to the open CommitFest.


Ok, committed this part now with minor changes. The implementation was 
copy-pasted from circle <-> polygon, so I put the common logic into a
dist_ppoly_internal function, and called that in both dist_cpoly and 
dist_ppoly.


I was surprised that there were no documentation changes in the patch, 
but looking at the docs, we just list the geometric operators without 
explaining what the argument types are. That's not very comprehensive;
it might be good to expand the docs on that, but it's not this patch's fault.


- Heikki





Re: [HACKERS] Fractions in GUC variables

2014-12-15 Thread Peter Eisentraut
On 12/15/14 8:56 AM, Heikki Linnakangas wrote:
 show cursor_tuple_fraction; -- 10%
 set cursor_tuple_fraction = .15; -- 15%
 set cursor_tuple_fraction = '33%'; -- 33%

 I tagged four configuration variables to display as percents.
 
 I'm not sure I agree that percentages are better than fractions. I'd
 vote a -0.5 for changing the default display format. There isn't much
 harm in accepting them as a secondary format, though.

Agreed with not changing the default output.  Everyone is used to it,
and there is no reason why the percent output would be intrinsically better.

 We should only accept percentages for settings where that makes sense.
 It is sensible for most of the PGC_REAL settings, but not so much for
 geqo_selection_bias or seed, for example.

Right.  But then this feature would get more complicated, as opposed to
supposedly making things simpler.

 Overall, I feel that this isn't really worth the trouble. We use
 fractions consistently now, so there isn't much room for confusion over
 what the current values mean. Using a percentage might be more familiar
 for some people, but OTOH you'll have to get used to the fractions
 anyway, unless we change the default output format too, and I'm not in
 favour of doing that. I suggest that we just drop this, and remove the
 TODO item.

Agreed.

The patch is sound as far as it goes (I might be inclined to accept
whitespace between number and % sign), but given the above points and
the original reason for it having been eliminated, I'm inclined to drop it.




Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Tom Lane
Alexander Korotkov aekorot...@gmail.com writes:
 On Mon, Dec 15, 2014 at 4:12 PM, Heikki Linnakangas hlinnakan...@vmware.com
 wrote:
 Right. I also looked at it briefly, but I wasn't sure if we really want
 it. AFAICT, no-one has actually asked for that operator, it was written
 only to be an example of an operator that would benefit from the knn-gist
 with recheck patch.

 Lack of recheck is major limitation of KNN-GiST now. People are not asking
 for that because they don't know what is needed to implement exact KNN for
 PostGIS. Now they have to invent kluges like this:
 [ query using ORDER BY ST_Distance ]

It's not apparent to me that the proposed operator is a replacement for
ST_Distance.  The underlying data in an example like this won't be either
points or polygons, it'll be PostGIS datatypes.

In short, I believe that PostGIS could use what you're talking about,
but I agree with Heikki's objection that nobody has asked for this
particular operator.

regards, tom lane




Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Alexander Korotkov
On Mon, Dec 15, 2014 at 6:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Alexander Korotkov aekorot...@gmail.com writes:
  On Mon, Dec 15, 2014 at 4:12 PM, Heikki Linnakangas 
 hlinnakan...@vmware.com
  wrote:
  Right. I also looked at it briefly, but I wasn't sure if we really want
  it. AFAICT, no-one has actually asked for that operator, it was written
  only to be an example of an operator that would benefit from the
 knn-gist
  with recheck patch.

  Lack of recheck is major limitation of KNN-GiST now. People are not
 asking
  for that because they don't know what is needed to implement exact KNN
 for
  PostGIS. Now they have to invent kluges like this:
  [ query using ORDER BY ST_Distance ]

 It's not apparent to me that the proposed operator is a replacement for
 ST_Distance.  The underlying data in an example like this won't be either
 points or polygons, it'll be PostGIS datatypes.

 In short, I believe that PostGIS could use what you're talking about,
 but I agree with Heikki's objection that nobody has asked for this
 particular operator.


polygon <-> point is for sure not an ST_Distance replacement. I was giving
this argument about KNN-GiST with recheck itself. polygon <-> point is
needed just as an in-core example of KNN-GiST with recheck.

--
With best regards,
Alexander Korotkov.


Re: [HACKERS] Join push-down support for foreign tables

2014-12-15 Thread Tom Lane
Shigeru Hanada shigeru.han...@gmail.com writes:
 I'm working on $SUBJECT and would like to get comments about the
 design.  Attached patch is for the design below.  Note that the patch
 requires Kaigai-san's custom foreign join patch [1]

For the record, I'm not particularly on-board with custom scan, and
even less so with custom join.  I don't want FDW features like this
depending on those kluges, especially not when you're still in need
of core-code changes (which really points up the inadequacy of those
concepts).

Also, please don't redefine struct NestPath like that.  That adds a
whole bunch of notational churn (and backpatch risk) for no value
that I can see.  It might've been better to do it like that in a
green field, but you're about twenty years too late to do it now.

regards, tom lane




Re: [HACKERS] Something is broken in logical decoding with CLOBBER_CACHE_ALWAYS

2014-12-15 Thread Andres Freund
Hi,

On 2014-12-15 10:15:30 -0500, Tom Lane wrote:
 The CLOBBER_CACHE_ALWAYS buildfarm members occasionally fail in
 contrib/test_decoding with

 TRAP: FailedAssertion(!(((bool)((relation)->rd_refcnt == 0))), File: "relcache.c", Line: 1981)

 for example here:
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=markhor&dt=2014-12-14%2005%3A50%3A09
 and here:
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=markhor&dt=2014-12-13%2021%3A26%3A05

Yes, I've been trying to debug that. I've finally managed to reproduce it
locally. Unfortunately it's not directly reproducible on my laptop...

A fair amount of tinkering later I've found out that it's not actually
CLOBBER_CACHE_ALWAYS itself that triggers the problem but catchup
interrupts. It triggers with CLOBBER_CACHE_ALWAYS because that happens
to make parts of the code slow enough to not reach an ordinary
AcceptInvalidationMessages() in time.

The problem is that in the added regression test there can be a
situation where there's a relcache entry that's *not* currently visible
but still referenced. Consider:

SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 
'test_decoding');
CREATE TABLE somechange(id serial primary key);
INSERT INTO changeresult
SELECT data FROM pg_logical_slot_get_changes(...);

While get_changes stores its data into a tuplestore there can be a moment
where 'changeresult' has a refcnt > 0 due to the INSERT, but is
invisible because we're using a historic snapshot from a time when
changeresult didn't exist yet.

Without catchup invalidations and/or an outside reference to a relation
that's normally not a problem because it won't get reloaded from the
caches at an inappropriate time while invisible. Until a few weeks ago
there was no regression test covering that case which is why these
crashes are only there now.

It triggers via:
RelationCacheInvalidate() ->
  RelationClearRelation(relation, true) ->
/* Build temporary entry, but don't link it into hashtable */
newrel = RelationBuildDesc(save_relid, false);
if (newrel == NULL)
{
/* Should only get here if relation was deleted */
RelationCacheDelete(relation);
RelationDestroyRelation(relation, false);
elog(ERROR, "relation %u deleted while still in use", save_relid);
}

That path is only hit while refcnt > 0

RelationDestroyRelation() has
Assert(RelationHasReferenceCountZero(relation));

So that path doesn't really work... Without assertions we'd just get a
transient ERROR. I think that path should generally lose the
RelationDestroyRelation() call - if it's referenced that's surely not the
right thing to do.  I'm not actually convinced logical decoding is the
only way that assert can be reached.

Since logical decoding happens inside a subtransaction (when called via
SQL) and invalidate caches one relatively straightforward way to fix
this would be to add something roughly akin to:
 Assert(!relation->rd_isvalid)
 /*
  * This should only happen when we're doing logical decoding where
  * it can happen when [above].
  */
 if (HistoricSnapshotActive())
 return;

There's no chance that such an entry could survive too long in an invalid
state (minus preexisting bugs independent of decoding) since we hit the
same paths that subtransaction abort hits.
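
Spelled out in context, the guard would sit in the RelationClearRelation() path quoted above. This is a sketch only: HistoricSnapshotActive() is the existing snapmgr test, everything else mirrors the current code.

	/* Build temporary entry, but don't link it into hashtable */
	newrel = RelationBuildDesc(save_relid, false);
	if (newrel == NULL)
	{
		/*
		 * Proposed escape hatch: during logical decoding a still-
		 * referenced relation can be invisible under the historic
		 * snapshot, so leave the (invalid) entry alone; subxact
		 * abort will clean it up.
		 */
		if (HistoricSnapshotActive())
		{
			Assert(!relation->rd_isvalid);
			return;
		}

		/* Should only get here if relation was deleted */
		RelationCacheDelete(relation);
		RelationDestroyRelation(relation, false);
		elog(ERROR, "relation %u deleted while still in use", save_relid);
	}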

Greetings,

Andres Freund

--
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Fractions in GUC variables

2014-12-15 Thread Bruce Momjian
On Mon, Dec 15, 2014 at 10:19:19AM -0500, Peter Eisentraut wrote:
  Overall, I feel that this isn't really worth the trouble. We use
  fractions consistently now, so there isn't much room for confusion over
  what the current values mean. Using a percentage might be more familiar
  for some people, but OTOH you'll have to get used to the fractions
  anyway, unless we change the default output format too, and I'm not in
  favour of doing that. I suggest that we just drop this, and remove the
  TODO item.
 
 Agreed.
 
 The patch is sound as far as it goes (I might be inclined to accept
 whitespace between number and % sign), but given the above points and
 the original reason for it having been eliminated, I'm inclined to drop it.

TODO item removed.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Making BackgroundWorkerHandle a complete type or offering a worker enumeration API?

2014-12-15 Thread Robert Haas
On Sat, Dec 13, 2014 at 4:13 AM, Craig Ringer cr...@2ndquadrant.com wrote:
 While working on BDR, I've run into a situation that I think highlights
 a limitation of the dynamic bgworker API that should be fixed.
 Specifically, when the postmaster crashes and restarts shared memory
 gets cleared but dynamic bgworkers don't get unregistered, and that's a
 mess.

I've noticed this as well.  What I was thinking of proposing is that
we change things so that a BGW_NEVER_RESTART worker is unregistered
when a crash-and-restart cycle happens, but workers with any other
restart time are retained. What's happened to me a few times is that
the database crashes after registering BGW_NEVER_RESTART workers but
before the postmaster launches them; the postmaster fires them up
after completing the crash-and-restart cycle, but by then the dynamic
shared memory segments they are supposed to map are gone, so they just
start up uselessly and then die.

 The latest BDR extension has a single static bgworker registered at
 shared_preload_libraries time. This worker launches one dynamic bgworker
 per database. Those dynamic bgworkers are in turn responsible for
 launching workers for each connection to another node in the mesh
 topology (and for various other tasks). They communicate via shared
 memory blocks, where each worker has an offset into a shared memory array.

 That's all fine until the postmaster crashes and restarts, zeroing
 shared memory. The dynamic background workers are killed by the
 postmaster, but *not* unregistered. Workers only get unregistered if
 they exit with exit code 0, which isn't the case when they're killed, or
 when their restart interval is BGW_NEVER_RESTART.

Maybe it would be best to make the per-database workers BGW_NEVER_RESTART
and have the static bgworker, rather than the postmaster, be
responsible for starting them.  Then the fix mentioned above would
suffice.

If that's not good for some reason, my second choice is adding a
BGWORKER_UNREGISTER_AFTER_CRASH flag.  That seems much simpler and
less cumbersome than your other proposal.
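
For reference, this is roughly where the choice would show up when registering a dynamic worker. A sketch against the existing 9.4-era API; BGWORKER_UNREGISTER_AFTER_CRASH is the flag *proposed* in this thread, not an existing one, and the library/function names are placeholders:

	BackgroundWorker worker;
	BackgroundWorkerHandle *handle;

	memset(&worker, 0, sizeof(worker));
	snprintf(worker.bgw_name, BGW_MAXLEN, "my worker");
	worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
					/* proposed: | BGWORKER_UNREGISTER_AFTER_CRASH */
	worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
	worker.bgw_restart_time = BGW_NEVER_RESTART;	/* or a restart interval */
	snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_extension");
	snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");
	worker.bgw_notify_pid = MyProcPid;

	if (!RegisterDynamicBackgroundWorker(&worker, &handle))
		ereport(ERROR,
				(errmsg("could not register background worker")));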

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Commitfest problems

2014-12-15 Thread Bruce Momjian
On Sun, Dec 14, 2014 at 05:21:06PM +, Mark Cave-Ayland wrote:
 I should add here that the QEMU folk do tend to go to great lengths to
 preserve bisectability; often intermediate compatibility APIs are
 introduced early in the patchset and then removed at the very end when
 the final feature is implemented.

I agree with Tom on this, and I want to point out that certain software
projects benefit more from modularized development than others, e.g.
QEMU is an interface library and therefore probably does things in a
more modular way than usual.  For example, they are probably not adding
new SQL commands or configuration settings in the same way Postgres
does.  It would be interesting to compare the directory span of a
typical Postgres patch vs. a QEMU or Linux kernel one.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Join push-down support for foreign tables

2014-12-15 Thread Robert Haas
On Mon, Dec 15, 2014 at 3:40 AM, Shigeru Hanada
shigeru.han...@gmail.com wrote:
 I'm working on $SUBJECT and would like to get comments about the
 design.  Attached patch is for the design below.

I'm glad you are working on this.

 1. Join source relations
 As described above, postgres_fdw (and most of SQL-based FDWs) needs to
 check that 1) all foreign tables in the join belong to a server, and
 2) all foreign tables have same checkAsUser.
 In addition to that, I add extra limitation that both inner/outer
 should be plain foreign tables, not a result of foreign join.  This
 limiation makes SQL generator simple.  Fundamentally it's possible to
 join even join relations, so N-way join is listed as enhancement item
 below.

It seems pretty important to me that we have a way to push the entire
join nest down.  Being able to push down a 2-way join but not more
seems like quite a severe limitation.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PATCH: Reducing lock strength of trigger and foreign key DDL

2014-12-15 Thread Robert Haas
I'm not sure about the rest of this but...

On Sat, Dec 13, 2014 at 3:45 PM, Andreas Karlsson andr...@proxel.se wrote:
 Is this patch
 worthwhile even without reducing the lock levels of the drop commands?

Yes!  It certainly makes more sense to reduce the lock levels where we
can do that relatively easily, and postpone work on related projects
that are harder, rather than waiting until it all seems to work before
doing anything at all.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Commitfest problems

2014-12-15 Thread Andres Freund
On 2014-12-15 11:21:03 -0500, Bruce Momjian wrote:
 On Sun, Dec 14, 2014 at 05:21:06PM +, Mark Cave-Ayland wrote:
  I should add here that the QEMU folk do tend to go to great lengths to
  preserve bisectability; often intermediate compatibility APIs are
  introduced early in the patchset and then removed at the very end when
  the final feature is implemented.
 
 I agree with Tom on this, and I want to point out that certain software
 projects benefit more from modularized development than others, e.g.
 QEMU is an interface library and therefore probably does things in a
 more modular way than usual.  For example, they are probably not adding
 new SQL commands or configuration settings in the same way Postgres
 does.

I'm not following. What do you mean with 'interface library'? I'm pretty
sure qemu very regularly adds features including configuration
settings/parameters.

 It would be interesting to compare the directory span of a
 typical Postgres patch vs. a QEMU or Linux kernel one.

I don't believe this really is a question of the type of project. I
think it's more that especially the kernel has had to deal with similar
problems at a much larger scale. And the granular approach somewhat
works for them.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Making BackgroundWorkerHandle a complete type or offering a worker enumeration API?

2014-12-15 Thread Craig Ringer
On 12/16/2014 12:12 AM, Robert Haas wrote:
 On Sat, Dec 13, 2014 at 4:13 AM, Craig Ringer cr...@2ndquadrant.com wrote:
 While working on BDR, I've run into a situation that I think highlights
 a limitation of the dynamic bgworker API that should be fixed.
 Specifically, when the postmaster crashes and restarts shared memory
 gets cleared but dynamic bgworkers don't get unregistered, and that's a
 mess.
 
 I've noticed this as well.  What I was thinking of proposing is that
 we change things so that a BGW_NEVER_RESTART worker is unregistered
 when a crash-and-restart cycle happens, but workers with any other
 restart time are retained.

Personally I need workers that get restarted, but are discarded on
crash. They access shared memory, so when shmem is cleared I need them
to be discarded too, but otherwise I wish them to be persistent
until/unless they're intentionally unregistered.

If I have to use BGW_NEVER_RESTART then I land up having to implement
monitoring of worker status and manual re-registration in a supervisor
static worker. Which is a pain, since it's duplicating work the
postmaster would otherwise just be doing for me.

I'd really rather a separate flag.

 Maybe it would be best to make the per-database workers BGW_NEVER_RESTART
 and have the static bgworker, rather than the postmaster, be
 responsible for starting them.  Then the fix mentioned above would
 suffice.

Yeah... it would, but it involves a bunch of code that duplicates
process management the postmaster already does.

More importantly, if the supervisor worker crashes / is killed it loses
its handles to the other workers and the signals they send no longer go
to the right worker. So it's not robust.

 If that's not good for some reason, my second choice is adding a
 BGWORKER_UNREGISTER_AFTER_CRASH flag.  That seems much simpler and
 less cumbersome than your other proposal.

That'd be my preference.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pgbench -f and vacuum

2014-12-15 Thread Robert Haas
On Sat, Dec 13, 2014 at 10:39 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Tatsuo Ishii is...@postgresql.org writes:
 Currently pgbench -f (run custom script) executes vacuum against
 pgbench_* tables before starting benchmarking if -n (or --no-vacuum)
 is not specified. If those tables do not exist, pgbench fails. To
 prevent this, -n must be specified. For me this behavior seems insane
 because -f does not necessarily suppose the existence of the
 pgbench_* tables.  Attached patch prevents pgbench from exiting even
 if those tables do not exist.

 I don't particularly care for this approach.  I think if we want to
 do something about this, we should just make -f imply -n.  Although
 really, given the lack of complaints so far, it seems like people
 manage to deal with this state of affairs just fine.  Do we really
 need to do anything?

I would vote for changing nothing.  If we make -f imply -n, then what
happens if you have a script which is a slight variant of the default
script and you *don't* want -n?  Then we'll have to add yet another
pgbench option to select the default behavior, and I don't know that
the marginal usability gain of getting to leave out -n sometimes would
be enough to justify having to remember another switch.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Something is broken in logical decoding with CLOBBER_CACHE_ALWAYS

2014-12-15 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2014-12-15 10:15:30 -0500, Tom Lane wrote:
 The CLOBBER_CACHE_ALWAYS buildfarm members occasionally fail in
 contrib/test_decoding with
  TRAP: FailedAssertion(!(((bool)((relation)->rd_refcnt == 0))), File: "relcache.c", Line: 1981)

 Without catchup invalidations and/or an outside reference to a relation
 that's normally not a problem because it won't get reloaded from the
 caches at an inappropriate time while invisible. Until a few weeks ago
 there was no regression test covering that case which is why these
 crashes are only there now.

I've always thought that this whole idea of allowing the caches to contain
stale information was a Rube Goldberg plan that was going to bite you on
the ass eventually.  This case isn't doing anything to increase my faith
in it, and the proposed patch seems like just another kluge on top of
a rickety stack.

I think the safest fix would be to defer catchup interrupt processing
while you're in this mode.  You don't really want to be processing any
remote sinval messages at all, I'd think.

regards, tom lane




Re: [HACKERS] Making BackgroundWorkerHandle a complete type or offering a worker enumeration API?

2014-12-15 Thread Robert Haas
On Mon, Dec 15, 2014 at 11:28 AM, Craig Ringer cr...@2ndquadrant.com wrote:
 If that's not good for some reason, my second choice is adding a
 BGWORKER_UNREGISTER_AFTER_CRASH flag.  That seems much simpler and
 less cumbersome than your other proposal.

 That'd be my preference.

OK, let's do that, then.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Making BackgroundWorkerHandle a complete type or offering a worker enumeration API?

2014-12-15 Thread Craig Ringer
On 12/16/2014 12:31 AM, Robert Haas wrote:
 On Mon, Dec 15, 2014 at 11:28 AM, Craig Ringer cr...@2ndquadrant.com wrote:
  If that's not good for some reason, my second choice is adding a
  BGWORKER_UNREGISTER_AFTER_CRASH flag.  That seems much simpler and
  less cumbersome than your other proposal.
 
  That'd be my preference.
 OK, let's do that, then.

Right-o.

I had an earlier patch that added unregistration on exit(0) and also
added a flag like this. Only the first part got committed. I'll
resurrect it and rebase it on top of master.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [PATCH] HINT: pg_hba.conf changed since last config reload

2014-12-15 Thread Alex Shulgin
Peter Eisentraut pete...@gmx.net writes:

 On 10/16/14 11:34 PM, Craig Ringer wrote:
 psql: FATAL:  Peer authentication failed for user "fred"
 HINT:  See the server error log for additional information.

 I think this is wrong for many reasons.

 I have never seen an authentication system that responds with, "hey, what
 you just did didn't get you in, but the administrators are currently in
 the process of making a configuration change, so why don't you check
 that out."

 We don't know whether the user has access to the server log.  They
 probably don't.  Also, it is vastly more likely that the user really
 doesn't have access in the way they chose, so throwing in irrelevant
 hints will be distracting.

 Moreover, it will be confusing to regular users if this message
 sometimes shows up and sometimes doesn't, independent of their own state
 and actions.

 Finally, the fact that a configuration change is in progress is
 privileged information.  Unprivileged users can deduce from the presence
 of this message that administrators are doing something, and possibly
 that they have done something wrong.

 I think it's fine to log a message in the server log if the pg_hba.conf
 file needs reloading.  But the client shouldn't know about this at all.

These are all valid concerns IMHO.

Attached is the modified version of the original patch by Craig,
addressing the handling of the new hint_log error data field and
removing the client-side HINT.

I'm also moving this to the current CF.

--
Alex

From 702e0ac31f3d8023ad8c064d90bdf5a8441fddea Mon Sep 17 00:00:00 2001
From: Craig Ringer cr...@2ndquadrant.com
Date: Fri, 17 Oct 2014 11:18:18 +0800
Subject: [PATCH 1/2] Add an errhint_log, akin to errdetail_log

This allows a different HINT to be sent to the server error log
and to the client, which will be useful where there's security
sensitive information that's more appropriate for a HINT than
a DETAIL message.
---
 src/backend/utils/error/elog.c | 59 --
 src/include/utils/elog.h   |  7 +
 2 files changed, 53 insertions(+), 13 deletions(-)

diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c
new file mode 100644
index 2316464..da55207
*** a/src/backend/utils/error/elog.c
--- b/src/backend/utils/error/elog.c
*** errfinish(int dummy,...)
*** 503,508 
--- 503,510 
  		pfree(edata->detail_log);
  	if (edata->hint)
  		pfree(edata->hint);
+ 	if (edata->hint_log)
+ 		pfree(edata->hint_log);
  	if (edata->context)
  		pfree(edata->context);
  	if (edata->schema_name)
*** errhint(const char *fmt,...)
*** 1015,1020 
--- 1017,1042 
  	return 0;	/* return value does not matter */
  }
  
+ /*
+  * errhint_log --- add a hint_log error message text to the current error
+  */
+ int
+ errhint_log(const char *fmt,...)
+ {
+ 	ErrorData  *edata = &errordata[errordata_stack_depth];
+ 	MemoryContext oldcontext;
+ 
+ 	recursion_depth++;
+ 	CHECK_STACK_DEPTH();
+ 	oldcontext = MemoryContextSwitchTo(edata->assoc_context);
+ 
+ 	EVALUATE_MESSAGE(edata->domain, hint_log, false, true);
+ 
+ 	MemoryContextSwitchTo(oldcontext);
+ 	recursion_depth--;
+ 	return 0;	/* return value does not matter */
+ }
+ 
  
  /*
   * errcontext_msg --- add a context error message text to the current error
*** CopyErrorData(void)
*** 1498,1503 
--- 1520,1527 
  		newedata->detail_log = pstrdup(newedata->detail_log);
  	if (newedata->hint)
  		newedata->hint = pstrdup(newedata->hint);
+ 	if (newedata->hint_log)
+ 		newedata->hint_log = pstrdup(newedata->hint_log);
  	if (newedata->context)
  		newedata->context = pstrdup(newedata->context);
  	if (newedata->schema_name)
*** FreeErrorData(ErrorData *edata)
*** 1536,1541 
--- 1560,1567 
  		pfree(edata->detail_log);
  	if (edata->hint)
  		pfree(edata->hint);
+ 	if (edata->hint_log)
+ 		pfree(edata->hint_log);
  	if (edata->context)
  		pfree(edata->context);
  	if (edata->schema_name)
*** ThrowErrorData(ErrorData *edata)
*** 1607,1612 
--- 1633,1640 
  		newedata->detail_log = pstrdup(edata->detail_log);
  	if (edata->hint)
  		newedata->hint = pstrdup(edata->hint);
+ 	if (edata->hint_log)
+ 		newedata->hint_log = pstrdup(edata->hint_log);
  	if (edata->context)
  		newedata->context = pstrdup(edata->context);
  	if (edata->schema_name)
*** ReThrowError(ErrorData *edata)
*** 1669,1674 
--- 1697,1704 
  		newedata->detail_log = pstrdup(newedata->detail_log);
  	if (newedata->hint)
  		newedata->hint = pstrdup(newedata->hint);
+ 	if (newedata->hint_log)
+ 		newedata->hint_log = pstrdup(newedata->hint_log);
  	if (newedata->context)
  		newedata->context = pstrdup(newedata->context);
  	if (newedata->schema_name)
*** write_csvlog(ErrorData *edata)
*** 2710,2717 
  		appendCSVLiteral(&buf, edata->detail);
  	appendStringInfoChar(&buf, ',');
  
! 	/* errhint */
! 	appendCSVLiteral(&buf, edata->hint);
  	appendStringInfoChar(&buf, ',');
  
  	/* internal query */
--- 
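
With the patch applied, a call site can then route a sensitive hint to the server log only. An illustrative use of the new errhint_log; the surrounding message text and variable are hypothetical, not taken from the patch:

	ereport(FATAL,
			(errcode(ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION),
			 errmsg("password authentication failed for user \"%s\"", user),
			 /* goes to the server log only; the client sees no hint */
			 errhint_log("pg_hba.conf has been changed since last reload")));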

Re: [HACKERS] Role Attribute Bitmask Catalog Representation

2014-12-15 Thread Adam Brightwell
All,

Overall, I'm pretty happy with the patch and would suggest moving on to
 writing up the documentation changes to go along with the code changes.
 I'll continue to play around with it but it all seems pretty clean to
 me and will allow us to easily add the additional role attributes being
 discussed.


I have attached an updated patch with initial documentation changes for
review.

Thanks,
Adam

-- 
Adam Brightwell - adam.brightw...@crunchydatasolutions.com
Database Engineer - www.crunchydatasolutions.com
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
new file mode 100644
index 9ceb96b..b0b4fca
*** a/doc/src/sgml/catalogs.sgml
--- b/doc/src/sgml/catalogs.sgml
***
*** 1391,1441 
   /row
  
   row
!   entrystructfieldrolsuper/structfield/entry
!   entrytypebool/type/entry
entryRole has superuser privileges/entry
   /row
  
   row
!   entrystructfieldrolinherit/structfield/entry
!   entrytypebool/type/entry
!   entryRole automatically inherits privileges of roles it is a
!member of/entry
   /row
  
   row
!   entrystructfieldrolcreaterole/structfield/entry
!   entrytypebool/type/entry
entryRole can create more roles/entry
   /row
  
   row
!   entrystructfieldrolcreatedb/structfield/entry
!   entrytypebool/type/entry
entryRole can create databases/entry
   /row
  
   row
!   entrystructfieldrolcatupdate/structfield/entry
!   entrytypebool/type/entry
entry
!Role can update system catalogs directly.  (Even a superuser cannot do
!this unless this column is true)
/entry
   /row
  
   row
!   entrystructfieldrolcanlogin/structfield/entry
!   entrytypebool/type/entry
entry
!Role can log in. That is, this role can be given as the initial
!session authorization identifier
/entry
   /row
  
   row
!   entrystructfieldrolreplication/structfield/entry
!   entrytypebool/type/entry
entry
 Role is a replication role. That is, this role can initiate streaming
 replication (see xref linkend=streaming-replication) and set/unset
--- 1391,1493 
   /row
  
   row
!   entrystructfieldrolattr/structfield/entry
!   entrytypebigint/type/entry
!   entry
!Role attributes; see xref linkend=sql-createrole for details
!   /entry
!  /row
! 
!  row
!   entrystructfieldrolconnlimit/structfield/entry
!   entrytypeint4/type/entry
!   entry
!For roles that can log in, this sets maximum number of concurrent
!connections this role can make.  -1 means no limit.
!   /entry
!  /row
! 
!  row
!   entrystructfieldrolpassword/structfield/entry
!   entrytypetext/type/entry
!   entry
!Password (possibly encrypted); null if none.  If the password
!is encrypted, this column will begin with the string literalmd5/
!followed by a 32-character hexadecimal MD5 hash.  The MD5 hash
!will be of the user's password concatenated to their user name.
!For example, if user literaljoe/ has password literalxyzzy/,
!productnamePostgreSQL/ will store the md5 hash of
!literalxyzzyjoe/.  A password that does not follow that
!format is assumed to be unencrypted.
!   /entry
!  /row
! 
!  row
!   entrystructfieldrolvaliduntil/structfield/entry
!   entrytypetimestamptz/type/entry
!   entryPassword expiry time (only used for password authentication);
!null if no expiration/entry
!  /row
! /tbody
!/tgroup
!   /table
! 
!   table id=catalog-rolattr-bitmap-table
!titlestructfieldrolattr/ bitmap positions/title
! 
!tgroup cols=3
! thead
!  row
!   entryPosition/entry
!   entryAttribute/entry
!   entryDescription/entry
!  /row
! /thead
! 
! tbody
!  row
!   entryliteral0/literal/entry
!   entrySuperuser/entry
entryRole has superuser privileges/entry
   /row
  
   row
!   entryliteral1/literal/entry
!   entryInherit/entry
!   entryRole automatically inherits privileges of roles it is a member of/entry
   /row
  
   row
!   entryliteral2/literal/entry
!   entryCreate Role/entry
entryRole can create more roles/entry
   /row
  
   row
!   entryliteral3/literal/entry
!   entryCreate DB/entry
entryRole can create databases/entry
   /row
  
   row
!   entryliteral4/literal/entry
!   entryCatalog Update/entry
entry
!Role can update system catalogs directly.  (Even a superuser cannot do this unless this column is true)
/entry
   /row
  
   row
!   entryliteral5/literal/entry
!   entryCan Login/entry
entry
!Role can log in. That is, this role can be given as the initial session authorization identifier
/entry
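
For illustration, testing a bit in the proposed rolattr bitmap would look something like this. The macro and function names are hypothetical; the bit positions follow the table above:

	#define ROLE_ATTR_SUPERUSER		(UINT64CONST(1) << 0)
	#define ROLE_ATTR_INHERIT		(UINT64CONST(1) << 1)
	#define ROLE_ATTR_CREATEROLE	(UINT64CONST(1) << 2)
	#define ROLE_ATTR_CREATEDB		(UINT64CONST(1) << 3)
	#define ROLE_ATTR_CATUPDATE		(UINT64CONST(1) << 4)
	#define ROLE_ATTR_CANLOGIN		(UINT64CONST(1) << 5)

	/* Sketch: does this role have the given attribute? */
	static inline bool
	role_has_attribute(int64 rolattr, int64 attribute)
	{
		return (rolattr & attribute) != 0;
	}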
  

Re: [HACKERS] Doing better at HINTing an appropriate column within errorMissingColumn()

2014-12-15 Thread Robert Haas
On Sun, Dec 14, 2014 at 8:24 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
 On Tue, Dec 9, 2014 at 2:52 AM, Peter Geoghegan p...@heroku.com wrote:
 On Mon, Dec 8, 2014 at 9:43 AM, Peter Geoghegan p...@heroku.com wrote:
 I think it's very possible that the wrong alias may be provided by the
 user, and that we should consider that when providing a hint.

 Note that the existing mechanism (the mechanism that I'm trying to
 improve) only ever shows this error message:

 There is a column named "%s" in table "%s", but it cannot be
 referenced from this part of the query.

 I think it's pretty clear that this general class of user error is common.
 Moving this patch to CF 2014-12 as work is still going on, note that
 it is currently marked with Robert as reviewer and that its current
 status is "Needs review".

The status here is more like waiting around to see if anyone else has
an opinion.  The issue is what should happen when you enter qualified
name like alvaro.herrera and there is no column named anything like
herrara in the RTE named alvaro, but there is some OTHER RTE that
contains a column with a name that is only a small Levenshtein
distance away from herrera, like roberto.correra.  The questions are:

1. Should we EVER give a you-might-have-meant hint in a case like this?
2. If so, does it matter whether the RTE name is just a bit different
from the actual RTE or whether it's completely different?  In other
words, might we skip the hint in the above case but give one for
alvara.correra?

My current feeling is that we should answer #1 no, but Peter prefers
to answer it yes.  My further feeling is that if we do decide to say
yes to #1, then I would answer #2 as yes also, but Peter would
answer it no, assigning a fixed penalty for a mismatched RTE rather
than one that varies by the Levenshtein distance between the RTEs.

If no one else expresses an opinion, I'm going to insist on doing it
my way, but I'm happy to have other people weigh in.

Thanks,

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Heikki Linnakangas

On 12/15/2014 05:22 PM, Alexander Korotkov wrote:

On Mon, Dec 15, 2014 at 6:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:


Alexander Korotkov aekorot...@gmail.com writes:

On Mon, Dec 15, 2014 at 4:12 PM, Heikki Linnakangas 

hlinnakan...@vmware.com

wrote:
Right. I also looked at it briefly, but I wasn't sure if we really want
it. AFAICT, no-one has actually asked for that operator, it was written
only to be an example of an operator that would benefit from the

knn-gist

with recheck patch.



Lack of recheck is major limitation of KNN-GiST now. People are not

asking

for that because they don't know what is needed to implement exact KNN

for

PostGIS. Now they have to invent kluges like this:
[ query using ORDER BY ST_Distance ]


It's not apparent to me that the proposed operator is a replacement for
ST_Distance.  The underlying data in an example like this won't be either
points or polygons, it'll be PostGIS datatypes.

In short, I believe that PostGIS could use what you're talking about,
but I agree with Heikki's objection that nobody has asked for this
particular operator.


polygon <-> point is for sure not an ST_Distance replacement. I was giving
this argument about KNN-GiST with recheck itself. polygon <-> point is
needed just as an in-core example of KNN-GiST with recheck.


Right. I don't think point <-> polygon is too useful by itself, but we
need an example in core that could make use of the KNN-GiST recheck patch. We
can't write a regression test for it otherwise, for starters.


Actually, we probably could've used the circle <-> polygon for that just
as well...
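
At the API level the recheck contract is simple: the ordering support function may return an underestimate as long as it flags the result as lossy. A sketch of the general shape only; the function and the box_point_distance() helper are hypothetical, and the real GiST distance interface passes these values via fmgr:

	/*
	 * Distance to the index entry's bounding box is a lower bound on
	 * the true distance, so flag the result and let the executor
	 * recheck against the heap tuple.
	 */
	static double
	my_lossy_distance(GISTENTRY *entry, Point *query, bool *recheck)
	{
		BOX		   *bbox = DatumGetBoxP(entry->key);

		*recheck = true;		/* result is only a lower bound */
		return box_point_distance(bbox, query);	/* hypothetical helper */
	}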


- Heikki




Re: [HACKERS] [PATCH] explain sortorder

2014-12-15 Thread Mike Blackwell
Initial review:

Patch applies cleanly to current head, although it appears to have
soft/hard tab and trailing space issues.

make check fails with the output below.  The expected collation clause is
not present.

--
-- Test explain feature: sort order
--
CREATE TABLE sortordertest (n1 char(1), n2 int4);
-- Insert values by which should be ordered
INSERT INTO sortordertest(n1, n2) VALUES ('d', 5), ('b', 3), ('a', 1),
('e', 2), ('c', 4);
-- Display sort order when explain analyze and verbose are true.
EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM sortordertest ORDER BY n1
COLLATE "C" DESC, n2;
   QUERY PLAN

 Sort
   Output: n1, n2, ((n1)::character(1))
   Sort Key: sortordertest.n1, sortordertest.n2
   Sort Order:  ASC NULLS LAST,  ASC NULLS LAST
   ->  Seq Scan on public.sortordertest
 Output: n1, n2, n1
(6 rows)

DROP TABLE sortordertest;


__
Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com


Re: [HACKERS] Something is broken in logical decoding with CLOBBER_CACHE_ALWAYS

2014-12-15 Thread Andres Freund
On 2014-12-15 11:30:40 -0500, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  On 2014-12-15 10:15:30 -0500, Tom Lane wrote:
  The CLOBBER_CACHE_ALWAYS buildfarm members occasionally fail in
  contrib/test_decoding with
   TRAP: FailedAssertion(!(((bool)((relation)->rd_refcnt == 0))), File: "relcache.c", Line: 1981)
 
  Without catchup invalidations and/or an outside reference to a relation
  that's normally not a problem because it won't get reloaded from the
  caches at an inappropriate time while invisible. Until a few weeks ago
  there was no regression test covering that case which is why these
  crashes are only there now.
 
 I've always thought that this whole idea of allowing the caches to contain
 stale information was a Rube Goldberg plan that was going to bite you on
 the ass eventually.

I guess we'll see. I don't think this specific case is that dramatic.
The complication here is that there's a reference to a relation from
outside logical decoding - that's not something actually likely to be
used at least by replication solutions. It's also impossible to hit from
the walsender interface.

We could just prohibit referencing relations like that... The SQL
interface primarily is used for regression tests and discovery of the
feature. At least as long as there's no way to do streaming from SQL, which
I can't really see how to do.

 This case isn't doing anything to increase my faith
 in it, and the proposed patch seems like just another kluge on top of
 a rickety stack.

Another possibility would be to replace
else if (!IsTransactionState())
by
else if (!IsTransactionState() || HistoricSnapshotActive())

as that's pretty much what we need - invalidations during decoding happen
either outside of a valid transaction state or when entries aren't
referenced anyway.

Btw, if ever hit, that code path would Assert() out without logical
decoding as well - it's not allowed to Destroy() a relation that's still
referenced as that assert shows. It's an especially bad idea if the
reference is actually from outside a subxact and the error is caught. I
think we should just remove the Destroy() call - the normal refcount
and/or resowners will take care of deleting the entry later.

 I think the safest fix would be to defer catchup interrupt processing
 while you're in this mode.  You don't really want to be processing any
 remote sinval messages at all, I'd think.

Well, we need to do relmap, smgr and similar things. So I think that'd
be more complicated than we want.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Doing better at HINTing an appropriate column within errorMissingColumn()

2014-12-15 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Sun, Dec 14, 2014 at 8:24 PM, Michael Paquier
 michael.paqu...@gmail.com wrote:
 Moving this patch to CF 2014-12 as work is still going on, note that
 it is currently marked with Robert as reviewer and that its current
 status is Needs review.

 The status here is more like waiting around to see if anyone else has
 an opinion.  The issue is what should happen when you enter qualified
 name like alvaro.herrera and there is no column named anything like
 herrara in the RTE named alvaro, but there is some OTHER RTE that
 contains a column with a name that is only a small Levenshtein
 distance away from herrera, like roberto.correra.  The questions are:

 1. Should we EVER give a you-might-have-meant hint in a case like this?
 2. If so, does it matter whether the RTE name is just a bit different
 from the actual RTE or whether it's completely different?  In other
 words, might we skip the hint in the above case but give one for
 alvara.correra?

It would be astonishingly silly to not care about the RTE name's distance,
if you ask me.  This is supposed to detect typos, not thinkos.

I think there might be some value in a separate heuristic that, when
you typed foo.bar and that doesn't match but there is a baz.bar, suggests
that maybe you meant baz.bar, even if baz is not close typo-wise.  This
would be addressing the thinko case not the typo case, so the rules ought
to be quite different --- in particular I doubt that it'd be a good idea
to hint this way if the column names don't match exactly.  But in any
case the key point is that this is a different heuristic addressing a
different failure mode.  We should not try to make the
levenshtein-distance heuristic address that case.

So my two cents is that when considering a qualified name, this patch
should take levenshtein distance across the two components equally.
There's no good reason to suppose that typos will attack one name
component more (nor less) than the other.
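
In code terms, that amounts to something like the following (an illustrative sketch; levenshtein() stands in for whatever distance function the patch actually uses):

	/*
	 * Score a candidate rte.col against the typed alias.column by
	 * summing the distances of both components, so neither the alias
	 * nor the column name is privileged.
	 */
	static int
	qualified_name_distance(const char *alias, const char *rte_name,
							const char *col, const char *col_name)
	{
		return levenshtein(alias, rte_name) + levenshtein(col, col_name);
	}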

regards, tom lane




Re: [HACKERS] Status of CF 2014-10 and upcoming 2014-12

2014-12-15 Thread Robert Haas
On Sun, Dec 14, 2014 at 10:58 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
 PS: Could someone close CF 2014-10 btw?)

Done, and I marked 2014-12 in progress.  I would give you access, but
I can't seem to ssh into commitfest.postgresql.org any more.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [Bug] Inconsistent result for inheritance and FOR UPDATE.

2014-12-15 Thread Tom Lane
Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes:
 (2014/12/13 1:17), Tom Lane wrote:
 We should
 probably also think about allowing FDWs to change these settings if
 they want to.

 This is not clear to me.  Maybe I'm missing something, but I think that
 the FDW only needs to look at the original locking strength in
 GetForeignPlan().  Please explain that in a little more detail.

Well, the point is that for postgres_fdw, we could consider using the
same locked-row-identification methods as for local tables, ie CTID.
Now admittedly this might be the only such case, but I'm not entirely
convinced of that --- you could imagine using FDWs for many of the same
use-cases that KaiGai-san has been proposing custom scans for, and
in some of those cases CTIDs would be useful for row IDs.

We'd originally dismissed this on the argument that ROWMARK_REFERENCE
is a cheaper implementation than CTID-based implementations for any
FDW (since it avoids the possible need to fetch a remote row twice).
I'm not sure I still believe that though.  Fetching all columns of all
retrieved rows in order to avoid what might be zero or a small number of
re-fetches is not obviously a win, especially not for FDWs that
represent not-actually-remote resources.

So as long as we're revisiting this area, it might be worth thinking
about letting an FDW have some say in which ROWMARK method is selected
for its tables.

regards, tom lane




Re: [HACKERS] Commitfest problems

2014-12-15 Thread Mark Cave-Ayland
On 15/12/14 16:28, Andres Freund wrote:

 I don't believe this really is a question of the type of project. I
 think it's more that especially the kernel has had to deal with similar
 problems at a much larger scale. And the granular approach somewhat
 works for them.

Correct. My argument was that dividing patches into smaller, more
reviewable chunks lessens the barrier for reviewers since many people
can review the individual patches as appropriate to their area of
expertise.

The benefits of this are that the many parts of the patchset get
reviewed early by a number of people, which reduces the workload on the
core developers as they only need to focus on a small number of
individual patches. Hence patches get reworked and merged much more
quickly in this way.

This is in contrast to the commitfest system where a single patch i) is
often held until the next commitfest (where bitrot often sets in) and
ii) requires the reviewer to have good knowledge of all the areas
covered by the patch in order to give a meaningful review, which
significantly reduces the pool of available reviewers.


ATB,

Mark.





Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Alexander Korotkov
On Mon, Dec 15, 2014 at 7:47 PM, Heikki Linnakangas hlinnakan...@vmware.com
 wrote:

 On 12/15/2014 05:22 PM, Alexander Korotkov wrote:

 On Mon, Dec 15, 2014 at 6:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:


 Alexander Korotkov aekorot...@gmail.com writes:

 On Mon, Dec 15, 2014 at 4:12 PM, Heikki Linnakangas 

 hlinnakan...@vmware.com

 wrote:
 Right. I also looked at it briefly, but I wasn't sure if we really want
 it. AFAICT, no-one has actually asked for that operator, it was written
 only to be an example of an operator that would benefit from the

 knn-gist

 with recheck patch.


  Lack of recheck is major limitation of KNN-GiST now. People are not

 asking

 for that because they don't know what is needed to implement exact KNN

 for

 PostGIS. Now they have to invent kluges like this:
 [ query using ORDER BY ST_Distance ]


 It's not apparent to me that the proposed operator is a replacement for
 ST_Distance.  The underlying data in an example like this won't be either
 points or polygons, it'll be PostGIS datatypes.

 In short, I believe that PostGIS could use what you're talking about,
 but I agree with Heikki's objection that nobody has asked for this
 particular operator.


 polygon <-> point is for sure not an ST_Distance replacement. I was giving
 this argument about KNN-GiST with recheck itself. polygon <-> point is
 needed just as an in-core example of KNN-GiST with recheck.


  Right. I don't think point <-> polygon is too useful by itself, but we
  need an example in core that could make use of the KNN-GiST recheck patch. We
 can't write a regression test for it otherwise, for starters.

  Actually, we probably could've used the circle <-> polygon for that just
 as well...


Did you mean searching for circles or polygons in the last sentence?

--
With best regards,
Alexander Korotkov.


Re: [HACKERS] WIP: dynahash replacement for buffer table

2014-12-15 Thread Robert Haas
On Sun, Nov 9, 2014 at 3:58 PM, Andres Freund and...@2ndquadrant.com wrote:
  * There's several cases where it's noticeable that chash creates more
cacheline misses - that's imo a good part of why the single threaded
performance regresses. There's good reasons for the current design
without an inline bucket, namely that it makes the concurrency handling
easier. But I think that can be countered by still storing a pointer -
just to an element inside the bucket. Afaics that'd allow this to be
introduced quite easily?

 It's not obvious to me how that would work.  If you just throw those
 elements on the garbage lists with everything else, it will soon be
 the case that one bucket header ends up using the cell stored in some
 other bucket, which isn't going to be awesome.  So you need some kind
 of more complex scheme to preserve locality.

 Treating the element inside the bucket as a kind of one element freelist
 seems quite workable to me. Test whether it's marked deleted, scan the
 hazard pointer list to see whether it can be used. If yes, grand. If
 not, go to the current code. The fact that the element is cacheline
 local will make the test for its deletedness almost free. Yes, the
 hazard pointer scan surely ain't free - but at least for cases like
 bufmgr where reads are never less frequent than writes and very often
 *much* more frequent I'm pretty sure it'd be a noticeable win.

Maybe. I'd expect that to radically increase the time spent doing
hazard pointer scans.  The performance of this system depends heavily
on garbage collection not happening too often, and ISTR that the
performance changes significantly if you vary the amount of
overallocation.

 I'm not sure.  We're talking about something on the order of half a
 percent on my tests, and lots of changes cause things to bounce around
 by that much.  I'm not sure it makes sense to get too worked up about
 that if the alternative is to add a big pile of new complexity.

 I saw things in the range of 3-4% on my laptop.

Mrmpf.  Well, that sucks.

  * I don't understand right now (but then I'm in a Hotel bar) why it's
safe for CHashAddToGarbage() to clobber the hash value?
CHashBucketScan() relies on the chain being ordered by hashcode. No?
Another backend might just have been past the IsMarked() bit and look
at the hashcode?

 I think the worst case scenario is that we falsely conclude that
 there's a hash match when there really isn't.  The ensuing memcmp()
 will clarify matters.

 Hm. Let me think about it for a while.

 I was thinking that a spurious cmp > 0 could cause us to miss a
 match - but that could only happen if the match has just been removed
 from the list. Which is fine. Perhaps that case deserves to be mentioned
 in the comment?

Maybe.  I mean, the general principle here is that there may be some
difference between the apparent order of execution on one CPU as
opposed to another, and the code that uses the hash table has to be OK
with that - at least, unless it has independent methods of assuring
that things happen in the right order, but in that case that other
thing may well become the bottleneck.  This is just one example of
that.  You might also fail to see an insert that's just happened on
some other node but is not committed to main memory yet, which is not
really an issue we need to fix; it's just how things are.  A general
discussion of this somewhere might be worthwhile, but it's pretty much
the same as any other highly-concurrent hashtable implementation you'll
find out there.

(It's also not really different from surrounding the hash table
operation, and only the hash table operation, with a big lock.  Then
things can't change while you are scanning the bucket list, but they
can change by the time you can do anything with the returned value,
which is the same problem we have to cope with here.)

 * Another thing I'm wondering about here is whether it's possible that
 somebody is currently walking an older version of the bucket list
 (i.e. is currently at an already unlinked element) and then visits a
 newly inserted element with an 'apparently' out of order hash
 value. That seems possible because the inserter might not have seen the
 unlinked element. Afaics that's not a problem for searches - but I'm not
 sure whether it couldn't cause a problem for concurrent inserts and
 deletes.

This is why the delete-mark bit has to be part of the same atomic word
as the next-pointer.  If somebody applies a delete-mark, a subsequent
attempt to insert an entry at that position will fail because the
CAS() of the next-word won't find the right value there - it will find
a delete-marked value, which will never be the value it's expecting.
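
To make that concrete, here is a small self-contained C11 sketch (illustrative
only; chash.c itself uses PostgreSQL's pg_atomic primitives and different
naming) of packing the delete-mark into the low bit of the next-pointer word.
An insert CAS against a marked word necessarily fails, because the expected
value never carries the mark:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct Node
{
    _Atomic uintptr_t next;     /* next-pointer with mark in the low bit */
    int               key;
} Node;

#define MARK_BIT ((uintptr_t) 1)

/* Insert newnode after prev, expecting prev->next to still be "expected". */
static bool
insert_after(Node *prev, uintptr_t expected, Node *newnode)
{
    atomic_store(&newnode->next, expected);
    /*
     * If prev was concurrently delete-marked, prev->next now holds
     * expected | MARK_BIT, which can never equal "expected", so the CAS
     * fails and the inserter must rescan.
     */
    return atomic_compare_exchange_strong(&prev->next, &expected,
                                          (uintptr_t) newnode);
}

/* Set the delete-mark on node's next-word. */
static void
mark_deleted(Node *node)
{
    uintptr_t w = atomic_load(&node->next);

    while (!(w & MARK_BIT) &&
           !atomic_compare_exchange_weak(&node->next, &w, w | MARK_BIT))
        ;                       /* w is refreshed on CAS failure */
}

int
main(void)
{
    Node a = {.key = 1}, b = {.key = 2};

    atomic_store(&a.next, (uintptr_t) 0);
    mark_deleted(&a);
    printf("insert after delete-marked node: %s\n",
           insert_after(&a, (uintptr_t) 0, &b) ? "succeeded" : "failed");
    return 0;
}

Running it reports the insert as failed, which is exactly the property the
paragraph above relies on.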

 * One thing I noticed while looking at that part of code is:
 +   h = target_node->un.hashcode;
 +   if (h == hashcode)
 +   cmp = memcmp(CHashNodeGetItem(target_node), key,
 +  

Re: [HACKERS] Logical Replication Helpers WIP for discussion

2014-12-15 Thread Robert Haas
On Mon, Dec 15, 2014 at 12:57 AM, Petr Jelinek p...@2ndquadrant.com wrote:
 we've made a few helper functions for making logical replication easier. I
 bundled them into a contrib module as this is mainly for discussion at this
 time (I don't expect this to get committed any time soon, but it is a good
 way to iron out the protocol, etc).

 I created a sample logical decoding plugin that uses those functions and
 which can be used for passing DML changes in a (hopefully) platform/version
 independent format.

 I will also post a sample apply bgworker once I get some initial feedback
 about this.

 It's hard to write tests for this as the binary changes contain transaction
 ids and timestamps so the data changes constantly.

 This is of course based on the BDR work Andres, Craig and myself have been
 doing.

I can't understand, either from what you've written here or the rather
sparse comments in the patch, what this might be good for.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-15 Thread Robert Haas
On Sat, Dec 13, 2014 at 9:36 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
 Something to be aware of btw is that this patch introduces an
 additional 8 bytes per block image in WAL as it contains additional
 information to control the compression. In this case this is the
 uint16 compress_len present in XLogRecordBlockImageHeader. In the case
 of the measurements done, knowing that 63638 FPWs have been written,
 there is a difference of a bit less than 500k in WAL between HEAD and
 FPW off in favor of HEAD. The gain with compression is welcome; still,
 for the default there is a small price to pay to track whether a block
 is compressed or not. This patch still takes advantage of it by not
 compressing the hole present in the page, reducing CPU work a bit.
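
For readers following along, a minimal sketch of that hole-skipping idea,
under the assumption that hole_offset/hole_length come from the page header
(pd_lower/pd_upper); names are generic, not the patch's actual code:

#include <stdint.h>
#include <string.h>

#define BLCKSZ 8192

/*
 * Copy "page" into "dest" with the unused hole removed, and return the
 * number of bytes that actually need to be compressed/stored.
 */
static int
remove_page_hole(const char *page, char *dest,
                 uint16_t hole_offset, uint16_t hole_length)
{
    /* page header and line pointers, up to the start of the hole */
    memcpy(dest, page, hole_offset);
    /* tuple data after the hole */
    memcpy(dest + hole_offset,
           page + hole_offset + hole_length,
           BLCKSZ - (hole_offset + hole_length));
    return BLCKSZ - hole_length;
}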

That sounds like a pretty serious problem to me.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-15 Thread Jeff Janes
On Sat, Dec 13, 2014 at 1:37 AM, Craig Ringer cr...@2ndquadrant.com wrote:

 On 12/12/2014 06:02 AM, Josh Berkus wrote:
 
  Speaking as the originator of commitfests, they were *always* intended
  to be a temporary measure, a step on the way to something else like
  continuous integration.

 I'd really like to see the project revisit some of the underlying
 assumptions that're being made in this discussion:

 - Patches must be email attachments to a mailing list

 - Changes must be committed by applying a diff

 ... and take a look at some of the options a git-based workflow might
 offer, especially in combination with some of the tools out there that
 help track working branches, run CI, etc.

 Having grown used to push/pull workflows with CI integration I find the
 PostgreSQL patch workflow very frustrating, especially for larger
 patches. It's particularly annoying to see a patch series squashed into
 a monster patch whenever it changes hands or gets rebased, because it's
 being handed around as a great honking diff not a git working branch.

 Is it time to stop using git like CVS?


Perhaps it is just my inexperience with it, but I find the way that
mediawiki, for example, uses git to be utterly baffling. The git log is
bloated with the same change being listed multiple times, at least once as
a commit and again as a merge. If your suggestion would be to go in that
direction, I don't think that would be an improvement.



Cheers,

Jeff


Re: [HACKERS] Commitfest problems

2014-12-15 Thread Peter Geoghegan
On Mon, Dec 15, 2014 at 11:52 AM, Josh Berkus j...@agliodbs.com wrote:
 On 12/15/2014 11:27 AM, Robert Haas wrote:
 I feel like we used to be better at encouraging people to participate
 in the CF even if they were not experts, and to do the best they can
 based on what they did know.  That was a helpful dynamic.  Sure, the
 reviews weren't perfect, but more people helped, and reviewing some of
 the patch well and some of it in a more cursory manner is way better
 than reviewing none of it at all.

 Well, it was strongly expressed to me by a number of senior contributors
 on this list and at the developer meeting that inexpert reviews were not
 really wanted, needed or helpful.  As such, I stopped recruiting new
 reviewers (and, for that matter, doing them myself).  I don't know if
 the same goes for anyone else.

Really? I thought we were pretty consistent in encouraging new reviewers.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-15 Thread Merlin Moncure
On Fri, Dec 12, 2014 at 8:27 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2014-12-12 09:24:27 -0500, Bruce Momjian wrote:
 On Fri, Dec 12, 2014 at 03:22:24PM +0100, Andres Freund wrote:
   Well, the larger question is why wouldn't we just have the user compress
   the entire WAL file before archiving --- why have each backend do it?
   Is it the write volume we are saving?  I thought this WAL compression
   gave better performance in some cases.
 
  Err. Streaming?

 Well, you can already set up SSL for compression while streaming.  In
 fact, I assume many are already using SSL for streaming as the majority
 of SSL overhead is from connection start.

 That's not really true. The overhead of SSL during streaming is
 *significant*. Both the kind of compression it does (which is far more
 expensive than pglz or lz4) and the encryption itself. In many cases
 it's prohibitively expensive - there's even a fair number of on-list
 reports about this.

(late to the party)
That may be true, but there are a number of ways to work around SSL
performance issues such as hardware acceleration (perhaps deferring
encryption to another point in the network), weakening the protocol,
or not using it at all.

OTOH, our built-in compressor, as we all know, is a complete dog in
terms of CPU when stacked up against some more modern implementations.
All that said, as long as there is a clean path to migrating to
another compression alg should one materialize, that problem can be
nicely decoupled from this patch as Robert pointed out.

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench -f and vacuum

2014-12-15 Thread Andres Freund
On 2014-12-15 10:55:30 -0800, Jeff Janes wrote:
 On Sat, Dec 13, 2014 at 7:39 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 
  Tatsuo Ishii is...@postgresql.org writes:
   Currently pgbench -f (run custom script) executes vacuum against
   pgbench_* tables before stating bench marking if -n (or --no-vacuum)
   is not specified. If those tables do not exist, pgbench fails. To
   prevent this, -n must be specified. For me this behavior seems insane
   because -f does not necessarily suppose the existence of the
   pgbench_* tables.  Attached patch prevents pgbench from exiting even
   if those tables do not exist.
 
  I don't particularly care for this approach.  I think if we want to
  do something about this, we should just make -f imply -n.  Although
  really, given the lack of complaints so far, it seems like people
  manage to deal with this state of affairs just fine.  Do we really
  need to do anything?
 
 
 I hereby complain about this.
 
 It has bugged me several times, and having the errors be non-fatal when -f
 was given was the best (easy) thing I could come up with as well, but I was
 too lazy to actually write the code.

Same here. I vote for making -f imply -n as well.
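
For illustration, a minimal self-contained sketch of what "-f implies -n"
amounts to, using plain getopt(); pgbench's real option loop differs, and
is_no_vacuum here merely stands in for its internal flag:

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
    bool is_no_vacuum = false;
    int  c;

    while ((c = getopt(argc, argv, "nf:")) != -1)
    {
        switch (c)
        {
            case 'n':
                is_no_vacuum = true;
                break;
            case 'f':
                /*
                 * A custom script does not imply the pgbench_* tables
                 * exist, so skip the pre-run vacuum automatically.
                 */
                is_no_vacuum = true;
                printf("custom script: %s\n", optarg);
                break;
        }
    }
    printf("vacuum before run: %s\n", is_no_vacuum ? "no" : "yes");
    return 0;
}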

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WALWriter active during recovery

2014-12-15 Thread Andres Freund
Hi,

On 2014-12-15 18:51:44 +, Simon Riggs wrote:
 Currently, WALReceiver writes and fsyncs data it receives. Clearly,
 while we are waiting for an fsync we aren't doing any other useful
 work.

Well, it can still buffer data on the network level, but there's
definitely limits to that. So I can see this as being useful.

 Following patch starts WALWriter during recovery and makes it
 responsible for fsyncing data, allowing WALReceiver to proceed with other
 useful actions.
 
 At present this is a WIP patch, for code comments only. Don't bother
 with anything other than code questions at this stage.
 
 Implementation questions are
 
 * How should we wake WALReceiver, since it waits on a poll()? Should
 we use SIGUSR1, which is already used for latch waits, or another
 signal?

It's not entirely trivial, but also not hard, to make it use the latch
code for waiting. It'd probably end up requiring less code because then
we could just scratch libpqwalreceiver.c:libpq_select().

 * Should we introduce some pacing delays if the WALreceiver gets too
 far ahead of apply?

Hm. Why don't we simply start fsyncing in the receiver itself at regular
intervals? If already synced that's cheap, if not, it'll pace us.
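
A self-contained sketch of that pacing idea; receive_wal_bytes() is an
illustrative stub for the libpq receive path, and the 16MB interval is an
assumed knob, not anything walreceiver.c actually has:

#include <stdint.h>
#include <unistd.h>

#define SYNC_INTERVAL_BYTES (16 * 1024 * 1024)  /* assumed pacing knob */

extern int receive_wal_bytes(char *buf, int len);   /* stub: libpq receive */

static void
wal_receive_loop(int walfd)
{
    char     buf[8192];
    uint64_t unsynced = 0;
    int      len;

    while ((len = receive_wal_bytes(buf, sizeof(buf))) > 0)
    {
        if (write(walfd, buf, len) != len)
            return;             /* error handling elided */
        unsynced += (uint64_t) len;

        if (unsynced >= SYNC_INTERVAL_BYTES)
        {
            fsync(walfd);       /* cheap if everything is already synced */
            unsynced = 0;
        }
    }
    fsync(walfd);               /* final flush at end of stream */
}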

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Status of CF 2014-10 and upcoming 2014-12

2014-12-15 Thread Michael Paquier
On Tue, Dec 16, 2014 at 2:22 AM, Robert Haas robertmh...@gmail.com wrote:
 On Sun, Dec 14, 2014 at 10:58 PM, Michael Paquier
 michael.paqu...@gmail.com wrote:
 PS: Could someone close CF 2014-10 btw?)

 Done, and I marked 2014-12 in progress.  I would give you access, but
 I can't seem to ssh into commitfest.postgresql.org any more.
That's fine. Thanks!
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Join push-down support for foreign tables

2014-12-15 Thread Kouhei Kaigai
Hanada-san,

Thanks for proposing this great functionality people waited for.

 On Mon, Dec 15, 2014 at 3:40 AM, Shigeru Hanada shigeru.han...@gmail.com
 wrote:
  I'm working on $SUBJECT and would like to get comments about the
  design.  Attached patch is for the design below.
 
 I'm glad you are working on this.
 
  1. Join source relations
  As described above, postgres_fdw (and most of SQL-based FDWs) needs to
  check that 1) all foreign tables in the join belong to a server, and
  2) all foreign tables have same checkAsUser.
  In addition to that, I add extra limitation that both inner/outer
  should be plain foreign tables, not a result of foreign join.  This
  limitation makes the SQL generator simple.  Fundamentally it's possible to
  join even join relations, so N-way join is listed as enhancement item
  below.
 
 It seems pretty important to me that we have a way to push the entire join
 nest down.  Being able to push down a 2-way join but not more seems like
 quite a severe limitation.
 
As long as we don't care about the simpleness/gracefulness of the remote
query, what we need to do is not complicated. All the optimization jobs
are the responsibility of the remote system.

Let me explain my thought:
We have three cases to consider: (1) a join between foreign tables,
which is the supported case; (2) a join where either relation is a
foreign join; and (3) a join where both relations are foreign joins.

In case of (1), remote query shall have the following form:
  SELECT tlist FROM FT1 JOIN FT2 ON cond WHERE qual

In case of (2) or (3), because we have already constructed the inner/outer
join, it is not difficult to replace the FT1 or FT2 above by a sub-query, like:
  SELECT tlist FROM FT3 JOIN
(SELECT tlist FROM FT1 JOIN FT2 ON cond WHERE qual) as FJ1
ON joincond WHERE qual

What do you think?


Let me comment on some other points at this moment:

* Enhancement in src/include/foreign/fdwapi.h

It seems to me GetForeignJoinPath_function and GetForeignJoinPlan_function
are not used anywhere. Is it an oversight that these definitions from your
previous work were not removed?
Now ForeignJoinPath is added on set_join_pathlist_hook, but not callback of
FdwRoutine.


* Is ForeignJoinPath really needed?

I guess the reason why you added ForeignJoinPath is to have the
inner_path/outer_path fields. If we want to have paths of the underlying
relations, a straightforward way for the concept (a join of relations is
replaced by a foreign-/custom-scan on the result set of a remote join) is
to enhance the fields in ForeignPath.
How about an idea that adds a List *fdw_subpaths field to save the list of
underlying Path nodes? It also allows more than two relations
to be joined.
(Probably, it should be a feature of the interface portion. I may have to
enhance my portion.)

* Why NestPath is re-defined?

-typedef JoinPath NestPath;
+typedef struct NestPath
+{
+   JoinPathjpath;
+} NestPath;

It looks to me like this change just makes the patch larger...

Best regards,
--
NEC OSS Promotion Center / PG-Strom Project
KaiGai Kohei kai...@ak.jp.nec.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] On partitioning

2014-12-15 Thread Amit Langote

Robert wrote:
 On Sun, Dec 14, 2014 at 9:12 PM, Amit Langote
 langote_amit...@lab.ntt.co.jp wrote:
  This means if a user puts arbitrary expressions in a partition definition, 
  say,
 
  ... FOR VALUES  extract(month from current_date) TO extract(month from
 current_date + interval '3 months'),
 
  we make sure that those expressions are pre-computed to literal values.
 
 I would expect that to fail, just as it would fail if you tried to
 build an index using a volatile expression.

Oops, wrong example, sorry. What about the case of an otherwise good expression?

Thanks,
Amit




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-15 Thread Josh Berkus
On 12/15/2014 12:05 PM, Peter Geoghegan wrote:
 On Mon, Dec 15, 2014 at 11:52 AM, Josh Berkus j...@agliodbs.com wrote:
 On 12/15/2014 11:27 AM, Robert Haas wrote:
 I feel like we used to be better at encouraging people to participate
 in the CF even if they were not experts, and to do the best they can
 based on what they did know.  That was a helpful dynamic.  Sure, the
 reviews weren't perfect, but more people helped, and reviewing some of
 the patch well and some of it in a more cursory manner is way better
 than reviewing none of it at all.

 Well, it was strongly expressed to me by a number of senior contributors
 on this list and at the developer meeting that inexpert reviews were not
 really wanted, needed or helpful.  As such, I stopped recruiting new
 reviewers (and, for that matter, doing them myself).  I don't know if
 the same goes for anyone else.
 
 Really? I thought we were pretty consistent in encouraging new reviewers.
 

Read the thread on this list where I suggested crediting reviewers in
the release notes.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-12-15 Thread Jeff Janes
On Mon, Dec 8, 2014 at 8:16 PM, Peter Geoghegan p...@heroku.com wrote:

 Attached revision, v1.6, slightly tweaks the ordering of per-statement
 trigger execution. The ordering is now explicitly documented (the html
 mirror has been updated:

 http://postgres-benchmarks.s3-website-us-east-1.amazonaws.com/on-conflict-docs/trigger-definition.html
 ).

 As always, there is a variant for each approach to value locking.

 This revision fixes bitrot that developed when the patchset was
 applied on master's tip, and also cleans up comments regarding how the
 parent insert carries auxiliary/child state through all stages of
 query processing. That structure should be clearer now, including how
 setrefs.c has the auxiliary/child ModifyTable use the same
 resultRelation as its parent.



If I build either option of the patch under MinGW, I get an error in the
grammar files related to the IGNORE reserved word.

$ (./configure --host=x86_64-w64-mingw32 --without-zlib && make && make
check) > /dev/null

In file included from ../../../src/include/parser/gramparse.h:29:0,
 from gram.y:59:
../../../src/include/parser/gram.h:207:6: error: expected identifier before
numeric constant
In file included from gram.y:14366:0:

I don't get this problem on Linux.

The build chain seems to meet the specified minimum:

flex.exe 2.5.35
bison (GNU Bison) 2.4.2
This is perl, v5.8.8 built for msys-64int

It seems like IGNORE is getting replaced by the preprocessor with something
else, but I don't know how to get my hands on the intermediate file after
the preprocessor has done its thing.

Also, in both Linux and MinGW under option 1 patch I get an OID conflict on
OID 3261.

Cheers,

Jeff


[HACKERS] REVIEW: Track TRUNCATE via pgstat

2014-12-15 Thread Jim Nasby

https://commitfest.postgresql.org/action/patch_view?id=1661 (apologies for not 
replying to the thread; I can't find it in my inbox)

Patch applies and passes make check. Code formatting looks good.

The regression test partially tests this. It does not cover 2PC, nor does it 
test rolling back a subtransaction that contains a truncate. The latter 
actually doesn't work correctly.

The test also adds 2.5 seconds of forced pg_sleep. I think that's both bad and 
unnecessary. When I removed the sleeps I still saw times of less than 0.1 
seconds. Also, wait_for_trunc_test_stats() should display something if it times 
out; otherwise you'll have a test failure and won't have any indication why.

I've attached a patch that adds logging on timeout and contains a test case 
that demonstrates the rollback to savepoint bug.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
From 973dbe11b796395641dd3947658508ad68aebda5 Mon Sep 17 00:00:00 2001
From: Jim Nasby jim.na...@bluetreble.com
Date: Mon, 15 Dec 2014 18:18:40 -0600
Subject: [PATCH] Show broken rollback case
Warn if stats update doesn't happen. Add test that shows
broken stats counts when rolling back a subtrans with a truncate.

---
 src/test/regress/sql/truncate.sql | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/src/test/regress/sql/truncate.sql 
b/src/test/regress/sql/truncate.sql
index c912345..f5a0e7a 100644
--- a/src/test/regress/sql/truncate.sql
+++ b/src/test/regress/sql/truncate.sql
@@ -251,6 +251,9 @@ begin
 perform pg_stat_clear_snapshot();
 
   end loop;
+  IF NOT updated THEN
+RAISE WARNING 'stats update never happened';
+  END IF;
 
   TRUNCATE prevstats;  -- what a pun
   INSERT INTO prevstats SELECT newstats.*;
@@ -311,5 +314,20 @@ COMMIT;
 SELECT pg_sleep(0.5);
 SELECT * FROM wait_for_trunc_test_stats();
 
+-- now to use a savepoint: this should only count 2 inserts and have 3
+-- live tuples after commit
+BEGIN;
+INSERT INTO trunc_stats_test DEFAULT VALUES;
+INSERT INTO trunc_stats_test DEFAULT VALUES;
+SAVEPOINT p1;
+INSERT INTO trunc_stats_test DEFAULT VALUES;
+INSERT INTO trunc_stats_test DEFAULT VALUES;
+TRUNCATE trunc_stats_test;
+INSERT INTO trunc_stats_test DEFAULT VALUES;
+ROLLBACK TO SAVEPOINT p1;
+COMMIT;
+
+SELECT * FROM wait_for_trunc_test_stats();
+
 DROP TABLE prevstats CASCADE;
 DROP TABLE trunc_stats_test;
-- 
2.1.2


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_rewind in contrib

2014-12-15 Thread Michael Paquier
On Sat, Dec 13, 2014 at 12:01 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 On 12/12/2014 04:20 PM, Andres Freund wrote:

 Not sure if the copyright notices in the current form are actually ok?


 Hmm. We do have such copyright notices in the source tree, but I know that
 we're trying to avoid it in new code. They had to be there when the code
 lived as a separate project, but now that I'm contributing this to
 PostgreSQL proper, I can remove them if necessary.
Yep, it is fine to remove those copyright notices and to keep only the
references to PGDG when code is integrated in the tree.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-12-15 Thread Tom Lane
Jeff Janes jeff.ja...@gmail.com writes:
 It seems like IGNORE is getting replaced by the preprocessor with something
 else, but I don't know how to get my hands on the intermediate file after
 the preprocessor has done its thing.

Maybe IGNORE is defined as a macro in MinGW?
Try s/IGNORE/IGNORE_P/g throughout the patch.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-12-15 Thread Peter Geoghegan
On Mon, Dec 15, 2014 at 4:22 PM, Jeff Janes jeff.ja...@gmail.com wrote:
 Also, in both Linux and MinGW under option 1 patch I get an OID conflict on
 OID 3261.

I'll take a pass at fixing this bitrot soon. I'll follow Tom's advice
about macro collisions on MinGW while I'm at it, since his explanation
seems plausible.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-12-15 Thread Peter Geoghegan
On Mon, Dec 15, 2014 at 4:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Jeff Janes jeff.ja...@gmail.com writes:
 It seems like IGNORE is getting replaced by the preprocessor with something
 else, but I don't know how to get my hands on the intermediate file after
 the preprocessor has done its thing.

 Maybe IGNORE is defined as a macro in MinGW?
 Try s/IGNORE/IGNORE_P/g throughout the patch.

BTW, the gcc -E flag does this. So figure out what exact arguments
MinGW's gcc is passed in the ordinary course of compiling gram.c, and
prepend -E to the list of existing flags while manually executing
gcc -- that should let you know exactly what's happening here.
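
A tiny probe can also confirm the suspicion directly. The assumption here
is that MinGW's <windows.h> (via winbase.h) defines IGNORE, which would make
the preprocessor rewrite the grammar's IGNORE token:

#include <stdio.h>
#ifdef _WIN32
#include <windows.h>
#endif

int
main(void)
{
#ifdef IGNORE
    printf("IGNORE is a macro here (it expands to %d), so a bison token\n"
           "named IGNORE would be rewritten by the preprocessor.\n",
           (int) IGNORE);
#else
    printf("IGNORE is not a macro on this toolchain.\n");
#endif
    return 0;
}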

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Fractions in GUC variables

2014-12-15 Thread Jim Nasby

On 12/7/14, 1:48 PM, John Gorman wrote:

This patch implements the first wiki/Todo "Configuration Files" item: "Consider
normalizing fractions in postgresql.conf, perhaps using '%'."


FWIW, I've reviewed this (should have read the thread first :/). It looks 
clean, passes make check and works as advertised. I also looked at what config 
options were set to be % and they make sense. Looking for .[0-9] in 
postgresql.conf, the only GUCs I saw where % didn't make sense were the two geqo 
GUCs Heikki mentioned.

One thing I don't like is the need to wrap a %-based SET in quotes, but we need 
to do that for all GUCs that include units, so presumably there's no good way 
around it.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_rewind in contrib

2014-12-15 Thread Michael Paquier
On Tue, Dec 16, 2014 at 9:32 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
 On Sat, Dec 13, 2014 at 12:01 AM, Heikki Linnakangas
 hlinnakan...@vmware.com wrote:
 On 12/12/2014 04:20 PM, Andres Freund wrote:

 Not sure if the copyright notices in the current form are actually ok?


 Hmm. We do have such copyright notices in the source tree, but I know that
 we're trying to avoid it in new code. They had to be there when the code
 lived as a separate project, but now that I'm contributing this to
 PostgreSQL proper, I can remove them if necessary.
 Yep, it is fine to remove those copyright notices and to keep only the
 references to PGDG when code is integrated in the tree.
In any case, I have a couple of comments about this patch as-is:
- If the move to src/bin is done, let's update the MSVC scripts accordingly
- contrib/pg_rewind/.gitignore should be cleaned up from its unnecessary entries
- README is incorrect, it is still mentioned for example that this
code should be copied inside PostgreSQL code tree as contrib/pg_rewind
- Code is going to need a brush to clean up things of this type:



-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_rewind in contrib

2014-12-15 Thread Michael Paquier
On Tue, Dec 16, 2014 at 10:26 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
 In any case, I have a couple of comments about this patch as-is:
 - If the move to src/bin is done, let's update the MSVC scripts accordingly
 - contrib/pg_rewind/.gitignore should be cleaned up from its unnecessary 
 entries
 - README is incorrect, it is still mentioned for example that this
 code should be copied inside PostgreSQL code tree as contrib/pg_rewind
(Sorry email got sent...)
- Code is going to need a brush to clean up things of this type:
+ PG_9.4_201403261
+   printf("Report bugs to <https://github.com/vmware/pg_rewind>.\n");
- Versioning should be done the Postgres way, i.e. not this:
+#define PG_REWIND_VERSION 0.1
But a way similar to other utilities:
pg_rewind (PostgreSQL) 9.5devel
- Shouldn't we use $(SHELL) here at least?
+++ b/contrib/pg_rewind/launcher
@@ -0,0 +1,6 @@
+#!/bin/bash
+#
+# Normally, psql feeds the files in sql/ directory to psql, but we want to
+# run them as shell scripts instead.
+
+bash
I doubt that this would work for example with MinGW.
- There are a couple of TODO items which may be good to fix:
+*
+* TODO: This assumes that there are no timeline switches on the target
+* cluster after the fork.
+*/
and:
+   /*
+* TODO: move old file out of the way, if any. And perhaps create the
+* file with temporary name first and rename in place after it's done.
+*/
- Regression tests, which have good coverage btw, are made using
shell scripts. There is some initialization processing that could be
refactored IMO, as this code is duplicated with pg_upgrade. For
example, listen_addresses initialization has checks for MinGW,
environment variables PG* are unset, etc.
Regards,
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Minor binary-search int overflow in timezone code

2014-12-15 Thread Tom Lane
Jim Nasby jim.na...@bluetreble.com writes:
 On 12/15/14, 1:39 PM, Christoph Berg wrote:
 Well, if it's not interesting, let's just forget it. Sorry.

 At the risk of sticking my head in the lion's mouth... this is the kind of 
 response that deters people from contributing anything to the project, 
 including reviewing patches. A simple thanks, but we feel it's already clear 
 enough that there can't be anywhere close to INT_MAX timezones would have 
 sufficed.

Yeah, I need to apologize.  I was a bit on edge today due to the release
wrap (which you may have noticed wasn't going too smoothly), and should
not have responded like that.

Having said that, though, the submission wasn't carefully thought through
either.  That problem was either not-an-issue or a potential security bug,
and if the submitter hadn't taken the time to be sure which, reporting it
in a public forum wasn't the way to proceed.

I also remain curious as to what sort of tool would complain about this
particular code and not the N other nearly-identical binary-search loops
in the PG sources, most of which deal with data structures potentially
far larger than the timezone data ...

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] tracking commit timestamps

2014-12-15 Thread Noah Misch
On Mon, Dec 15, 2014 at 12:12:10AM -0800, Michael Paquier wrote:
 On Wed, Dec 10, 2014 at 6:50 PM, Noah Misch n...@leadboat.com wrote:
  On Mon, Dec 08, 2014 at 02:23:39AM +0100, Petr Jelinek wrote:
  On 08/12/14 00:56, Noah Misch wrote:
  The commit_ts test suite gives me the attached diff on a 32-bit MinGW 
  build
  running on 64-bit Windows Server 2003.  I have not checked other Windows
  configurations; the suite does pass on GNU/Linux.
 
  Hmm I wonder if < now() needs to be changed to <= now() in those 
  queries
  to make them work correctly on that platform, I don't have a machine with 
  that
  environment handy right now, so I would appreciate if you could try that, 
  in
  case you don't have time for that, I will try to setup something later...
 
  I will try that, though perhaps not until next week.
 
 FWIW, I just tried that with MinGW-32 and I can see the error on Win7.
 I also checked that changing < now() to <= now() fixed the
 problem, so your assumption was right, Petr.

Committed, after fixing the alternate expected output.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Bug] Inconsistent result for inheritance and FOR UPDATE.

2014-12-15 Thread Etsuro Fujita
(2014/12/16 2:59), Tom Lane wrote:
 Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes:
 (2014/12/13 1:17), Tom Lane wrote:
 We should
 probably also think about allowing FDWs to change these settings if
 they want to.
 
 This is not clear to me.  Maybe I'm missing something, but I think that
 the FDW only needs to look at the original locking strength in
 GetForeignPlan().  Please explain that in a little more detail.
 
 Well, the point is that for postgres_fdw, we could consider using the
 same locked-row-identification methods as for local tables, ie CTID.
 Now admittedly this might be the only such case, but I'm not entirely
 convinced of that --- you could imagine using FDWs for many of the same
 use-cases that KaiGai-san has been proposing custom scans for, and
 in some of those cases CTIDs would be useful for row IDs.
 
 We'd originally dismissed this on the argument that ROWMARK_REFERENCE
 is a cheaper implementation than CTID-based implementations for any
 FDW (since it avoids the possible need to fetch a remote row twice).
 I'm not sure I still believe that though.  Fetching all columns of all
 retrieved rows in order to avoid what might be zero or a small number of
 re-fetches is not obviously a win, especially not for FDWs that
 represent not-actually-remote resources.
 
 So as long as we're revisiting this area, it might be worth thinking
 about letting an FDW have some say in which ROWMARK method is selected
 for its tables.

Understood.  So, to what extent should we consider such things in the
FDW improvement?  We've already started an independent infrastructure
for such things, ie, custom scans, IIUC.

Thank you for the explanation!

Best regards,
Etsuro Fujita


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Join push-down support for foreign tables

2014-12-15 Thread Kouhei Kaigai
Hanada-san,

One other question from my side:
How does postgres_fdw solve the varno/varattno mapping when it
replaces a join of relations by a foreign scan?

Let me explain the issue using an example. If a SELECT has a target-
list that references the 2nd column of the inner relation and the 2nd
column of the outer relation, how shall the varno/varattno of the
ForeignScan be assigned?
Unless the FDW driver sets fdw_ps_tlist, setrefs.c deals with
this ForeignScan as a usual relation scan, so the varno of each Var
stays a non-special varno (even though it may be shifted by rtoffset
in setrefs.c).
Then, ExecEvalScalarVar() is invoked in the executor to evaluate the
value of the fetched tuple. At that time, which slot and attribute will
be referenced? The varattno of a Var node is left alone by setrefs.c, so
the Var node that references the 2nd column of the inner relation and
the one that references the 2nd column of the outer relation will both
reference the 2nd column of econtext->ecxt_scantuple; however, it is
uncertain which column of which foreign table is mapped to the 2nd
column of the ecxt_scantuple.
So, the FDW needs to inform the planner which underlying column is
mapped to which entry of the pseudo scan tlist.

Another expression of what I'm saying is:

  SELECT
ft_1.second_col X,   -- varno=1 / varattno=2
ft_2.second_col Y-- varno=2 / varattno=2
  FROM
ft_1 NATURAL JOIN ft_2;

When the FROM clause is replaced by a ForeignScan, the relevant varattnos
also need to be updated according to the underlying remote query.
If postgres_fdw runs the following remote query, X should have varattno=1
and Y should have varattno=2 on the pseudo scan tlist.
  remote: SELECT t_1.second_col, t_2.second_col
FROM t_1 NATURAL JOIN t_2;

You can inform the planner of this mapping using the fdw_ps_tlist field
of ForeignScan, if the FDW driver puts a list of TargetEntry there.
In the above example, fdw_ps_tlist will have two elements, and both of
them have a Var node referencing the underlying foreign tables.

The patch to replace a join by a foreign-/custom-scan adds functionality
to fix up varno/varattno in these cases.
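
As a rough sketch of the mapping being described: an FDW could build such a
pseudo-scan target list with the ordinary makeVar()/makeTargetEntry()
helpers. Note that fdw_ps_tlist is the field proposed by the
custom-/foreign-join patch, not existing core API, and the column numbers
and types below are purely illustrative:

#include "postgres.h"
#include "catalog/pg_type.h"
#include "nodes/makefuncs.h"
#include "nodes/pg_list.h"

/*
 * Build a pseudo-scan tlist mapping the remote SELECT's output columns
 * back to Vars of the underlying foreign tables: column 1 of the remote
 * result is ft_1.second_col, column 2 is ft_2.second_col.
 */
static List *
build_pseudo_scan_tlist(Index outer_varno, Index inner_varno)
{
    List *ps_tlist = NIL;
    Var  *var;

    var = makeVar(outer_varno, 2, INT4OID, -1, InvalidOid, 0);
    ps_tlist = lappend(ps_tlist,
                       makeTargetEntry((Expr *) var, 1, "x", false));

    var = makeVar(inner_varno, 2, INT4OID, -1, InvalidOid, 0);
    ps_tlist = lappend(ps_tlist,
                       makeTargetEntry((Expr *) var, 2, "y", false));

    return ps_tlist;
}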

Thanks,
--
NEC OSS Promotion Center / PG-Strom Project
KaiGai Kohei kai...@ak.jp.nec.com


 -----Original Message-----
 From: pgsql-hackers-ow...@postgresql.org
 [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Kouhei Kaigai
 Sent: Tuesday, December 16, 2014 9:01 AM
 To: Robert Haas; Shigeru Hanada
 Cc: PostgreSQL-development
 Subject: Re: [HACKERS] Join push-down support for foreign tables
 
 Hanada-san,
 
 Thanks for proposing this great functionality people waited for.
 
  On Mon, Dec 15, 2014 at 3:40 AM, Shigeru Hanada
  shigeru.han...@gmail.com
  wrote:
   I'm working on $SUBJECT and would like to get comments about the
   design.  Attached patch is for the design below.
 
  I'm glad you are working on this.
 
   1. Join source relations
   As described above, postgres_fdw (and most of SQL-based FDWs) needs
   to check that 1) all foreign tables in the join belong to a server,
   and
   2) all foreign tables have same checkAsUser.
   In addition to that, I add extra limitation that both inner/outer
   should be plain foreign tables, not a result of foreign join.  This
   limitation makes the SQL generator simple.  Fundamentally it's possible
   to join even join relations, so N-way join is listed as enhancement
   item below.
 
  It seems pretty important to me that we have a way to push the entire
  join nest down.  Being able to push down a 2-way join but not more
  seems like quite a severe limitation.
 
 As long as we don't care about the simpleness/gracefulness of the remote query,
 what we need to do is not complicated. All the optimization jobs are
 the responsibility of the remote system.
 
 Let me explain my thought:
 We have three cases to consider: (1) a join between foreign tables,
 which is the supported case; (2) a join where either relation is a foreign
 join; and (3) a join where both relations are foreign joins.
 
 In case of (1), remote query shall have the following form:
   SELECT tlist FROM FT1 JOIN FT2 ON cond WHERE qual
 
 In case of (2) or (3), because we have already constructed the inner/outer
 join, it is not difficult to replace the FT1 or FT2 above by a sub-query, like:
   SELECT tlist FROM FT3 JOIN
 (SELECT tlist FROM FT1 JOIN FT2 ON cond WHERE qual) as FJ1
 ON joincond WHERE qual
 
 What do you think?
 
 
 Let me comment on some other points at this moment:
 
 * Enhancement in src/include/foreign/fdwapi.h
 
 It seems to me GetForeignJoinPath_function and
 GetForeignJoinPlan_function are not used anywhere. Is it an oversight that
 these definitions from your previous work were not removed?
 Now ForeignJoinPath is added on set_join_pathlist_hook, but not callback
 of FdwRoutine.
 
 
 * Is ForeignJoinPath really needed?
 
 I guess the reason why you added ForeignJoinPath is to have the fields
 of inner_path/outer_path. If we want to have paths of the underlying relations,
 a straightforward way for the concept (a join of relations is replaced by
 foreign-/custom-scan on a result set of remote join) 

Re: [HACKERS] Commitfest problems

2014-12-15 Thread Andres Freund
On 2014-12-15 16:14:30 -0800, Josh Berkus wrote:
 Read the thread on this list where I suggested crediting reviewers in
 the release notes.

Man. You're equating stuff that's not the same. You didn't get your way
(and I'm tentatively on your side on that one) and take that to imply
that we don't want more reviewers.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-12-15 Thread Ali Akbar
2014-12-16 6:27 GMT+07:00 Tomas Vondra t...@fuzzy.cz:

 On 15.12.2014 22:35, Jeff Janes wrote:
  On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra t...@fuzzy.cz
  mailto:t...@fuzzy.cz wrote:
 
  Hi,
 
  Attached is v2 of the patch lowering array_agg memory requirements.
  Hopefully it addresses the issues issues mentioned by TL in this
 thread
  (not handling some of the callers appropriately etc.).
 
 
  Hi Tomas,
 
  When configured --with-libxml I get compilation errors:
 
  xml.c: In function 'xml_xpathobjtoxmlarray':
  xml.c:3684: error: too few arguments to function 'accumArrayResult'
  xml.c:3721: error: too few arguments to function 'accumArrayResult'
  xml.c: In function 'xpath':
  xml.c:3933: error: too few arguments to function 'initArrayResult'
  xml.c:3936: error: too few arguments to function 'makeArrayResult'
 
  And when configured --with-perl, I get:
 
  plperl.c: In function 'array_to_datum_internal':
  plperl.c:1196: error: too few arguments to function 'accumArrayResult'
  plperl.c: In function 'plperl_array_to_datum':
  plperl.c:1223: error: too few arguments to function 'initArrayResult'
 
  Cheers,

 Thanks, attached is a version that fixes this.


Just took a quick look at the patch.

The patch does not implement the check for not creating a new context in
initArrayResultArr. I think we should implement it there too for
consistency (and to prevent future problems).

Regards,
-- 
Ali Akbar


Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-12-15 Thread Ali Akbar
2014-12-16 10:47 GMT+07:00 Ali Akbar the.ap...@gmail.com:


 2014-12-16 6:27 GMT+07:00 Tomas Vondra t...@fuzzy.cz:

 On 15.12.2014 22:35, Jeff Janes wrote:
  On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra t...@fuzzy.cz
  mailto:t...@fuzzy.cz wrote:
 
  Hi,
 
  Attached is v2 of the patch lowering array_agg memory requirements.
  Hopefully it addresses the issues issues mentioned by TL in this
 thread
  (not handling some of the callers appropriately etc.).
 
 
  Hi Tomas,
 
  When configured --with-libxml I get compilation errors:
 
  xml.c: In function 'xml_xpathobjtoxmlarray':
  xml.c:3684: error: too few arguments to function 'accumArrayResult'
  xml.c:3721: error: too few arguments to function 'accumArrayResult'
  xml.c: In function 'xpath':
  xml.c:3933: error: too few arguments to function 'initArrayResult'
  xml.c:3936: error: too few arguments to function 'makeArrayResult'
 
  And when configured --with-perl, I get:
 
  plperl.c: In function 'array_to_datum_internal':
  plperl.c:1196: error: too few arguments to function 'accumArrayResult'
  plperl.c: In function 'plperl_array_to_datum':
  plperl.c:1223: error: too few arguments to function 'initArrayResult'
 
  Cheers,

 Thanks, attached is a version that fixes this.


 Just took a quick look at the patch.

 The patch does not implement the check for not creating a new context in
 initArrayResultArr. I think we should implement it there too for
 consistency (and to prevent future problems).


Looking at the modification of the accumArrayResult* functions, I'm not really
comfortable with:

   1. Code that calls accumArrayResult* after explicitly calling
   initArrayResult* must still pass subcontext, but it has no effect.
   2. All existing code that calls accumArrayResult must be changed.

Just an idea: why don't we minimize the change in the API like this:

   1. Add the bool subcontext parameter only to the initArrayResult*
   functions, not to accumArrayResult*.
   2. Code that does not want to create a subcontext must call
   initArrayResult* explicitly.

Other code that calls accumArrayResult directly then only needs to be changed
in the call to makeArrayResult* (with the release=true parameter). In places
where we don't want to create a subcontext (as in array_agg_transfn), modify
the code to use initArrayResult* before calling accumArrayResult*.
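
A sketch of the calling pattern under this alternative; the signatures
follow the patch under discussion and may not match what finally gets
committed:

#include "postgres.h"
#include "utils/array.h"

/* Accumulate one value and build the result, without a private subcontext. */
static Datum
accum_without_subcontext(Datum value, bool isnull, Oid elmtype,
                         MemoryContext rcontext)
{
    ArrayBuildState *astate;

    astate = initArrayResult(elmtype, rcontext, false);    /* proposed flag */
    astate = accumArrayResult(astate, value, isnull, elmtype, rcontext);
    return makeArrayResult(astate, rcontext);
}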

What do you think?

Regards,
--
Ali Akbar


Re: [HACKERS] tracking commit timestamps

2014-12-15 Thread Alvaro Herrera
Noah Misch wrote:
 On Mon, Dec 15, 2014 at 12:12:10AM -0800, Michael Paquier wrote:

  FWIW, I just tried that with MinGW-32 and I can see the error on Win7.
  I also checked that changing  now() to = now() fixed the
  problem, so your assumption was right, Petr.
 
 Committed, after fixing the alternate expected output.

Thanks.  I admit I don't understand what the issue is.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-15 Thread Noah Misch
On Mon, Dec 15, 2014 at 03:29:19PM -0500, Andrew Dunstan wrote:
 On 12/15/2014 03:16 PM, Andres Freund wrote:
 On 2014-12-15 11:52:35 -0800, Josh Berkus wrote:
 On 12/15/2014 11:27 AM, Robert Haas wrote:
 I feel like we used to be better at encouraging people to participate
 in the CF even if they were not experts, and to do the best they can
 based on what they did know.  That was a helpful dynamic.  Sure, the
 reviews weren't perfect, but more people helped, and reviewing some of
 the patch well and some of it in a more cursory manner is way better
 than reviewing none of it at all.
 Well, it was strongly expressed to me by a number of senior contributors
 on this list and at the developer meeting that inexpert reviews were not
 really wanted, needed or helpful.
 I think that's pretty far away from what was said.
 
 
 
 I welcome reviews at all levels, both as a developer and as a committer.
 
 It is true that we are very short on reviewers with in depth knowledge and
 experience, and this is the real problem we have far more than any
 technological issues people might have.
 
 But that doesn't mean we should be turning anyone away. We should not.

+1.  Some of the best reviews I've seen are ones where the reviewer expressed
doubts about the review's quality, so don't let such doubts keep you from
participating.  Every defect you catch saves a committer time; a review that
finds 3 of the 10 defects in a patch is still a big help.  Some patch
submissions waste the community's time, but it's almost impossible to waste
the community's time by posting a patch review.

Confusion may have arisen due to statements that we need more expert
reviewers, which is also true.  (When an expert writes a sufficiently-complex
patch, it's important that a second expert examine the patch at some point.)
If you're a novice reviewer, your reviews do help to solve that problem by
reducing the workload on expert reviewers.

Thanks,
nm


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-15 Thread Josh Berkus
On 12/15/2014 07:34 PM, Andres Freund wrote:
 On 2014-12-15 16:14:30 -0800, Josh Berkus wrote:
 Read the thread on this list where I suggested crediting reviewers in
 the release notes.
 
 Man. You're equating stuff that's not the same. You didn't get your way
  (and I'm tentatively on your side on that one) and take that to imply
 that we don't want more reviewers.

During that thread a couple people said that novice reviewers added no
value to the review process, and nobody argued with them then.  I've
also been told this to my face at pgCon, and when I've tried organizing
patch review events.  I got the message, which is why I stopped trying
to get new reviewers.

And frankly: if we're opposed to giving credit to patch reviewers, we're
opposed to having them.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Bug] Inconsistent result for inheritance and FOR UPDATE.

2014-12-15 Thread Kyotaro HORIGUCHI
Hello, I have a favor to ask of you committers.

I confirmed that this issue and the other one have been fixed in the
repository, thank you.

Could you please tell me when the next release, 9.2.10, is
due?

The bugs were found in a system under development, which is to
make use of PG 9.2 and wants to adopt the new release.

regards,

At Thu, 11 Dec 2014 20:37:34 -0500, Tom Lane t...@sss.pgh.pa.us wrote in 
17534.1418348...@sss.pgh.pa.us
 Kyotaro HORIGUCHI horiguchi.kyot...@lab.ntt.co.jp writes:
  So I replaced the get_parse_rowmark() with get_plan_rowmark() as
  the attached patch and the problem disappeared.
 
 Yeah, this is clearly a thinko: really, nothing in the planner should
 be using get_parse_rowmark().  I looked around for other errors of the
 same type and found that postgresGetForeignPlan() is also using
 get_parse_rowmark().  While that's harmless at the moment because we
 don't support foreign tables as children, it's still wrong.  Will
 fix that too.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-15 Thread Andres Freund
On 2014-12-15 21:18:40 -0800, Josh Berkus wrote:
 On 12/15/2014 07:34 PM, Andres Freund wrote:
  On 2014-12-15 16:14:30 -0800, Josh Berkus wrote:
  Read the thread on this list where I suggested crediting reviewers in
  the release notes.
  
  Man. You're equating stuff that's not the same. You didn't get your way
   (and I'm tentatively on your side on that one) and take that to imply
  that we don't want more reviewers.
 
 During that thread a couple people said that novice reviewers added no
 value to the review process, and nobody argued with them then.  I've
 also been told this to my face at pgCon, and when I've tried organizing
 patch review events.  I got the message, which is why I stopped trying
 to get new reviewers.

I think there's a very large difference in what novice reviewers do. A
schematic 'in context format, compiles and survives make check' type of
test indeed doesn't seem to be particularly useful to me. A novice
reviewer who tries out the feature by reading the docs, noticing
shortcomings there along the way, and then verifies that the feature works
outside of the two regression tests added is something entirely
different. Novice reviewers *can* review the code quality as well - it's
just that many we had didn't.

I think the big problem is that a good review takes time - and that's
what many of the novice reviewers I've observed weren't really aware of.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Postgres TR for missing chunk

2014-12-15 Thread M Tarkeshwar Rao
Hello Friends,

Can you please tell me how I can track which bugs are fixed in which 
release and when they will be fixed?
I want to track the analysis and status of bugs raised against Postgres. Can I 
get this information?

For the last few days we have been struggling with the following issue:

1.   We found that a few operations on this table, like SELECT or 
TRUNCATE, are failing, and a more specific error is thrown as per 
below:

ERROR:  missing chunk number 0 for toast value 54787 in pg_toast_2619

** Error **

We have done all the things suggested on Google but were not able to resolve 
it. I want to know how to avoid this issue.

Can you please tell us when the following bugs will be resolved?

These are known bugs in Postgres. The bug details are mentioned below.

BUG #9187: corrupt toast tables

http://www.postgresql.org/message-id/30154.1392153...@sss.pgh.pa.us
http://www.postgresql.org/message-id/cafj8praufpttn5+ohfqpbcd1jzkersck51uakhcwd8nt4os...@mail.gmail.com
http://www.postgresql.org/message-id/20140211162408.2713.81...@wrigleys.postgresql.org

BUG #7819: missing chunk number 0 for toast value 1235919 in pg_toast_35328

http://www.postgresql.org/message-id/C62EC84B2D3CF847899CCF4B589CCF70B20AA08F@BBMBX.backbone.local

Thanks !!
Tarkeshwar


[HACKERS] Some modes of vcregress not mentioned in MSVC scripts

2014-12-15 Thread Michael Paquier
Hi all,

While looking at the Windows docs and the MSVC scripts, I noticed that
the following run modes of vcregress.pl are either not mentioned in
the Windows installation docs or in the help output of vcregress.pl:
- ecpgcheck
- isolationcheck
- upgradecheck
Attached is a patch fixing that.
Regards,
-- 
Michael
diff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml
index 71a5c2e..9b77648 100644
--- a/doc/src/sgml/install-windows.sgml
+++ b/doc/src/sgml/install-windows.sgml
@@ -436,6 +436,9 @@ $ENV{CONFIG}=Debug;
 userinputvcregress installcheck/userinput
 userinputvcregress plcheck/userinput
 userinputvcregress contribcheck/userinput
+userinputvcregress ecpgcheck/userinput
+userinputvcregress isolationcheck/userinput
+userinputvcregress upgradecheck/userinput
 /screen
 
To change the schedule used (default is parallel), append it to the
diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl
index b84f70d..699c286 100644
--- a/src/tools/msvc/vcregress.pl
+++ b/src/tools/msvc/vcregress.pl
@@ -405,6 +405,6 @@ sub usage
 {
 	print STDERR
 	  Usage: vcregress.pl ,
-	  check|installcheck|plcheck|contribcheck|ecpgcheck [schedule]\n;
+	  check|installcheck|plcheck|contribcheck|isolationcheck|ecpgcheck|upgradecheck [schedule]\n;
 	exit(1);
 }

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-15 Thread Craig Ringer

On 16 Dec 2014 7:43 am, Andres Freund and...@2ndquadrant.com wrote:

 On 2014-12-15 21:18:40 -0800, Josh Berkus wrote: 
  On 12/15/2014 07:34 PM, Andres Freund wrote: 
   On 2014-12-15 16:14:30 -0800, Josh Berkus wrote: 
   Read the thread on this list where I suggested crediting reviewers in 
   the release notes. 
   
   Man. You're equating stuff that's not the same. You didn't get your way 
   (and I'm tentatively on your side on that one) and take that to imply 
   that we don't want more reviewers. 
  
  During that thread a couple people said that novice reviewers added no 
  value to the review process, and nobody argued with them then.  I've 
  also been told this to my face at pgCon, and when I've tried organizing 
  patch review events.  I got the message, which is why I stopped trying 
  to get new reviewers. 

 I think there's a very large difference in what novice reviewers do. A 
 schematic 'in context format, compiles and survives make check' type of 
 test indeed doesn't seem to be particularly useful to me. A novice 
 reviewer that tries out the feature by reading the docs noticing 
 shortages there on the way, and then verifies that the feature works 
 outside of the two regression tests added is something entirely 
 different. Novice reviewers *can* review the code quality as well - it's 
 just that many we had didn't. 

 I think the big problem is that a good review takes time - and that's 
 what many of the novice reviewers I've observed weren't really aware of. 


The review docs also over-emphasise the mechanical parts of review around make 
check etc, which may make it seem like that alone is quite useful, when really 
it's just the beginning.

 Greetings, 

 Andres Freund 

 -- 
 Andres Freund    http://www.2ndQuadrant.com/ 
 PostgreSQL Development, 24x7 Support, Training & Services 

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] analyze_new_cluster.bat and delete_old_cluster.bat not ignored with vcregress upgradecheck

2014-12-15 Thread Michael Paquier
Hi all,

As mentioned in $subject, I noticed that those automatically-generated
files are not ignored in the tree when running vcregress on Windows;
we do ignore their .sh versions though.
to back-patch the patch attached to prevent the inclusion of those
files in the future.
Regards,
-- 
Michael
diff --git a/contrib/pg_upgrade/.gitignore b/contrib/pg_upgrade/.gitignore
index 9555f54..d24ec60 100644
--- a/contrib/pg_upgrade/.gitignore
+++ b/contrib/pg_upgrade/.gitignore
@@ -1,6 +1,8 @@
 /pg_upgrade
 # Generated by test suite
-analyze_new_cluster.sh
-delete_old_cluster.sh
+/analyze_new_cluster.sh
+/delete_old_cluster.sh
+/analyze_new_cluster.bat
+/delete_old_cluster.bat
 /log/
 /tmp_check/



[HACKERS] NUMERIC private methods?

2014-12-15 Thread David Fetter
Folks,

While noodling with some weighted statistics
(https://github.com/davidfetter/weighted_stats), I noticed I was
having to jump through a lot of hoops because of all the private
methods in numeric.c, and especially because NumericVar itself is
private.  Would there be some major objection to exposing NumericVar
as an opaque blob?
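
For illustration, here is a minimal sketch of the kind of detour currently
required (the wrapper is mine and purely hypothetical; DirectFunctionCall2
and numeric_add are the existing public interface):

#include "postgres.h"
#include "fmgr.h"
#include "utils/builtins.h"
#include "utils/numeric.h"

/*
 * Hypothetical wrapper: add two numerics from extension code.  Because
 * NumericVar is private to numeric.c, each operation has to round-trip
 * through fmgr and allocate a fresh packed Numeric, instead of keeping
 * intermediate results in NumericVars across a whole computation.
 */
static Numeric
numeric_add_packed(Numeric a, Numeric b)
{
	return DatumGetNumeric(DirectFunctionCall2(numeric_add,
											   NumericGetDatum(a),
											   NumericGetDatum(b)));
}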

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] moving from contrib to bin

2014-12-15 Thread Michael Paquier
On Mon, Dec 15, 2014 at 11:53 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
 On Mon, Dec 15, 2014 at 11:45 AM, Peter Eisentraut pete...@gmx.net wrote:
 On 12/14/14 9:08 PM, Michael Paquier wrote:
 - no Windows build system support yet
 Do you need some help here?
 Sure.
Peter,

Attached is a patch that can be applied on your existing set to
complete support on Windows for the move to src/bin. For MinGW there
was nothing to do, but on MSVC we have to declare frontend utilities
in the build scripts, which is roughly what this patch does. There are
also some updates needed for clean.bat and the regression test
script. I did the following checks, btw:
- Checked the builds worked correctly
- Checked that file version information was generated when building
with either MinGW or MSVC
- Checked that clean.bat ran correctly
- Checked that vcregress upgradecheck was working correctly
- Checked as well the other regression tests, but that's minor here...
I also noticed when looking at your patches that we actually forgot to
ignore the *.bat scripts generated by pg_upgrade's regression run when
invoking vcregress upgradecheck with MSVC, but that's a different
issue that I reported on a new thread. The fix is included in my patch
as well, for completeness.
Regards,
-- 
Michael


20141216_srcbin_win.patch
Description: Binary data



Re: [HACKERS] moving Orafce from pgFoundry - pgFoundry management

2014-12-15 Thread Pavel Stehule
2014-12-16 6:15 GMT+01:00 Marc Fournier scra...@postgresql.org:


 Project disabled on pgfoundry … do you want me to remove the mailing lists
 and all those archives?   Or leave them there … ?


please, drop it - there is almost only spam there

Pavel




  On Dec 13, 2014, at 9:18 AM, David Fetter da...@fetter.org wrote:
 
  On Sat, Dec 13, 2014 at 11:19:08AM +0100, Pavel Stehule wrote:
  Hi
 
  a Orafce package on pgFoundry is obsolete - we migrated to github few
 years
  ago. Please, can somebody modify a description about this migration? Or
  drop this project there.
 
  Pavel,
 
  I've removed pgFoundry references from the IRC documentation bot and
  replaced them with references to github.
 
  Marc,
 
  Could you please remove the Orafce project from pgFoundry?
 
  Cheers,
  David.
  --
  David Fetter da...@fetter.org http://fetter.org/
  Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
  Skype: davidfetter  XMPP: david.fet...@gmail.com
 
  Remember to vote!
  Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] pg_basebackup vs. Windows and tablespaces

2014-12-15 Thread Amit Kapila
On Sun, Dec 14, 2014 at 11:54 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
 On Sat, Dec 13, 2014 at 10:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Andrew Dunstan and...@dunslane.net writes:
   On 11/20/2014 02:27 AM, Amit Kapila wrote:
   Now the part where I would like to receive feedback before revising
the
   patch is on the coding style.  It seems to me from Tom's comments
that
   he is not happy with the code, now I am not sure which part of the
patch
   he thinks needs change.  Tom if possible, could you be slightly more
   specific about your concern w.r.t code?
 
   In view of the request above for comments from Tom, I have moved this
   back to Needs Review.

I have updated the patch and handled the review comments as below:

1. Changed the name of the file containing tablespace path information to
tablespace_map. I have updated the references to the file name throughout
the patch.

2. I have not added the tablespace name to the tablespace_map file, as I am
not sure how important it is for user readability, what format we should
use, and, besides, not many people have asked for it.  However, if you feel
it is important to have it in this patch, then I will propose a new format.
(A sketch of one possible file layout appears below, after point 3.)

3. Made the code generic (for all platforms), so that a tablespace_map
file is created and used to restore tablespaces from a base backup.
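
For illustration, a tablespace_map file in this scheme might look like the
following, one tablespace per line, OID followed by the symlink target
(both values here are made up):

16384 /mnt/disk1/pg_tblspc_data
16402 /mnt/disk2/pg_tblspc_index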

  Sorry, I was not paying very close attention to this thread and missed
  the request for comments.  A few such:
 
  1. The patch is completely naive about what might be in the symlink
  path string; eg embedded spaces in the path would break it.  On at
  least some platforms, newlines could be in the path as well.  I'm not
  sure about how to guard against this while maintaining human readability
  of the file.
 

 I will look into this and see what best can be done.


I have chosen #3 (make pg_basebackup check for and fail on symlinks
containing characters (currently newline only) that it can't handle) from
the different options suggested by Tom.  This keeps the format the same as
before, and human-readable.
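
A minimal sketch of that check, with placeholder naming (whether it runs in
the backend while building the map file or in pg_basebackup itself, the
shape is the same; backend-style error reporting is assumed here):

#include "postgres.h"

/*
 * Placeholder sketch: reject symlink targets that the one-line-per-entry
 * tablespace_map format cannot represent.
 */
static void
check_tablespace_path(const char *linkpath)
{
	if (strchr(linkpath, '\n') != NULL)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
				 errmsg("tablespace path \"%s\" contains a newline and cannot be backed up",
						linkpath)));
}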

  2. There seems to be more going on here than what is advertised, eg
  why do we need to add an option to the BASE_BACKUP command

 This is to ensure that the symlink file is generated only for the tar
 format; the server is not aware of whether the backup is being taken in
 plain format or tar format.  We don't want to do it for the plain format
 because there the client (pg_basebackup) can update the symlinks via the
 -T option, and backing up the symlink file during that operation could
 lead to spurious symlinks after archive recovery.  I have given the reason
 why we want to accomplish it only for the tar format in my initial mail.

  (and if
  we do need it, doesn't it need to be documented in protocol.sgml)?

 I shall take care of it in next version of patch.


Added the description in protocol.sgml
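
For example, the replication command could then be issued roughly as below
(the option name is illustrative only, not final):

BASE_BACKUP LABEL 'nightly' TABLESPACE_MAP

where the extra keyword tells the server that the client wants the
tablespace symlink information included, i.e. a tar-format backup.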

  And why is the RelationCacheInitFileRemove call relocated?
 

 Because it assumes that the tablespace directory pg_tblspc is in
 place, and it tries to remove the files by reading the pg_tblspc
 directory as well.  Since we now set up the symlinks in pg_tblspc
 after reading the symlink file, we should remove the relcache init
 file only once the symlinks are set up in the pg_tblspc directory.

  3. Not terribly happy with the changes made to the API of
  do_pg_start_backup, eg having to be able to parse DIR * in its
  arguments seems like a lot of #include creep.  xlog.h is pretty
  central so I'm not happy about plastering more #includes in it.
 

 The reason for adding a new include in xlog.c is the use of the
 tablespaceinfo structure, which I have now kept in basebackup.h.

 The reason I have done it this way is that do_pg_start_backup has
 some functionality common to both non-exclusive and exclusive backups, and
 for this feature we have to do some work common to both as well: generate
 the symlink label file for non-exclusive backups, and write the symlink
 label file for exclusive backups using that information.  Doing it this
 way seems right to me, as we are already doing something similar for the
 backup label file.

 Another possible way would be to write a new function in xlogutils.c
 to do the symlink label stuff and then use it from xlog.c; that way we
 could avoid any new include in xlog.c.  However, for this we would need
 an include in xlogutils.c to make it aware of the tablespaceinfo
 structure.


Are you okay with the alternative I have suggested to avoid the
new include in xlog.c or do you feel the alternative will make the
code worse than the current patch?


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


extend_basebackup_to_include_symlink_v5.patch
Description: Binary data



Re: [HACKERS] Commitfest problems

2014-12-15 Thread David Rowley
On 16 December 2014 at 18:18, Josh Berkus j...@agliodbs.com wrote:

  Man. You're equating stuff that's not the same. You didn't get your way
  (and I'm tentatively on your side on that one) and take that to imply
  that we don't want more reviewers.

 During that thread a couple people said that novice reviewers added no
 value to the review process, and nobody argued with them then.  I've
 also been told this to my face at pgCon, and when I've tried organizing
 patch review events.  I got the message, which is why I stopped trying
 to get new reviewers.

 And frankly: if we're opposed to giving credit to patch reviewers, we're
 opposed to having them.



I'd just like to add something which might be flying below the radar of
more senior people. There are people out there (like me) working on
PostgreSQL more for the challenge and perhaps the love of the product, who
make absolutely zero money out of it. For these people, getting credit
where it's due is very important. I'm pretty happy with this at the moment,
and I can't imagine any situation where not crediting reviewers would be
beneficial to anyone.

Regards

David Rowley

