Re: Remove mention in docs that foreign keys on partitioned tables are not supported

2018-07-01 Thread Amit Langote
On 2018/07/02 14:46, Ashutosh Bapat wrote:
> This constraint was added to the partitioned table and inherited from
> there. If the user wants to drop that constraint for some reason, this
> error message doesn't help. The error message tells why he can't drop
> it, but doesn't tell the user, directly or indirectly, what he should
> do in order to drop it. When there was only a single way to "inherit"
> things, i.e. table inheritance, users could probably guess that. But
> now that there are multiple ways to inherit things, we have to help the
> user a bit more. The user might figure out that it's a partition of a
> table, so the constraint is inherited from the partitioned table. But
> it would help if we gave a hint about where the constraint was
> inherited from, e.g. the error message itself could read "cannot drop
> constraint "p_a_check" on relation "p1" inherited from "partitioned
> table's name"", OR a hint "you may drop constraint "p_a_check" on the
> partitioned table "partitioned table's name"". It might even suffice to
> say "partition p1" instead of "relation p1" so that the user gets a clue.

It might be a good idea to mention in the HINT message that the table is a
partition.  The ERROR message can remain the same.
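
For illustration only, a rough sketch of what that could look like, assuming
the change goes near the existing error in ATExecDropConstraint() in
tablecmds.c; the helper, variable names, and hint wording here are all
assumptions, not a proposed patch:

#include "postgres.h"
#include "utils/rel.h"

/*
 * Rough sketch only, not a patch: the existing "cannot drop inherited
 * constraint" error could gain a HINT when the relation is a partition.
 * The helper and its arguments are hypothetical; the real change would
 * simply extend the existing ereport() call in ATExecDropConstraint().
 */
static void
report_inherited_constraint_drop(Relation rel, const char *constrName)
{
	if (rel->rd_rel->relispartition)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
				 errmsg("cannot drop inherited constraint \"%s\" of relation \"%s\"",
						constrName, RelationGetRelationName(rel)),
				 errhint("You can drop the constraint on the partitioned table instead.")));

	ereport(ERROR,
			(errcode(ERRCODE_INVALID_TABLE_DEFINITION),
			 errmsg("cannot drop inherited constraint \"%s\" of relation \"%s\"",
					constrName, RelationGetRelationName(rel))));
}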

Thanks,
Amit




Re: Remove mention in docs that foreign keys on partitioned tables are not supported

2018-07-01 Thread Ashutosh Bapat
On Fri, Jun 29, 2018 at 6:35 PM, Tomas Vondra
 wrote:
>
>
> On 06/29/2018 02:30 PM, Robert Haas wrote:
>>
>> On Wed, Jun 27, 2018 at 10:26 PM, Amit Langote
>>  wrote:
>>>
>>> By the way, picking up on the word "inherited" in the error message shown
>>> above, I wonder if you decided against using similar terminology
>>> intentionally.
>>
>>
>> Good question.
>>
>>> I guess the thinking there is that the terminology being
>>> used extensively with columns and constraints ("inherited column/check
>>> constraint cannot be dropped", etc.) is just a legacy of partitioning
>>> sharing implementation with inheritance.
>>
>>
>> It seems to me that we can talk about things being inherited by
>> partitions even if we're calling the feature partitioning, rather than
>> inheritance.  Maybe that's confusing, but then again, maybe it's not
>> that confusing.
>>
>
> my 2c: I don't think it's confusing. Inheritance is a generic concept, not
> attached exclusively to the "table inheritance". If you look at docs from
> other databases, you'll see "partition inherits" all the time.

I generally agree with this. But I think we need to think more in the
context of the particular example Amit gave.

-- quoting Amit's example,

   Table "public.p1"
 Column |  Type   | Collation | Nullable | Default
+-+---+--+-
 a  | integer |   |  |
 b  | integer |   |  |
Partition of: p FOR VALUES IN (1)
Check constraints:
"p1_b_check" CHECK (b >= 0)
"p_a_check" CHECK (a >= 0)

alter table p1 drop constraint p_a_check;
ERROR:  cannot drop inherited constraint "p_a_check" of relation "p1"

--unquote

This constraint was added to the partitioned table and inherited from
there. If the user wants to drop that constraint for some reason, this
error message doesn't help. The error message tells why he can't drop
it, but doesn't tell the user, directly or indirectly, what he should
do in order to drop it. When there was only a single way to "inherit"
things, i.e. table inheritance, users could probably guess that. But
now that there are multiple ways to inherit things, we have to help the
user a bit more. The user might figure out that it's a partition of a
table, so the constraint is inherited from the partitioned table. But
it would help if we gave a hint about where the constraint was
inherited from, e.g. the error message itself could read "cannot drop
constraint "p_a_check" on relation "p1" inherited from "partitioned
table's name"", OR a hint "you may drop constraint "p_a_check" on the
partitioned table "partitioned table's name"". It might even suffice to
say "partition p1" instead of "relation p1" so that the user gets a clue.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company



pg_stat_statements_reset() function execute permissions doc update

2018-07-01 Thread Haribabu Kommi
The pg_stat_statements_reset() function can be executed by superusers and
members of the role pg_read_all_stats, but the documentation doesn't say
that.  This was found during the code review of another patch in [1].

Patch attached.  This needs to be back-patched to version 10, where the
default roles were added.

[1] -
https://www.postgresql.org/message-id/flat/3368121530260059%40web21g.yandex.ru#655724e8e402df4f8b29fc91d44cf260

Regards,
Haribabu Kommi
Fujitsu Australia


0001-pg_stat_statements_reset-function-execution-permissi.patch
Description: Binary data


Re: Monitoring time of fsyncing WALs

2018-07-01 Thread Craig Ringer
On 1 July 2018 at 11:29, Michael Paquier  wrote:


>
> So at the end, I would like to use the proposed patch and call it a
> day.  Thoughts?
>
>
Patch looks good.

I'll hijack the thread to add a few more perf/dtrace tracepoints in the WAL
code, as they were also missing. Proposed rider patch attached.

I've updated
https://wiki.postgresql.org/wiki/Profiling_with_perf#PostgreSQL_pre-defined_tracepoint_events
to document how to add tracepoints.

It's not that hard to trace the duration of a given function call with
dtrace without such annotations, but they make it much easier to discover
where in a large codebase to look, providing a form of documentation. With
perf they make life much easier. I should add some more to make it easier
to analyse relation extension contention on indexes and the heap,
instrument btree index ops like page splits, instrument heavyweight
locking (beyond LOCK_WAIT_START), etc. They're also handy for gdb's
"info probes"; see
https://sourceware.org/gdb/onlinedocs/gdb/Static-Probe-Points.html.

BTW, we might want to instrument the pgstat_ counter calls
and pgstat_report_wait_start / pgstat_report_wait_end, but it's easy enough
to use dynamic tracepoints for those so I haven't yet. Maybe even just
document them as points of interest.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From 74cfc819e2c4323613191bcbf6c91f128617c353 Mon Sep 17 00:00:00 2001
From: Craig Ringer 
Date: Mon, 2 Jul 2018 11:33:46 +0800
Subject: [PATCH v1] Add more statically defined tracepoints around xlog
 handling

New tracepoints are added to allow for tracing of WAL flushes and
segment initialization/recycling.

The existing wal_insert tracepoint is renamed to wal_insert_start
and paired with a wal_insert_done, so that insert durations
may be recorded including fsync time.
---
 src/backend/access/transam/xlog.c   | 14 ++
 src/backend/access/transam/xloginsert.c |  4 +++-
 src/backend/utils/probes.d  | 18 +-
 3 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index dcfef36591..b3dead5ac0 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -955,6 +955,8 @@ static void WALInsertLockUpdateInsertingAt(XLogRecPtr insertingAt);
  * (LSN is the XLOG point up to which the XLOG must be flushed to disk
  * before the data page can be written out.  This implements the basic
  * WAL rule "write the log before the data".)
+ *
+ * You want the WAL_INSERT tracepoints if you're looking for activity here.
  */
 XLogRecPtr
 XLogInsertRecord(XLogRecData *rdata,
@@ -2810,6 +2812,8 @@ XLogFlush(XLogRecPtr record)
 			 (uint32) (LogwrtResult.Flush >> 32), (uint32) LogwrtResult.Flush);
 #endif
 
+	TRACE_POSTGRESQL_WAL_FLUSH_START(record);
+
 	START_CRIT_SECTION();
 
 	/*
@@ -2942,6 +2946,8 @@ XLogFlush(XLogRecPtr record)
 			 "xlog flush request %X/%X is not satisfied --- flushed only to %X/%X",
 			 (uint32) (record >> 32), (uint32) record,
 			 (uint32) (LogwrtResult.Flush >> 32), (uint32) LogwrtResult.Flush);
+
+	TRACE_POSTGRESQL_WAL_FLUSH_DONE();
 }
 
 /*
@@ -3190,6 +3196,8 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 
 	XLogFilePath(path, ThisTimeLineID, logsegno, wal_segment_size);
 
+	TRACE_POSTGRESQL_WAL_SEG_INIT_START();
+
 	/*
 	 * Try to use existent file (checkpoint maker may have created it already)
 	 */
@@ -3204,7 +3212,10 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 		 errmsg("could not open file \"%s\": %m", path)));
 		}
 		else
+		{
+			TRACE_POSTGRESQL_WAL_SEG_INIT_DONE(*use_existent);
 			return fd;
+		}
 	}
 
 	/*
@@ -3327,6 +3338,7 @@ XLogFileInit(XLogSegNo logsegno, bool *use_existent, bool use_lock)
 
 	elog(DEBUG2, "done creating and filling new WAL file");
 
+	TRACE_POSTGRESQL_WAL_SEG_INIT_DONE(*use_existent);
 	return fd;
 }
 
@@ -10156,6 +10168,7 @@ assign_xlog_sync_method(int new_sync_method, void *extra)
 void
 issue_xlog_fsync(int fd, XLogSegNo segno)
 {
+	TRACE_POSTGRESQL_WAL_SYNC_START(fd, segno);
 	switch (sync_method)
 	{
 		case SYNC_METHOD_FSYNC:
@@ -10191,6 +10204,7 @@ issue_xlog_fsync(int fd, XLogSegNo segno)
 			elog(PANIC, "unrecognized wal_sync_method: %d", sync_method);
 			break;
 	}
+	TRACE_POSTGRESQL_WAL_SYNC_DONE();
 }
 
 /*
diff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c
index 5bea073a2b..8c9b57b0da 100644
--- a/src/backend/access/transam/xloginsert.c
+++ b/src/backend/access/transam/xloginsert.c
@@ -429,7 +429,7 @@ XLogInsert(RmgrId rmid, uint8 info)
   XLR_CHECK_CONSISTENCY)) != 0)
 		elog(PANIC, "invalid xlog info mask %02X", info);
 
-	TRACE_POSTGRESQL_WAL_INSERT(rmid, info);
+	TRACE_POSTGRESQL_WAL_INSERT_START(rmid, info);
 
 	/*
 	 * In bootstrap mode, we don't actually log anything but XLOG resources;
@@ -464,6 +464,8 @@ 

mxid_age() and age(xid) appear undocumented

2018-07-01 Thread David Rowley
I see d692308cf494f6126 mentions mxid_age() in passing, but there
appears to be no formal definition of either of these functions.

Should there be?

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Large Commitfest items

2018-07-01 Thread Andrew Dunstan
On Sun, Jul 1, 2018 at 9:36 PM, Craig Ringer  wrote:
> On 2 July 2018 at 02:50, Andres Freund  wrote:
>>
>> Hi,
>>
>> On 2018-07-01 14:46:47 -0400, Andrew Dunstan wrote:
>> > There has been some discussion around excluding large items from the
>> > current commitfest, for several reasons. However I don't recall ever
>> > seeing a definition of a large item. It seems to be a bit like "I know
>> > it when I see it."   I've been looking at the current commitfest
>> > entries. Based on that I suggest a heuristic that says a commitfest
>> > item with patches greater than 5000 lines is large.
>>
>> FWIW, I personally think the criteria should rather be "old" or "very
>> small". I.e. for patches that have waited for review, being large
>> shouldn't necessarily be an impediment to getting worked on (depending
>> on invasiveness, maybe not committed), and "very small" for newer things
>> should be way below 5 kloc.
>>
>
> I agree. I think the idea is to stop people (um, totally not guilty of this)
> from dropping big or intrusive patches in late CFs.
>
> A 10 line patch can be massively intrusive and contentious. A 5000 line
> patch can be a mechanical change that nobody disagrees with, or a mature
> patch that just needed a few tweaks and missed commit in the last CF.
>
> If a line limit is used, we'll get people optimising for the line limit. I
> don't think that's a win.
>
> This benefits from being fuzzy IMO.
>

I totally agree. I never intended to apply this in anything but a
fuzzy way. I just used a heuristic as a way of breaking up the work a
bit.

Note that the intention is to treat this CF a little differently, as
it's taking place while we're still wrapping up Release 11.

If by "very small" we mean small impact, then the 22,000 line update
to the snowball stemmers is argubly small :-)

The following large items (by my rough heuristic) are marked Ready For
Committer and have been around for 3 to 5 CFs:

Item 1160 Improve Geometric Types
Item 1294 Custom Compression Methods
Item 1421 Flexible Configuration for FTS

I propose we keep them.

As I mentioned, the update to the Snowball stemmers is fairly low
impact, and we can reasonably keep it.

Discussion seems to be ongoing re Pluggable storage API, and in any
case it could have a huge impact, so I suggest we push it out to
September, even though it's been around for 5 CFs.

The following possibly qualify as "old", and could be kept on that
basis, depending on how large we think their impact might be:

Item 1062 Generic Type Subscripting
Item 1238 Multivariate MCV lists and histograms
Item 1247 push aggregation down to base relations and joins
Item 1348 BRIN Bloom and multi-minmax indexes

I'd like to hear some discussion on these.

That leaves the following "large" items which aren't particularly old:

Item 1471 jsonpath
Item 1472 SQL/JSON functions
Item 1473 JSONTABLE
Item 1553 Advanced partition matching for partition-wise join
Item 1574 Transactions involving multiple postgres foreign servers
Item 1648 Precalculate Stable Functions
Item 1649 Undo Log
Item 1685 Push down aggregates below joins

I suggest we keep jsonpath so we can make some progress on SQL/JSON,
and move the remainder to the September CF.

Before long I will post a similar examination of the next set of
smaller items (2000 to 5000 lines).

cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



RE: Bulk Insert into PostgreSQL

2018-07-01 Thread Tsunakawa, Takayuki
From: Peter Geoghegan [mailto:p...@bowt.ie]
> What kind of data was indexed? Was it a bigserial primary key, or
> something else?

Sorry, I don't remember it.  But the table was for storing some machine usage 
data, and I don't think any sequence was used in the index.

According to my faint memory, iostat showed many reads on the database storage, 
and correspondingly pstack showed ReadBufferExtended during the btree 
operations.  shared_buffers was multiple GBs.  I wondered why btree operations 
didn't benefit from the caching of non-leaf nodes.

Regards
Takayuki Tsunakawa





Re: Commitfest 2018-07 is underway

2018-07-01 Thread Tatsuo Ishii
>>> Does this test doc building?
>>
>> Good question. Thomas?
> 
> Yes.  You can't see the resulting HTML or anything... but it does fail
> if you break the documentation build.

Great! Thanks for the explanation.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp



Re: Bulk Insert into PostgreSQL

2018-07-01 Thread Peter Geoghegan
On Sun, Jul 1, 2018 at 5:19 PM, Tsunakawa, Takayuki
 wrote:
> 400 GB / 15 hours = 7.6 MB/s
>
> That looks too slow.  I experienced similar slowness.  While our user tried
> to INSERT (not COPY) a billion records, they reported that INSERTs slowed
> down by 10 times or so after inserting about 500 million records.  Periodic
> pstack runs on Linux showed that the backend was busy in btree operations.
> I didn't pursue the cause due to other work, but there might be something
> to be improved.

What kind of data was indexed? Was it a bigserial primary key, or
something else?

-- 
Peter Geoghegan



Re: Large Commitfest items

2018-07-01 Thread Craig Ringer
On 2 July 2018 at 02:50, Andres Freund  wrote:

> Hi,
>
> On 2018-07-01 14:46:47 -0400, Andrew Dunstan wrote:
> > There has been some discussion around excluding large items from the
> > current commitfest, for several reasons. However I don't recall ever
> > seeing a definition of a large item. It seems to be a bit like "I know
> > it when I see it."   I've been looking at the current commitfest
> > entries. Based on that I suggest a heuristic that says a commitfest
> > item with patches greater than 5000 lines is large.
>
> FWIW, I personally think the criteria should rather be "old" or "very
> small". I.e. for patches that have waited for review, being large
> shouldn't necessarily be an impediment to getting worked on (depending
> on invasiveness, maybe not committed), and "very small" for newer things
> should be way below 5 kloc.
>
>
I agree. I think the idea is to stop people (um, totally not guilty of
this) from dropping big or intrusive patches in late CFs.

A 10 line patch can be massively intrusive and contentious. A 5000 line
patch can be a mechanical change that nobody disagrees with, or a mature
patch that just needed a few tweaks and missed commit in the last CF.

If a line limit is used, we'll get people optimising for the line limit. I
don't think that's a win.

This benefits from being fuzzy IMO.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: Synchronous replay take III

2018-07-01 Thread Thomas Munro
On Sat, Mar 3, 2018 at 2:11 AM, Adam Brusselback
 wrote:
> Thanks Thomas, appreciate the rebase and the work you've done on this.
> I should have some time to test this out over the weekend.

Rebased.  Moved to September.  I still need to provide a TAP test and
explain that weirdness reported by Dmitry Dolgov, but I didn't get to
that in time for the bonus early commitfest that we're now in.

-- 
Thomas Munro
http://www.enterprisedb.com


0001-Synchronous-replay-mode-for-avoiding-stale-reads--v6.patch
Description: Binary data


RE: Bulk Insert into PostgreSQL

2018-07-01 Thread Tsunakawa, Takayuki
From: Srinivas Karthik V [mailto:skarthikv.i...@gmail.com]
> I was using the COPY command to load. Removing the primary key constraint on
> the table and then loading it helps a lot. In fact, a 400 GB table was loaded
> and the primary key constraint was added in around 15 hours.  Thanks for the
> wonderful suggestions.

400 GB / 15 hours = 7.6 MB/s

That looks too slow.  I experienced similar slowness.  While our user tried
to INSERT (not COPY) a billion records, they reported that INSERTs slowed
down by 10 times or so after inserting about 500 million records.  Periodic
pstack runs on Linux showed that the backend was busy in btree operations.
I didn't pursue the cause due to other work, but there might be something
to be improved.


Regards
Takayuki Tsunakawa





Re: Commitfest 2018-07 is underway

2018-07-01 Thread Thomas Munro
On Mon, Jul 2, 2018 at 11:24 AM, Andrew Dunstan
 wrote:
> On Sun, Jul 1, 2018 at 7:02 PM, Tatsuo Ishii  wrote:
>>> Patch authors are reminded of the very useful patch tester too at
>>> . You should check regularly if your
>>> patch is still applying, building, and testing nicely. There are
>>> currently quite a significant number of patches listed which don't
>>> apply or don't build/test cleanly in travis or appveyor.
>>
>> Does this test doc building?
>
> Good question. Thomas?

Yes.  You can't see the resulting HTML or anything... but it does fail
if you break the documentation build.

-- 
Thomas Munro
http://www.enterprisedb.com



Re: Commitfest 2018-07 is underway

2018-07-01 Thread Andrew Dunstan
On Sun, Jul 1, 2018 at 7:02 PM, Tatsuo Ishii  wrote:
>> Patch authors are reminded of the very useful patch tester too at
>> . You should check regularly if your
>> patch is still applying, building, and testing nicely. There are
>> currently quite a significant number of patches listed which don't
>> apply or don't build/test cleanly in travis or appveyor.
>
> Does this test doc building?
>


Good question. Thomas?

cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Commitfest 2018-07 is underway

2018-07-01 Thread Tatsuo Ishii
> Patch authors are reminded of the very useful patch tester too at
> . You should check regularly if your
> patch is still applying, building, and testing nicely. There are
> currently quite a significant number of patches listed which don't
> apply or don't build/test cleanly in travis or appveyor.

Does this test doc building?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp



Re: [HACKERS] Removing [Merge]Append nodes which contain a single subpath

2018-07-01 Thread David Rowley
Now that the September 'fest is open for new patches I'm going to move
this patch over there.  This patch has become slightly less important
than some other stuff, but I'd still like to come back to it.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Postgres 11 release notes

2018-07-01 Thread Jonathan S. Katz

> On Jun 30, 2018, at 5:52 PM, Tom Lane  wrote:
> 
> Yaroslav  writes:
>> I've noticed the following in v11 release notes:
>> "Window functions now support all options shown in the SQL:2011 standard,
>> including RANGE distance PRECEDING/FOLLOWING, GROUPS mode, and frame
>> exclusion options"
> 
>> But, as far as I see, IGNORE NULLS option for lead, lag, etc. is still not
>> supported. May be, the item could be reformulated?
> 
> Mmm, yeah, I suppose it should say "all framing options" rather than
> implying that we've implemented every other window-related frammish
> there is in the spec.

+1. Attached is a patch that does exactly that.

Jonathan



0001-window-docs.patch
Description: Binary data


Re: Large Commitfest items

2018-07-01 Thread Andres Freund
Hi,

On 2018-07-01 14:46:47 -0400, Andrew Dunstan wrote:
> There has been some discussion around excluding large items from the
> current commitfest, for several reasons. However I don't recall ever
> seeing a definition of a large item. It seems to be a bit like "I know
> it when I see it."   I've been looking at the current commitfest
> entries. Based on that I suggest a heuristic that says a commitfest
> item with patches greater than 5000 lines is large.

FWIW, I personally think the criteria should rather be "old" or "very
small". I.e. for patches that have waited for review, being large
shouldn't necessarily be an impediment to getting worked on (depending
on invasiveness, maybe not committed), and "very small" for newer things
should be way below 5 kloc.

Greetings,

Andres Freund



Large Commitfest items

2018-07-01 Thread Andrew Dunstan
There has been some discussion around excluding large items from the
current commitfest, for several reasons. However I don't recall ever
seeing a definition of a large item. It seems to be a bit like "I know
it when I see it."   I've been looking at the current commitfest
entries. Based on that I suggest a heuristic that says a commitfest
item with patches greater than 5000 lines is large.

That's around 20 items in the current commitfest. I think we need to
be able to use some discretion to allow items with more lines or
exclude items with fewer lines that nevertheless have far-reaching
impacts, but this seems like a reasonable starting point.

We could double that number by reducing the 5000 to 2000, but that
seems too restrictive to me.

Thoughts?

cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Should contrib modules install .h files?

2018-07-01 Thread Andres Freund
Hi,

On 2018-07-01 19:23:03 +0100, Andrew Gierth wrote:
> So I have this immediate problem: a PGXS build of a module, specifically
> an hstore transform for a non-core PL, is much harder than it should be
> because it has no way to get at hstore.h since that file is never
> installed anywhere.
> 
> Should that be changed?

I've hit this before, and the more capable our extension framework and
the more complex individual extensions get, the more we'll hit this. One
question is where to install them - extensions with headers often just
have them in the source code itself, which isn't a great solution.
contrib/$extension/ or $extension/ both seem OK, but I'd be somewhat
against just $extension.h.

Greetings,

Andres Freund



Should contrib modules install .h files?

2018-07-01 Thread Andrew Gierth
So I have this immediate problem: a PGXS build of a module, specifically
an hstore transform for a non-core PL, is much harder than it should be
because it has no way to get at hstore.h since that file is never
installed anywhere.

Should that be changed?

-- 
Andrew (irc:RhodiumToad)



Re: Cache lookup errors with functions manipulation object addresses

2018-07-01 Thread Andrew Dunstan




On 07/01/2018 10:27 AM, Michael Paquier wrote:
> On Wed, May 16, 2018 at 01:03:18PM +0900, Michael Paquier wrote:
>> Okay, I have done so in the updated set attached.  I have added some
>> documentation as well in fdwhandler.sgml about those two new things.
>> That's too late for v11 of course, so let them sit until the time
>> comes.
>
> Attached are refreshed versions for this commit fest.  The deal for this
> commit fest is to not consider large patches this time, so let me
> suggest discarding 0003 even if it is very mechanical.  I would like
> however to get 0001 and 0002 merged during this commit fest so that we
> can move on with this thread, as those are simple changes, and finish
> 0003 afterwards.

I think you're asserting far too broad a policy for the CF, and in any 
case there has been no discussion of what exactly is a large patch. I 
don't see any great need to defer patch 3. It is substantial although 
not what I would class as large, but it also has relatively low impact, 
ISTM.


cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: branches_of_interest.txt

2018-07-01 Thread Tom Lane
Andrew Dunstan  writes:
> This file on the buildfarm server is used to tell clients which branches 
> we'd like built. When a new stable branch is created it's added manually 
> to this file, and when one gets to EOL it's removed from the file. This 
> is a rather cumbersome process, and it occurred to me that it could be 
> streamlined by our keeping it in the core repo instead.

I can see the value of people other than you being able to change it,
but keeping it in the core repo seems like a kluge not a proper solution.
In particular, once it'd been around for awhile so that the master copy
had diverged from the back branches' copies, that would be pretty
confusing IMO.

Is it worth the trouble of having a separate repo for info that shouldn't
be tied to a particular PG development branch?  I'm not quite sure what
else we would keep there besides this file, but perhaps other use-cases
will come up.  Some of the stuff in src/tools/ has a bit of this flavor.

regards, tom lane



Re: Cache lookup errors with functions manipulation object addresses

2018-07-01 Thread Michael Paquier
On Wed, May 16, 2018 at 01:03:18PM +0900, Michael Paquier wrote:
> Okay, I have done so in the updated set attached.  I have added some
> documentation as well in fdwhandler.sgml about those two new things.
> That's too late for v11 of course, so let them sit until the time
> comes.

Attached are refreshed versions for this commit fest.  The deal for this
commit fest is to not consider large patches this time, so let me
suggest discarding 0003 even if it is very mechanical.  I would like
however to get 0001 and 0002 merged during this commit fest so that we
can move on with this thread, as those are simple changes, and finish
0003 afterwards.
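
For anyone reviewing, here is a hypothetical caller showing how the new
missing_ok variants from 0001 could be used; this snippet is illustrative
only and not part of the attached patches:

#include "postgres.h"

#include "foreign/foreign.h"

/*
 * Hypothetical caller, for illustration only: look up a foreign server
 * while tolerating it having been dropped concurrently.
 */
static const char *
foreign_server_name_if_exists(Oid serverid)
{
	ForeignServer *server;

	/* missing_ok = true: get a NULL result back instead of an ERROR */
	server = GetForeignServerExtended(serverid, true);

	if (server == NULL)
		return NULL;

	return server->servername;
}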
--
Michael
From ecd826a82b43011a831dc0b52d0b2a2d5d09fa9a Mon Sep 17 00:00:00 2001
From: Michael Paquier 
Date: Sun, 1 Jul 2018 23:26:10 +0900
Subject: [PATCH 1/3] Extend lookup routines for FDW and foreign server with
 NULL handling

The cache lookup routines for foreign-data wrappers and foreign servers
are extended with an extra argument able to control if an error or a
NULL object is returned to the caller in the event of an undefined
object. This is added in a set of new routines so as not to impact
existing FDW plugins unnecessarily.
---
 doc/src/sgml/fdwhandler.sgml  | 30 +
 src/backend/foreign/foreign.c | 36 +--
 src/include/foreign/foreign.h |  4 
 3 files changed, 68 insertions(+), 2 deletions(-)

diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml
index 7b758bdf09..bfc538f249 100644
--- a/doc/src/sgml/fdwhandler.sgml
+++ b/doc/src/sgml/fdwhandler.sgml
@@ -1404,6 +1404,21 @@ ReparameterizeForeignPathByChild(PlannerInfo *root, List *fdw_private,
 
 
 ForeignDataWrapper *
+GetForeignDataWrapperExtended(Oid fdwid, bool missing_ok);
+
+
+ This function returns a ForeignDataWrapper
+ object for the foreign-data wrapper with the given OID.  A
+ ForeignDataWrapper object contains properties
+ of the FDW (see foreign/foreign.h for details).
+ If missing_ok is true, a NULL
+ result is returned to the caller instead of an error for an undefined
+ FDW.
+
+
+
+
+ForeignDataWrapper *
 GetForeignDataWrapper(Oid fdwid);
 
 
@@ -1416,6 +1431,21 @@ GetForeignDataWrapper(Oid fdwid);
 
 
 ForeignServer *
+GetForeignServerExtended(Oid serverid, bool missing_ok);
+
+
+ This function returns a ForeignServer object
+ for the foreign server with the given OID.  A
+ ForeignServer object contains properties
+ of the server (see foreign/foreign.h for details).
+ If missing_ok is true, a NULL
+ result is returned to the caller instead of an error for an undefined
+ foreign server.
+
+
+
+
+ForeignServer *
 GetForeignServer(Oid serverid);
 
 
diff --git a/src/backend/foreign/foreign.c b/src/backend/foreign/foreign.c
index eac78a5d31..01b5175e71 100644
--- a/src/backend/foreign/foreign.c
+++ b/src/backend/foreign/foreign.c
@@ -33,6 +33,18 @@
  */
 ForeignDataWrapper *
 GetForeignDataWrapper(Oid fdwid)
+{
+	return GetForeignDataWrapperExtended(fdwid, false);
+}
+
+
+/*
+ * GetForeignDataWrapperExtended -	look up the foreign-data wrapper
+ * by OID. If missing_ok is true, return NULL if the object cannot be
+ * found instead of raising an error.
+ */
+ForeignDataWrapper *
+GetForeignDataWrapperExtended(Oid fdwid, bool missing_ok)
 {
 	Form_pg_foreign_data_wrapper fdwform;
 	ForeignDataWrapper *fdw;
@@ -43,7 +55,11 @@ GetForeignDataWrapper(Oid fdwid)
 	tp = SearchSysCache1(FOREIGNDATAWRAPPEROID, ObjectIdGetDatum(fdwid));
 
 	if (!HeapTupleIsValid(tp))
-		elog(ERROR, "cache lookup failed for foreign-data wrapper %u", fdwid);
+	{
+		if (!missing_ok)
+			elog(ERROR, "cache lookup failed for foreign-data wrapper %u", fdwid);
+		return NULL;
+	}
 
 	fdwform = (Form_pg_foreign_data_wrapper) GETSTRUCT(tp);
 
@@ -91,6 +107,18 @@ GetForeignDataWrapperByName(const char *fdwname, bool missing_ok)
  */
 ForeignServer *
 GetForeignServer(Oid serverid)
+{
+	return GetForeignServerExtended(serverid, false);
+}
+
+
+/*
+ * GetForeignServerExtended - look up the foreign server definition. If
+ * missing_ok is true, return NULL if the object cannot be found instead
+ * of raising an error.
+ */
+ForeignServer *
+GetForeignServerExtended(Oid serverid, bool missing_ok)
 {
 	Form_pg_foreign_server serverform;
 	ForeignServer *server;
@@ -101,7 +129,11 @@ GetForeignServer(Oid serverid)
 	tp = SearchSysCache1(FOREIGNSERVEROID, ObjectIdGetDatum(serverid));
 
 	if (!HeapTupleIsValid(tp))
-		elog(ERROR, "cache lookup failed for foreign server %u", serverid);
+	{
+		if (!missing_ok)
+			elog(ERROR, "cache lookup failed for foreign server %u", serverid);
+		return NULL;
+	}
 
 	serverform = (Form_pg_foreign_server) GETSTRUCT(tp);
 
diff --git a/src/include/foreign/foreign.h b/src/include/foreign/foreign.h
index 3ca12e64d2..5cc89e967c 100644
--- a/src/include/foreign/foreign.h
+++ b/src/include/foreign/foreign.h
@@ -70,9 +70,13 @@ typedef struct ForeignTable
 
 
 

Re: Small fixes about backup history file in doc and pg_standby

2018-07-01 Thread Peter Eisentraut
On 27.06.18 18:22, Yugo Nagata wrote:
> On Wed, 27 Jun 2018 18:36:46 +0900
> Michael Paquier  wrote:
> 
>> On Wed, Jun 27, 2018 at 05:42:07PM +0900, Yugo Nagata wrote:
>>> On Wed, 27 Jun 2018 00:58:18 +0900
>>> Fujii Masao  wrote:
> In addition, the current pg_standby can still handle a backup history
> file that is never requested. It is harmless but unnecessary code.
> Another attached patch (pg_standby.patch) removes this part of the code.

 Since this is not a bug fix, let's discuss this in the 12dev cycle.
>>
>> +1.
> 
> I added this to CF.

committed

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: patch to allow disable of WAL recycling

2018-07-01 Thread Peter Eisentraut
On 26.06.18 15:35, Jerry Jelinek wrote:
> Attached is a patch to provide an option to disable WAL recycling. We
> have found that this can help performance by eliminating
> read-modify-write behavior on old WAL files that are no longer resident
> in the filesystem cache. There is a lot more detail on the background of
> the motivation for this in the following thread.

Your patch describes this feature as a performance feature.  We would
need to see more measurements about what this would do on other
platforms and file systems than your particular one.  Also, we need to
be careful with user options that trade off reliability for performance
and describe them in much more detail.

If the problem is specifically the file system caching behavior, then we
could also consider using the dreaded posix_fadvise().
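
To sketch that idea (purely illustrative; which advice flag would actually
help, and where to call it, is exactly the kind of thing that would need
measurement), the call pattern would be something like:

#include <fcntl.h>

/*
 * Illustrative only: advise the kernel about an open WAL segment.
 * Whether POSIX_FADV_WILLNEED (prefetch a segment about to be recycled)
 * or POSIX_FADV_DONTNEED (drop one we are done with) helps on a given
 * platform is an open question and would need benchmarking.
 */
static int
advise_wal_segment(int fd, int advice)
{
	/* offset 0 and length 0 mean "the whole file" */
	return posix_fadvise(fd, 0, 0, advice);
}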

Then again, I can understand that turning off WAL recycling is sensible
on ZFS, since there is no point in preallocating space that will never
be used.  But then we should also turn off all other preallocation of
WAL files, including the creation of new (non-recycled) ones.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: libpq example doesn't work

2018-07-01 Thread Peter Eisentraut
On 27.06.18 08:29, Ideriha, Takeshi wrote:
> When I tried to use libpq, I found that the libpq example [1] does not work.
> That's because SELECT pg_catalog.set_config() never returns
> PGRES_COMMAND_OK, which the example expects, but actually returns
> PGRES_TUPLES_OK.

Fixed.  There were additional similar cases that I fixed also.
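
For reference, the corrected pattern looks roughly like this; a sketch in
the spirit of the libpq example program, not the exact documentation text:

#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"

/*
 * A SELECT -- even one that just calls set_config() -- returns
 * PGRES_TUPLES_OK, not PGRES_COMMAND_OK.
 */
static void
set_secure_search_path(PGconn *conn)
{
	PGresult   *res;

	res = PQexec(conn,
				 "SELECT pg_catalog.set_config('search_path', '', false)");
	if (PQresultStatus(res) != PGRES_TUPLES_OK)
	{
		fprintf(stderr, "set_config failed: %s", PQerrorMessage(conn));
		PQclear(res);
		PQfinish(conn);
		exit(1);
	}
	PQclear(res);
}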

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2018-07-01 Thread Peter Eisentraut
This patch set was not updated for the 2018-07 commitfest, so moved to -09.


On 09.03.18 17:07, Peter Eisentraut wrote:
> I think this patch is not going to be ready for PG11.
> 
> - It depends on some work in the thread "logical decoding of two-phase
> transactions", which is still in progress.
> 
> - Various details in the logical_work_mem patch (0001) are unresolved.
> 
> - This being partially a performance feature, we haven't seen any
> performance tests (e.g., which settings result in which latencies under
> which workloads).
> 
> That said, the feature seems useful and desirable, and the
> implementation makes sense.  There are documentation and tests.  But
> there is a significant amount of design and coding work still necessary.
> 
> Attached is a fixup patch that I needed to make it compile.
> 
> The last two patches in your series (0008, 0009) are labeled as bug
> fixes.  Would you like to argue that they should be applied
> independently of the rest of the feature?
> 


-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: location reporting in TAP test failures

2018-07-01 Thread Michael Paquier
On Sun, Jul 01, 2018 at 01:17:07PM +0200, Peter Eisentraut wrote:
> On 21.06.18 19:30, Heikki Linnakangas wrote:
>> Since this is purely a test change, IMO this could go to master now, no 
>> need to wait for until v12 development starts.
> 
> I didn't want to deal with the potential for some version differences in
> the test modules to cause rare problems, so v12 only for now.

Totally agreed.  Thanks for taking this approach.
--
Michael


signature.asc
Description: PGP signature


Re: location reporting in TAP test failures

2018-07-01 Thread Peter Eisentraut
On 21.06.18 19:30, Heikki Linnakangas wrote:
> Looks good. I wish there was some way to remind to do this for any 
> future test helper functions, too, but I don't have any ideas off the 
> top of my head.

Committed.

> Since this is purely a test change, IMO this could go to master now, no 
> need to wait for until v12 development starts.

I didn't want to deal with the potential for some version differences in
the test modules to cause rare problems, so v12 only for now.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Subplan result caching

2018-07-01 Thread Peter Eisentraut
On 23.05.18 11:31, Heikki Linnakangas wrote:
> I've been working on a patch to add a little cache to SubPlans, to speed 
> up queries with correlated subqueries, where the same subquery is 
> currently executed multiple times with the same parameters. The idea is 
> to cache the result of the subplan, with the correlation vars as the 
> cache key.

It looks like this patch might be "returned with feedback" for the moment?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Commitfest 2018-07 is underway

2018-07-01 Thread Andrew Dunstan



Hi


Commitfest 2018-07, the first of the release 12 development cycle, is 
underway. New patches will need to be submitted to the next commitfest, 
2018-09.



Patch authors are reminded of the very useful patch tester too at 
. You should check regularly if your 
patch is still applying, building, and testing nicely. There are 
currently quite a significant number of patches listed which don't apply 
or don't build/test cleanly in travis or appveyor.



I'll be posting a whole lot more information later today, but for now, 
let the reviewing and committing begin!



cheers


andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: ERROR: cannot start subtransactions during a parallel operation

2018-07-01 Thread Mai Peng
Hello, how could I relax the subtransaction restriction? I used the
PARALLEL UNSAFE option, but still have the same issue.
Rgds.

On Fri, Jun 29, 2018 at 20:47, Andres Freund  wrote:

> Hi,
>
> On 2018-06-29 20:37:23 +0200, Tomas Vondra wrote:
> > My guess is that it's a PL/pgSQL function with an EXCEPTION block, and
> > there's no easy way to "fix" that.
>
> Obviously not going to immediately help the OP, but I do think we should
> be able to relax the subtransaction restriction around parallelism
> without too much work.  Can't allow xids to be assigned, but that's
> probably ok for a lot of exception handling cases.
>
> Greetings,
>
> Andres Freund
>


Re: Microoptimization of Bitmapset usage in postgres_fdw

2018-07-01 Thread Michael Paquier
On Sun, Jun 17, 2018 at 07:23:19PM +0200, Daniel Gustafsson wrote:
> On 17 Jun 2018, at 14:47, Michael Paquier  wrote:
>> -   if (bms_num_members(clauses_attnums) < 2)
>> +   if (bms_membership(clauses_attnums) != BMS_MULTIPLE)
>> For this one, the comment above directly mentions that at least two
>> attnums need to be present, so it seems to me that the current coding is
>> easier to understand and intentional...  So I would be inclined to not
>> change it.
> 
> I don’t have any strong feelings either way, and will leave that call to the
> committer who picks this up.  I agree that the current coding is easy to
> understand but I don’t see this being much harder.

I have looked at that again, and pushed the portion for postgres_fdw as
the intention is clear, while leaving out the part from the statistics
per the comment close by.  Thanks!
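
For readers following along, both forms answer "does the set have at least
two members?"; the difference is only how much work is done to get there.
A quick illustrative sketch (not taken from the patch):

#include "postgres.h"

#include "nodes/bitmapset.h"

/*
 * Illustration only: both checks ask "does the set contain at least two
 * members?".  bms_membership() merely classifies the set as empty,
 * singleton, or multiple, so it can stop as soon as it has seen two
 * members, while bms_num_members() walks the whole set to count them.
 */
static bool
has_multiple_members(const Bitmapset *attnums)
{
	/* cheaper form */
	return bms_membership(attnums) == BMS_MULTIPLE;

	/* equivalent, but counts every member first: */
	/* return bms_num_members(attnums) >= 2; */
}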
--
Michael


signature.asc
Description: PGP signature