Re: [HACKERS] map_partition_varattnos() and whole-row vars

2017-07-27 Thread Amit Langote
On 2017/07/26 16:58, Amit Langote wrote:
> Rajkumar Raghuwanshi reported [1] on the "UPDATE partition key" thread
> that whole-row vars don't play nicely with partitioned table operations.
> 
> For example, when used to convert WITH CHECK OPTION constraint expressions
> and RETURNING target list expressions, it will error out if the
> expressions contained whole-row vars.  That's a bug, because whole-row
> vars are legal in those expressions.  I think the decision to output error
> upon encountering a whole-row in the input expression was based on the
> assumption that it will only ever be used to convert partition constraint
> expressions, which cannot contain those.  So, we can relax that
> requirement so that its other users are not bitten by it.
> 
> Attached fixes that.

Attached a revised version of the patch.

The updated patch moves the if (found_whole_row) elog(ERROR) bit in
map_partition_varattnos() to the callers.  Only the callers know whether
whole-row vars are unexpected in the expression they are asking
map_partition_varattnos() to convert.  Given the current message text
(which mentions "partition key"), it didn't seem appropriate to keep the
elog inside this function, since it's callable from places where that
message wouldn't make sense.

Thanks,
Amit
From 967bef28bb9ac25bb773934963f61c19c3ae7478 Mon Sep 17 00:00:00 2001
From: amit 
Date: Wed, 26 Jul 2017 16:45:46 +0900
Subject: [PATCH] Fix map_partition_varattnos to sometimes accept whole-row vars

It was thought that it would never encounter whole-row vars in its input
expressions (partition constraint expressions, for example).  But it is
also used to convert expressions in which whole-row vars are legal, such
as WITH CHECK OPTION constraint expressions and RETURNING target list
members.  So, add an argument that tells it whether or not to error out
on finding whole-row vars.

Adds tests in insert.sql and updatable_views.sql.

Reported by: Rajkumar Raghuwanshi
Report: 
https://postgr.es/m/CAKcux6%3Dz38gH4K6YAFi%2BYvo5tHTwBL4tam4VM33CAPZ5dDMk1Q%40mail.gmail.com
---
 src/backend/catalog/partition.c   | 17 ++---
 src/backend/commands/tablecmds.c  |  8 +++-
 src/backend/executor/nodeModifyTable.c| 18 ++
 src/include/catalog/partition.h   |  3 ++-
 src/test/regress/expected/insert.out  | 10 ++
 src/test/regress/expected/updatable_views.out | 10 ++
 src/test/regress/sql/insert.sql   |  6 ++
 src/test/regress/sql/updatable_views.sql  |  9 +
 8 files changed, 68 insertions(+), 13 deletions(-)

diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c
index e20ddce2db..824898939e 100644
--- a/src/backend/catalog/partition.c
+++ b/src/backend/catalog/partition.c
@@ -904,10 +904,10 @@ get_qual_from_partbound(Relation rel, Relation parent,
  */
 List *
 map_partition_varattnos(List *expr, int target_varno,
-						Relation partrel, Relation parent)
+						Relation partrel, Relation parent,
+						bool *found_whole_row)
 {
 	AttrNumber *part_attnos;
-	bool		found_whole_row;
 
 	if (expr == NIL)
 		return NIL;
@@ -915,14 +915,12 @@ map_partition_varattnos(List *expr, int target_varno,
 	part_attnos = convert_tuples_by_name_map(RelationGetDescr(partrel),
 											 RelationGetDescr(parent),
 											 gettext_noop("could not convert row type"));
+	*found_whole_row = false;
 	expr = (List *) map_variable_attnos((Node *) expr,
 										target_varno, 0,
 										part_attnos,
 										RelationGetDescr(parent)->natts,
-										&found_whole_row);
-	/* There can never be a whole-row reference here */
-	if (found_whole_row)
-		elog(ERROR, "unexpected whole-row reference found in partition key");
+										found_whole_row);
 
 	return expr;
 }
@@ -1783,6 +1781,7 @@ generate_partition_qual(Relation rel)
 	List	   *my_qual = NIL,
 			   *result = NIL;
 	Relation	parent;
+	bool		found_whole_row;
 
 	/* Guard against stack overflow due to overly deep partition tree */
 	check_stack_depth();
@@ -1825,7 +1824,11 @@ generate_partition_qual(Relation rel)
 	 * in it to bear this relation's attnos. It's safe to assume varno = 1
 	 * here.
 	 */
-	

[HACKERS] Re: [bug fix] PG10: libpq doesn't connect to alternative hosts when some errors occur

2017-07-27 Thread Noah Misch
On Fri, May 19, 2017 at 11:08:41AM +0900, Michael Paquier wrote:
> On Fri, May 19, 2017 at 11:01 AM, Tsunakawa, Takayuki
>  wrote:
> > From: pgsql-hackers-ow...@postgresql.org
> >> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Peter Eisentraut
> >> The problem is that if we decide to change the behavior mid-beta, then 
> >> we'll
> >> only have the rest of beta to find out whether people will like the other
> >> behavior.
> >>
> >> I would aim for the behavior that is most suitable for refinement in the
> >> future.  The current behavior seems to match that.
> >
> > I think the pre-final release period is the very timing for refinement, in 
> > the perspective of users and PG developers as users.
> 
> Sure that is the correct period to argue.

We've reached that period.  If anyone is going to push for a change here, now
is the time.  Absent such arguments, the behavior won't change.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Transactions involving multiple postgres foreign servers

2017-07-27 Thread Masahiko Sawada
On Thu, Jul 27, 2017 at 8:02 PM, Robert Haas  wrote:
> On Thu, Jul 27, 2017 at 5:08 AM, Stas Kelvich  
> wrote:
>> As far as I understand any solution that provides proper isolation for 
>> distributed
>> transactions in postgres will require distributed 2PC somewhere under the 
>> hood.
>> That is just consequence of parallelism that database allows — transactions 
>> can
>> abort due concurrent operations. So dichotomy is simple: either we need 2PC 
>> or
>> restrict write transactions to be physically serial.
>>
>> In particular both Postgres-XL/XC and postgrespro multimaster are using 2PC 
>> to
>> commit distributed transaction.
>
> Ah, OK.  I was imagining that a transaction manager might be
> responsible for managing both snapshots and distributed commit.  But
> if the transaction manager only handles the snapshots (how?) and the
> commit has to be done using 2PC, then we need this.

One way to provide snapshots to participant nodes is to send snapshot
data to them over the libpq protocol along with the query when the
coordinator node starts a transaction on a remote node (or perhaps we
can now use the snapshot-export infrastructure?).  IIUC Postgres-XL/XC
uses this approach.  That also requires all remote nodes to share the
same XID space.  Perhaps CSN-based snapshots could make this simpler.

>> Also I see the quite a big value in this patch even without 
>> tm/snapshots/whatever.
>> Right now fdw doesn’t guarantee neither isolation nor atomicity. And if one 
>> isn’t
>> doing cross-node analytical transactions it will be safe to live without 
>> isolation.
>> But living without atomicity means that some parts of data can be lost 
>> without simple
>> way to detect and fix that.
>
> OK, thanks for weighing in.
>

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Quorum commit for multiple synchronous replication.

2017-07-27 Thread Noah Misch
On Thu, Apr 06, 2017 at 08:55:37AM +0200, Petr Jelinek wrote:
> On 06/04/17 03:51, Noah Misch wrote:
> > On Thu, Apr 06, 2017 at 12:48:56AM +0900, Fujii Masao wrote:
> >> On Wed, Apr 5, 2017 at 3:45 PM, Noah Misch  wrote:
> >>> On Mon, Dec 19, 2016 at 09:49:58PM +0900, Fujii Masao wrote:
>  Regarding this feature, there are some loose ends. We should work on
>  and complete them until the release.
> 
>  (1)
>  Which synchronous replication method, priority or quorum, should be
>  chosen when neither FIRST nor ANY is specified in s_s_names? Right now,
>  a priority-based sync replication is chosen for keeping backward
>  compatibility. However some hackers argued to change this decision
>  so that a quorum commit is chosen because they think that most users
>  prefer to a quorum.

> >> The items (1) and (3) are not bugs. So I don't think that they need to be
> >> resolved before the beta release. After the feature freeze, many users
> >> will try and play with many new features including quorum-based syncrep.
> >> Then if many of them complain about (1) and (3), we can change the code
> >> at that timing. So we need more time that users can try the feature.
> > 
> > I've moved (1) to a new section for things to revisit during beta.  If 
> > someone
> > feels strongly that the current behavior is Wrong and must change, speak up 
> > as
> > soon as you reach that conclusion.  Absent such arguments, the behavior 
> > won't
> > change.
> > 
> 
> I was one of the people who said in original thread that I think the
> default behavior should change to quorum and I am still of that opinion.

This item appears under "decisions to recheck mid-beta".  If anyone is going
to push for a change here, now is the time.




Re: [HACKERS] segfault in HEAD when too many nested functions call

2017-07-27 Thread Andres Freund
On 2017-07-27 22:04:59 -0700, Noah Misch wrote:
> On Thu, Jul 27, 2017 at 09:49:18PM -0700, Andres Freund wrote:
> > On 2017-07-27 21:46:57 -0700, Noah Misch wrote:
> > > On Thu, Jul 27, 2017 at 02:29:32AM +, Noah Misch wrote:
> > > > On Mon, Jul 24, 2017 at 08:04:30AM +0100, Andres Freund wrote:
> > > > > 
> > > > > 
> > > > > On July 24, 2017 7:10:19 AM GMT+01:00, Noah Misch  
> > > > > wrote:
> > > > > >On Tue, Jul 18, 2017 at 01:04:10PM -0700, Andres Freund wrote:
> > > > > >> Ok, I'll flesh out the patch till Thursday.  But I do think we're
> > > > > >going
> > > > > >> to have to do something about the back branches, too.
> > > > > >
> > > > > >This PostgreSQL 10 open item is past due for your status update. 
> > > > > >Kindly send
> > > > > >a status update within 24 hours, and include a date for your 
> > > > > >subsequent
> > > > > >status
> > > > > >update.  Refer to the policy on open item ownership:
> > > > > >https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> > > > > 
> > > > > I sent out a note fleshed out patch last week, which Tom reviewed. 
> > > > > Planning to update it to address that review today or tomorrow.
> > > > 
> > > > This PostgreSQL 10 open item is past due for your status update.  
> > > > Kindly send
> > > > a status update within 24 hours, and include a date for your subsequent 
> > > > status
> > > > update.  Refer to the policy on open item ownership:
> > > > https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> > > 
> > > IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past 
> > > due
> > > for your status update.  Please reacquaint yourself with the policy on 
> > > open
> > > item ownership[1] and then reply immediately.  If I do not hear from you 
> > > by
> > > 2017-07-29 05:00 UTC, I will transfer this item to release management team
> > > ownership without further notice.
> > > 
> > > [1] 
> > > https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> > 
> > I've updated the patch based on review (today). Awaiting new review.
> > 
> > FWIW, I don't see the point of these messages when there is a new patch
> > version posted today.
> 
> The policy says, "Each update shall state a date when the community will
> receive another update".  Nothing you've sent today specifies a deadline for
> your next update, so your ownership of this item remains out of
> compliance.

For me that means the policy isn't quite right.  It's not like I can
force Tom to review the patch by a specific date. But the thread has
been progressing steadily over the last few days, so I'm not particularly
concerned.

Greetings,

Andres Freund




Re: [HACKERS] map_partition_varattnos() and whole-row vars

2017-07-27 Thread Noah Misch
On Wed, Jul 26, 2017 at 04:58:08PM +0900, Amit Langote wrote:
> Rajkumar Raghuwanshi reported [1] on the "UPDATE partition key" thread
> that whole-row vars don't play nicely with partitioned table operations.
> 
> For example, when used to convert WITH CHECK OPTION constraint expressions
> and RETURNING target list expressions, it will error out if the
> expressions contained whole-row vars.  That's a bug, because whole-row
> vars are legal in those expressions.  I think the decision to output error
> upon encountering a whole-row in the input expression was based on the
> assumption that it will only ever be used to convert partition constraint
> expressions, which cannot contain those.  So, we can relax that
> requirement so that its other users are not bitten by it.
> 
> Attached fixes that.
> 
> Adding this to the PG 10 open items list.

[Action required within three days.  This is a generic notification.]

The above-described topic is currently a PostgreSQL 10 open item.  Robert,
since you committed the patch believed to have created it, you own this open
item.  If some other commit is more relevant or if this does not belong as a
v10 open item, please let us know.  Otherwise, please observe the policy on
open item ownership[1] and send a status update within three calendar days of
this message.  Include a date for your subsequent status update.  Testers may
discover new open items at any time, and I want to plan to get them all fixed
well in advance of shipping v10.  Consequently, I will appreciate your efforts
toward speedy resolution.  Thanks.

[1] 
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com




Re: [HACKERS] segfault in HEAD when too many nested functions call

2017-07-27 Thread Noah Misch
On Thu, Jul 27, 2017 at 09:49:18PM -0700, Andres Freund wrote:
> On 2017-07-27 21:46:57 -0700, Noah Misch wrote:
> > On Thu, Jul 27, 2017 at 02:29:32AM +, Noah Misch wrote:
> > > On Mon, Jul 24, 2017 at 08:04:30AM +0100, Andres Freund wrote:
> > > > 
> > > > 
> > > > On July 24, 2017 7:10:19 AM GMT+01:00, Noah Misch  
> > > > wrote:
> > > > >On Tue, Jul 18, 2017 at 01:04:10PM -0700, Andres Freund wrote:
> > > > >> Ok, I'll flesh out the patch till Thursday.  But I do think we're
> > > > >going
> > > > >> to have to do something about the back branches, too.
> > > > >
> > > > >This PostgreSQL 10 open item is past due for your status update. 
> > > > >Kindly send
> > > > >a status update within 24 hours, and include a date for your subsequent
> > > > >status
> > > > >update.  Refer to the policy on open item ownership:
> > > > >https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> > > > 
> > > > I sent out a note fleshed out patch last week, which Tom reviewed. 
> > > > Planning to update it to address that review today or tomorrow.
> > > 
> > > This PostgreSQL 10 open item is past due for your status update.  Kindly 
> > > send
> > > a status update within 24 hours, and include a date for your subsequent 
> > > status
> > > update.  Refer to the policy on open item ownership:
> > > https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> > 
> > IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past due
> > for your status update.  Please reacquaint yourself with the policy on open
> > item ownership[1] and then reply immediately.  If I do not hear from you by
> > 2017-07-29 05:00 UTC, I will transfer this item to release management team
> > ownership without further notice.
> > 
> > [1] 
> > https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> 
> I've updated the patch based on review (today). Awaiting new review.
> 
> FWIW, I don't see the point of these messages when there is a new patch
> version posted today.

The policy says, "Each update shall state a date when the community will
receive another update".  Nothing you've sent today specifies a deadline for
your next update, so your ownership of this item remains out of compliance.




Re: [HACKERS] GSoC 2017: weekly progress reports (week 8)

2017-07-27 Thread Shubham Barai
Hi.

I am attaching a patch for predicate locking in the SP-GiST index.


Regards,
Shubham




On 26 July 2017 at 20:50, Shubham Barai  wrote:

> Project: Explicitly support predicate locks in index AMs besides b-tree
>
> Hi,
>
> During this week, I worked on predicate locking in spgist index. I think,
> for spgist index, predicate lock only on leaf pages will be enough as
> spgist searches can determine if there is a match or not only at leaf level.
>
> I have done following things in this week.
>
> 1) read the source code of spgist index to understand  the access method
>
> 2) found appropriate places to insert calls to existing functions
>
> 3) created tests (to verify serialization failures and to demonstrate the
> feature of reduced false positives) for 'point' and 'box' data types.
>
>
> link to code and tests: 
> https://github.com/shubhambaraiss/postgres/commit/d9ae709c93ff038478ada411c621faeabe6813cb
>
> I will attach the patch shortly.
>
>
> Regards,
> Shubham
>
>
>
>


Predicate-Locking-in-spgist-index.patch
Description: Binary data



Re: [HACKERS] segfault in HEAD when too many nested functions call

2017-07-27 Thread Andres Freund
On 2017-07-27 21:46:57 -0700, Noah Misch wrote:
> On Thu, Jul 27, 2017 at 02:29:32AM +, Noah Misch wrote:
> > On Mon, Jul 24, 2017 at 08:04:30AM +0100, Andres Freund wrote:
> > > 
> > > 
> > > On July 24, 2017 7:10:19 AM GMT+01:00, Noah Misch  
> > > wrote:
> > > >On Tue, Jul 18, 2017 at 01:04:10PM -0700, Andres Freund wrote:
> > > >> Ok, I'll flesh out the patch till Thursday.  But I do think we're
> > > >going
> > > >> to have to do something about the back branches, too.
> > > >
> > > >This PostgreSQL 10 open item is past due for your status update. 
> > > >Kindly send
> > > >a status update within 24 hours, and include a date for your subsequent
> > > >status
> > > >update.  Refer to the policy on open item ownership:
> > > >https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> > > 
> > > I sent out a note fleshed out patch last week, which Tom reviewed. 
> > > Planning to update it to address that review today or tomorrow.
> > 
> > This PostgreSQL 10 open item is past due for your status update.  Kindly 
> > send
> > a status update within 24 hours, and include a date for your subsequent 
> > status
> > update.  Refer to the policy on open item ownership:
> > https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> 
> IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past due
> for your status update.  Please reacquaint yourself with the policy on open
> item ownership[1] and then reply immediately.  If I do not hear from you by
> 2017-07-29 05:00 UTC, I will transfer this item to release management team
> ownership without further notice.
> 
> [1] 
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

I've updated the patch based on review (today). Awaiting new review.

FWIW, I don't see the point of these messages when there is a new patch
version posted today.




Re: [HACKERS] segfault in HEAD when too many nested functions call

2017-07-27 Thread Noah Misch
On Thu, Jul 27, 2017 at 02:29:32AM +, Noah Misch wrote:
> On Mon, Jul 24, 2017 at 08:04:30AM +0100, Andres Freund wrote:
> > 
> > 
> > On July 24, 2017 7:10:19 AM GMT+01:00, Noah Misch  wrote:
> > >On Tue, Jul 18, 2017 at 01:04:10PM -0700, Andres Freund wrote:
> > >> Ok, I'll flesh out the patch till Thursday.  But I do think we're
> > >going
> > >> to have to do something about the back branches, too.
> > >
> > >This PostgreSQL 10 open item is past due for your status update. 
> > >Kindly send
> > >a status update within 24 hours, and include a date for your subsequent
> > >status
> > >update.  Refer to the policy on open item ownership:
> > >https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> > 
> > I sent out a note fleshed out patch last week, which Tom reviewed. Planning 
> > to update it to address that review today or tomorrow.
> 
> This PostgreSQL 10 open item is past due for your status update.  Kindly send
> a status update within 24 hours, and include a date for your subsequent status
> update.  Refer to the policy on open item ownership:
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 10 open item is long past due
for your status update.  Please reacquaint yourself with the policy on open
item ownership[1] and then reply immediately.  If I do not hear from you by
2017-07-29 05:00 UTC, I will transfer this item to release management team
ownership without further notice.

[1] 
https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com




Re: [HACKERS] Patch: Write Amplification Reduction Method (WARM)

2017-07-27 Thread Pavan Deolasee
On Fri, Jul 28, 2017 at 5:57 AM, Peter Geoghegan  wrote:

> Pavan Deolasee  wrote:
> > I'll be happy if someone wants to continue hacking the patch further and
> > get it in a committable shape. I can stay actively involved. But TBH the
> > amount of time I can invest is far as compared to what I could during the
> > last cycle.
>
> That's disappointing.
>
>
Yes, it is even more so for me. But I was hard-pressed to choose between
Postgres-XL 10 and WARM. Given the ever-increasing interest in XL and my
ability to control the outcome, I thought it made sense to focus on XL for
now.


> I personally find it very difficult to assess something like this.


One good thing is that the patch is ready and fully functional, so those
who are keen can run real performance tests and see its actual impact.


> The
> problem is that even if you can demonstrate that the patch is strictly
> better than what we have today, the risk of reaching a local maxima
> exists.  Do we really want to double-down on HOT?
>

Well HOT has served us well for over a decade now. So I won't hesitate to
place my bets on WARM.


>
> If I'm not mistaken, the goal of WARM is, roughly speaking, to make
> updates that would not be HOT-safe today do a "partial HOT update".  My
> concern with that idea is that it doesn't do much for the worst case.
>

I see your point. But I would like to think of it this way: does the
technology significantly help many common use cases that are currently
not addressed by HOT? It probably won't help all workloads, that's a
given. Also, we don't have any credible alternative, while this patch has
progressed quite a lot. Maybe Robert will soon present the pluggable
storage/UNDO patch, and that will cover everything (and more) that is
currently covered by HOT/WARM, probably making many other things
redundant.

Thanks,
Pavan

-- 
 Pavan Deolasee   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Refreshing subscription relation state inside a transaction block

2017-07-27 Thread Masahiko Sawada
On Thu, Jul 27, 2017 at 9:31 AM, Masahiko Sawada  wrote:
> On Wed, Jul 26, 2017 at 10:29 PM, Robert Haas  wrote:
>> On Mon, Jun 19, 2017 at 4:30 AM, Masahiko Sawada  
>> wrote:
 I think that either of the options you suggested now would be better.
 We'll need that for stopping the tablesync which is in progress during
 DropSubscription as well as those will currently still keep running. I
 guess we could simply just register syscache callback, the only problem
 with that is we'd need to AcceptInvalidationMessages regularly while we
 do the COPY which is not exactly free so maybe killing at the end of
 transaction would be better (both for refresh and drop)?
>>>
>>> Attached patch makes table sync worker check its relation subscription
>>> state at the end of COPY and exits if it has disappeared, instead of
>>> killing table sync worker in DDL. There is still a problem that a
>>> table sync worker for a large table can hold a slot for a long time
>>> even after its state is deleted. But it would be for PG11 item.
>>
>> Do we still need to do something about this?  Should it be an open item?
>>
>
> Thank you for looking at this.
>
> Yeah, I think it should be added to the open item list. The patch is
> updated by Petr and discussed on another thread[1] that also addresses
> other issues of subscription codes. 0004 patch of that thread is an
> updated patch of the patch attached on this thread.
>

Does anyone have any opinions? Barring any objections, I'll add this
to the open item list.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] PL_stashcache, or, what's our minimum Perl version?

2017-07-27 Thread Tom Lane
I wrote:
> So the question is, does anyone care?  I wouldn't except that our
> documentation appears to claim that we work with Perl "5.8 or later".

BTW, what actually says that is installation.sgml:

  Perl 5.8 or later is needed to build from a Git checkout,
  or if you changed the input files for any of the build steps that
  use Perl scripts.  If building on Windows you will need
  Perl in any case.  Perl is
  also required to run some test suites.

Strictly speaking, there is no statement anywhere (AFAICT) about what
Perl versions PL/Perl works with.

As an experiment, I built from a "make maintainer-clean" state using
that 5.8.0 install, and it worked.  So indeed installation.sgml's
statement is correct as far as it goes.  But I dunno what the situation
is for the MSVC build scripts.  I did not try the TAP tests either.

A look in the buildfarm logs says that the oldest Perl version we're
actually testing is 5.8.6 on prairiedog.  (castoroides/protosciurus
have 5.8.4 but they are not building --with-perl, so that only
exercises the build scripts not plperl; they're not doing TAP either.)
I only looked into configure.log results though, so I'm not sure
about the Windows critters.

I kinda suspect we're not actively testing non-MULTIPLICITY builds
either.  The 5.8.7 test I just ran was with a non-MULTIPLICITY build,
so the case doesn't seem actively broken, but I doubt there is any
buildfarm coverage.  I wonder if it'd be worth getting the buildfarm
to log the output of "perl -V" so we could get a clearer picture
of what's being tested.

regards, tom lane




Re: [HACKERS] On Complex Source Code Reading Strategy

2017-07-27 Thread Craig Ringer
On 28 July 2017 at 07:45, Tom Lane  wrote:

> Peter Geoghegan  writes:
> > 2. Start somewhere. I have no idea where that should be, but it has to
> > be some particular place that seems interesting to you.
>
> Don't forget to start with the available documentation, ie
> https://www.postgresql.org/docs/devel/static/internals.html
> You should certainly read
> https://www.postgresql.org/docs/devel/static/overview.html
> and depending on what your interests are, there are probably other
> chapters of Part VII that are worth your time.
>
> Also keep an eye out for README files in the part of the source
> tree you're browsing in.


In fact, even though you won't initially understand much from some of them,
reading most of

find src/ -name README\*

can be quite useful. It's nearly time for me to do that again myself; each
time I absorb more.

There are very useful comments at the start of some of the source files
too. Unfortunately in some cases the really important explanation will be
on some function that you won't know to look for, not the comment at the
top of the file, so there's an element of discovery there.

I'd start with the docs as Tom suggested, then

* https://www.postgresql.org/developer/
* https://momjian.us/main/presentations/internals.html
* https://wiki.postgresql.org/wiki/Development_information
* https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F
* https://wiki.postgresql.org/wiki/Developer_FAQ

(some of which need to be added to the "developer information" wiki page I
think)

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Incorrect comment of XLByteToSeg() and XLByteToPrevSeg()

2017-07-27 Thread Tatsuo Ishii
> I found a typo in the comment for XLByteToSeg() and XLByteToPrevSeg().
> This says "Compute ID and segment from an XLogRecPtr", but these
> macros compute only segment numbers. I think "Compute a segment number
> from an XLogRecPtr" is correct.
> 
> The definition of these macros were modified by the following commit,
> but the comment were not.
> 
> commit dfda6ebaec6763090fb78b458a979b558c50b39b
> Author: Heikki Linnakangas 
> Date:   Sun Jun 24 18:06:38 2012 +0300

Thanks for the patch. Looks good to me. I will commit/push into all
supported branches if there's no objection.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




[HACKERS] PL_stashcache, or, what's our minimum Perl version?

2017-07-27 Thread Tom Lane
I wanted to do some portability testing on the recently-proposed
plperl changes, so I tried to build against a 5.8.0 Perl that I had
laying about.  It blew up real good:

plperl.c: In function `plperl_trusted_init':
plperl.c:1017: `PL_stashcache' undeclared (first use in this function)
plperl.c:1017: (Each undeclared identifier is reported only once
plperl.c:1017: for each function it appears in.)
make: *** [plperl.o] Error 1

It's complaining about this:

/* invalidate assorted caches */
++PL_sub_generation;
hv_clear(PL_stashcache);

which was introduced by you in commit 1f474d299 (2010-05-13).  Some
digging suggests that PL_stashcache was added in Perl 5.8.1 circa 2003,
although this is impressively underdocumented in any Perl changelog
that I could find.

So the question is, does anyone care?  I wouldn't except that our
documentation appears to claim that we work with Perl "5.8 or later".
And the lack of field complaints suggests strongly that nobody else
cares.  So I'm inclined to think we just need to be more specific
about the minimum Perl version --- but what exactly?  Alternatively,
can we make this work with 5.8.0?  It looks like PL_stashcache is
a macro, so we could make it compile with an "#ifdef PL_stashcache",
but I'm pretty nervous about whether that would be breaking needed
cleanup behavior.

regards, tom lane




Re: LP_DEAD hinting and not holding on to a buffer pin on leaf page (Was: [HACKERS] [WIP] Zipfian distribution in pgbench)

2017-07-27 Thread Peter Geoghegan

Peter Geoghegan  wrote:

In Alik's workload, there are two queries: One UPDATE, one SELECT.  Even
though the bloated index was a unique index, and so still gets
_bt_check_unique() item killing, the regression is still going to block
LP_DEAD cleanup by the SELECTs, which seems like it might be quite
important there.  After all, _bt_check_unique() cleanup has to happen
with an exclusive buffer lock held, whereas the kill_prior_tuple stuff
happens with only a shared buffer lock held.  It's not hard to imagine
that there will be a whole lot less LP_DEAD setting, overall.


Actually, there is a much bigger reason why SELECT statement LP_DEAD
killing matters more to Alik's workload than _bt_check_unique() LP_DEAD
killing: The UPDATE query itself does not affect indexed columns. Most
UPDATEs are actually HOT-safe (despite the degree of index bloat we
see), and so most UPDATEs will never reach _bt_check_unique().

--
Peter Geoghegan




Re: LP_DEAD hinting and not holding on to a buffer pin on leaf page (Was: [HACKERS] [WIP] Zipfian distribution in pgbench)

2017-07-27 Thread Peter Geoghegan

Robert Haas  wrote:

We now see that no update ever kills items within _bt_killitems(),
because our own update to the index leaf page itself nullifies our
ability to kill anything, by changing the page LSN from the one
stashed in the index scan state variable. Fortunately, we are not
really "self-blocking" dead item cleanup here, because the
_bt_check_unique() logic for killing all_dead index entries saves the
day by not caring about LSNs. However, that only happens because the
index on "aid" is a unique index.


This seems like an oversight.  If we modify the page ourselves, could
we check whether the saved LSN is still current just before, and if
so, update the saved LSN just after?


I really don't know if that would be worthwhile. It would certainly fix
the regression shown in my test case, but that might not go far enough.
I strongly suspect that there are more complicated workloads where
LP_DEAD cleanup from SELECT statements matters, which is prevented by
the LSN check thing, just because there are always other sessions that
modify the page concurrently. This might be true of Alik's Zipfian test
case, for example.

In Alik's workload, there are two queries: One UPDATE, one SELECT.  Even
though the bloated index was a unique index, and so still gets
_bt_check_unique() item killing, the regression is still going to block
LP_DEAD cleanup by the SELECTs, which seems like it might be quite
important there.  After all, _bt_check_unique() cleanup has to happen
with an exclusive buffer lock held, whereas the kill_prior_tuple stuff
happens with only a shared buffer lock held.  It's not hard to imagine
that there will be a whole lot less LP_DEAD setting, overall.

Alik's workload was one with a high degree of contention on just a few
leaf pages, pages that become very bloated. It's not fair to blame the
bloat we saw there on this regression, but I have to wonder how much it
may have contributed.
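
The LSN-based gating that blocks this cleanup can be sketched in a couple of lines (a simplified stand-in for illustration, not the actual nbtree code):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/* Simplified stand-in for the rule _bt_killitems() applies: LP_DEAD hints
 * saved during a scan may only be applied later if the leaf page's LSN is
 * unchanged, because any intervening modification -- including our own
 * UPDATE of the same page -- may have moved item pointers around. */
static int
can_kill_saved_items(XLogRecPtr lsn_at_scan, XLogRecPtr lsn_now)
{
    return lsn_at_scan == lsn_now;
}
```

Under this rule, a session whose own UPDATE bumps the page LSN disqualifies its own saved hints, which is exactly the self-blocking effect being discussed.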

--
Peter Geoghegan




Re: LP_DEAD hinting and not holding on to a buffer pin on leaf page (Was: [HACKERS] [WIP] Zipfian distribution in pgbench)

2017-07-27 Thread Robert Haas
On Tue, Jul 25, 2017 at 11:02 PM, Peter Geoghegan  wrote:
> While the benchmark Alik came up with is non-trivial to reproduce, I
> can show a consistent regression for a simple case with only one
> active backend.

That's not good.

> We now see that no update ever kills items within _bt_killitems(),
> because our own update to the index leaf page itself nullifies our
> ability to kill anything, by changing the page LSN from the one
> stashed in the index scan state variable. Fortunately, we are not
> really "self-blocking" dead item cleanup here, because the
> _bt_check_unique() logic for killing all_dead index entries saves the
> day by not caring about LSNs. However, that only happens because the
> index on "aid" is a unique index.

This seems like an oversight.  If we modify the page ourselves, could
we check whether the saved LSN is still current just before, and if
so, update the saved LSN just after?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] A suspicious code in pgoutput_startup().

2017-07-27 Thread Yugo Nagata
Hi,

I found a suspicious code in pgoutput_startup().

175 /* Check if we support requested protocol */
176 if (data->protocol_version != LOGICALREP_PROTO_VERSION_NUM)
177 ereport(ERROR,
178 (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
179  errmsg("client sent proto_version=%d but we only support protocol %d or lower",
180 data->protocol_version, LOGICALREP_PROTO_VERSION_NUM)));

Although the if condition is not-equal, the error message says 
"we only support protocol %d or lower".  Is this intentional?
Or should this be fixed as below? 

176 if (data->protocol_version > LOGICALREP_PROTO_VERSION_NUM)

Attached is a simple patch in case of fixing.

Regards,

-- 
Yugo Nagata 
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 4a56288..370b74f 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -173,7 +173,7 @@ pgoutput_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
 			&data->publication_names);
 
 		/* Check if we support requested protocol */
-		if (data->protocol_version != LOGICALREP_PROTO_VERSION_NUM)
+		if (data->protocol_version > LOGICALREP_PROTO_VERSION_NUM)
 			ereport(ERROR,
 	(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 	 errmsg("client sent proto_version=%d but we only support protocol %d or lower",

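
The behavioral difference between the two conditions can be sketched with stand-in checks (hypothetical version numbers, not the actual pgoutput code):

```c
#include <assert.h>

/* Stand-in for LOGICALREP_PROTO_VERSION_NUM; the value is hypothetical. */
#define SERVER_PROTO_VERSION 1

/* Current check: error out unless the client version matches exactly. */
static int
accept_strict(int client_version)
{
    return client_version == SERVER_PROTO_VERSION;
}

/* Proposed check: accept the current version or anything lower, which is
 * what the "protocol %d or lower" error message already promises. */
static int
accept_lenient(int client_version)
{
    return client_version <= SERVER_PROTO_VERSION;
}
```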


[HACKERS] Notice message of ALTER SUBSCRIPTION ... REFRESH PUBLICATION

2017-07-27 Thread Yugo Nagata
Hi,

When we run ALTER SUBSCRIPTION ... REFRESH PUBLICATION and there is
an unknown table locally, it says:

 NOTICE:  added subscription for table public.tbl

I feel this is a bit misleading, because it says a subscription is added
but actually a new subscription object is not created. Of course, I can
understand what this message intends to say, but I feel that
it is better to use a different expression. For example, how about saying

 NOTICE:  table public.tbl is added to subscription sub1

as proposed in the attached patch. This also fixes the message
when a table is removed from a subscription. 

Regards,

-- 
Yugo Nagata 
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 6dc3f6e..5a7e6d5 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -573,9 +573,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
 	copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY,
 	InvalidXLogRecPtr, false);
 			ereport(NOTICE,
-	(errmsg("added subscription for table %s.%s",
+	(errmsg("table %s.%s is added to subscription %s",
 			quote_identifier(rv->schemaname),
-			quote_identifier(rv->relname))));
+			quote_identifier(rv->relname),
+			quote_identifier(sub->name))));
 		}
 	}
 
@@ -601,9 +602,10 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data)
 
 			namespace = get_namespace_name(get_rel_namespace(relid));
 			ereport(NOTICE,
-	(errmsg("removed subscription for table %s.%s",
+	(errmsg("table %s.%s is removed from subscription %s",
 			quote_identifier(namespace),
-			quote_identifier(get_rel_name(relid))));
+			quote_identifier(get_rel_name(relid)),
+			quote_identifier(sub->name))));
 		}
 	}
 }



Re: [HACKERS] Patch: Write Amplification Reduction Method (WARM)

2017-07-27 Thread Peter Geoghegan
Pavan Deolasee  wrote:
> I'll be happy if someone wants to continue hacking the patch further and
> get it in a committable shape. I can stay actively involved. But TBH the
> amount of time I can invest is far less compared to what I could during the
> last cycle.

That's disappointing.

I personally find it very difficult to assess something like this. The
problem is that even if you can demonstrate that the patch is strictly
better than what we have today, the risk of reaching a local maxima
exists.  Do we really want to double-down on HOT?

If I'm not mistaken, the goal of WARM is, roughly speaking, to make
updates that would not be HOT-safe today do a "partial HOT update".  My
concern with that idea is that it doesn't do much for the worst case.

-- 
Peter Geoghegan




Re: [HACKERS] On Complex Source Code Reading Strategy

2017-07-27 Thread Joshua D. Drake

On 07/27/2017 04:45 PM, Tom Lane wrote:

Peter Geoghegan  writes:

2. Start somewhere. I have no idea where that should be, but it has to
be some particular place that seems interesting to you.


Don't forget to start with the available documentation, ie
https://www.postgresql.org/docs/devel/static/internals.html
You should certainly read
https://www.postgresql.org/docs/devel/static/overview.html
and depending on what your interests are, there are probably other
chapters of Part VII that are worth your time.

Also keep an eye out for README files in the part of the source
tree you're browsing in.


The doxygen instance at doxygen.postgresql.org is also helpful in 
navigating dependencies as you start attacking a feature.


JD



regards, tom lane





--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
* Unless otherwise stated, opinions are my own.   *




Re: [HACKERS] On Complex Source Code Reading Strategy

2017-07-27 Thread Tom Lane
Peter Geoghegan  writes:
> 2. Start somewhere. I have no idea where that should be, but it has to
> be some particular place that seems interesting to you.

Don't forget to start with the available documentation, ie
https://www.postgresql.org/docs/devel/static/internals.html
You should certainly read
https://www.postgresql.org/docs/devel/static/overview.html
and depending on what your interests are, there are probably other
chapters of Part VII that are worth your time.

Also keep an eye out for README files in the part of the source
tree you're browsing in.

regards, tom lane




Re: [HACKERS] On Complex Source Code Reading Strategy

2017-07-27 Thread Peter Geoghegan
On Tue, Jul 25, 2017 at 11:54 PM, Zeray Kalayu  wrote:
> I want to be PG hacker but it seems a complex beast to find my way out in it.
>  So, can anyone suggest me from his experience/style the general
> approaches/techniques/strategies on how to read complex source code in
> general and PG in particular effectively.

I can only think of two things:

1. Get familiar with how cscope or a similar tool works. Figure out
how to make it integrate well with your text editor.

2. Start somewhere. I have no idea where that should be, but it has to
be some particular place that seems interesting to you.

The trick, if there is one, is to find what you read in some way
relevant, interesting, or useful, in the short term, so that a
virtuous circle starts. I don't think that there's a right place to
begin. If you find a way to learn that is sustainable, then you can
eventually have a good understanding of the system as a whole.

-- 
Peter Geoghegan




Re: [HACKERS] pl/perl extension fails on Windows

2017-07-27 Thread Tom Lane
Andrew Dunstan  writes:
> On 07/27/2017 04:33 PM, Tom Lane wrote:
>> So I was trying to figure a way to not include XSUB.h except in a very
>> limited part of plperl, like ideally just the .xs files.  It's looking
>> like that would take nontrivial refactoring though :-(.  Another problem
>> is that this critical bit of the library API is in XSUB.h:

> That's the sort of thing that prompted me to ask what was the minimal
> set of defines required to fix the original problem (assuming such a
> thing exists).
> We haven't used PERL_IMPLICIT_CONTEXT to date, and without ill effect.
> For example, it's in the ExtUtils::Embed::ccopts for the perl that
> jacana and bowerbird happily build and test against.

Well, actually, PERL_IMPLICIT_CONTEXT is turned on automatically in any
MULTIPLICITY build, and since it changes all the Perl ABIs, we *are*
relying on it.  However, after further study of the Perl docs I noticed
that we could dispense with XSUB.h if we defined PERL_NO_GET_CONTEXT
(which turns the quoted stanza into a no-op).  That results in needing to
sprinkle plperl.c with "dTHX" declarations, as explained in perlguts.pod.
They're slightly tricky to place correctly, because they load up a pointer
to the current Perl interpreter, so you have to be wary of where to put
them in functions that change interpreters.  But I seem to have it working
in the attached patch.  (One benefit of doing this extra work is that it
should be a bit more efficient, in that we load up a Perl interpreter
pointer only once per function called, not once per usage therein.  We
could remove many of those fetches too, if we wanted to sprinkle the
plperl code with yet more THX droppings; but I left that for another day.)
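
The once-per-function context fetch can be mimicked in plain C to show why it saves work (a contrived analog for illustration, not Perl's actual macros):

```c
#include <assert.h>

/* Contrived analog of Perl's context machinery: get_context() stands in
 * for the (potentially expensive) PERL_GET_THX lookup, and our dTHX caches
 * the result once per function so later "API calls" reuse the pointer. */
static int lookup_count = 0;
static int the_context = 42;

static int *
get_context(void)
{
    lookup_count++;             /* count how often the lookup runs */
    return &the_context;
}

#define dTHX int *my_perl = get_context()

static int
use_api_three_times(void)
{
    dTHX;                       /* one lookup for the whole function... */
    int sum = 0;
    sum += *my_perl;            /* ...reused by each subsequent use */
    sum += *my_perl;
    sum += *my_perl;
    return sum;
}
```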

Armed with that, I got rid of XSUB.h in plperl.c and moved the
PG_TRY-using functions in the .xs files to plperl.c.  I think this would
fix Ashutosh's problem, though I am not in a position to try it with a
PERL_IMPLICIT_SYS build here.

regards, tom lane

diff --git a/src/pl/plperl/SPI.xs b/src/pl/plperl/SPI.xs
index 0447c50..7651b62 100644
*** a/src/pl/plperl/SPI.xs
--- b/src/pl/plperl/SPI.xs
***
*** 15,52 
  #undef _
  
  /* perl stuff */
  #include "plperl.h"
  #include "plperl_helpers.h"
  
  
- /*
-  * Interface routine to catch ereports and punt them to Perl
-  */
- static void
- do_plperl_return_next(SV *sv)
- {
- 	MemoryContext oldcontext = CurrentMemoryContext;
- 
- 	PG_TRY();
- 	{
- 		plperl_return_next(sv);
- 	}
- 	PG_CATCH();
- 	{
- 		ErrorData  *edata;
- 
- 		/* Must reset elog.c's state */
- 		MemoryContextSwitchTo(oldcontext);
- 		edata = CopyErrorData();
- 		FlushErrorState();
- 
- 		/* Punt the error to Perl */
- 		croak_cstr(edata->message);
- 	}
- 	PG_END_TRY();
- }
- 
- 
  MODULE = PostgreSQL::InServer::SPI PREFIX = spi_
  
  PROTOTYPES: ENABLE
--- 15,25 
  #undef _
  
  /* perl stuff */
+ #define PG_NEED_PERL_XSUB_H
  #include "plperl.h"
  #include "plperl_helpers.h"
  
  
  MODULE = PostgreSQL::InServer::SPI PREFIX = spi_
  
  PROTOTYPES: ENABLE
*** void
*** 76,82 
  spi_return_next(rv)
  	SV *rv;
  	CODE:
! 		do_plperl_return_next(rv);
  
  SV *
  spi_spi_query(sv)
--- 49,55 
  spi_return_next(rv)
  	SV *rv;
  	CODE:
! 		plperl_return_next(rv);
  
  SV *
  spi_spi_query(sv)
diff --git a/src/pl/plperl/Util.xs b/src/pl/plperl/Util.xs
index dbba0d7..862fec4 100644
*** a/src/pl/plperl/Util.xs
--- b/src/pl/plperl/Util.xs
***
*** 20,67 
  #undef _
  
  /* perl stuff */
  #include "plperl.h"
  #include "plperl_helpers.h"
  
- /*
-  * Implementation of plperl's elog() function
-  *
-  * If the error level is less than ERROR, we'll just emit the message and
-  * return.  When it is ERROR, elog() will longjmp, which we catch and
-  * turn into a Perl croak().  Note we are assuming that elog() can't have
-  * any internal failures that are so bad as to require a transaction abort.
-  *
-  * This is out-of-line to suppress "might be clobbered by longjmp" warnings.
-  */
- static void
- do_util_elog(int level, SV *msg)
- {
- 	MemoryContext oldcontext = CurrentMemoryContext;
- 	char	   * volatile cmsg = NULL;
- 
- 	PG_TRY();
- 	{
- 		cmsg = sv2cstr(msg);
- 		elog(level, "%s", cmsg);
- 		pfree(cmsg);
- 	}
- 	PG_CATCH();
- 	{
- 		ErrorData  *edata;
- 
- 		/* Must reset elog.c's state */
- 		MemoryContextSwitchTo(oldcontext);
- 		edata = CopyErrorData();
- 		FlushErrorState();
- 
- 		if (cmsg)
- 			pfree(cmsg);
- 
- 		/* Punt the error to Perl */
- 		croak_cstr(edata->message);
- 	}
- 	PG_END_TRY();
- }
  
  static text *
  sv2text(SV *sv)
--- 20,29 
  #undef _
  
  /* perl stuff */
+ #define PG_NEED_PERL_XSUB_H
  #include "plperl.h"
  #include "plperl_helpers.h"
  
  
  static text *
  sv2text(SV *sv)
*** util_elog(level, msg)
*** 105,111 
  level = ERROR;
  if (level < DEBUG5)
  level = DEBUG5;
! do_util_elog(level, msg);
  
  SV *
  

Re: [HACKERS] segfault in HEAD when too many nested functions call

2017-07-27 Thread Andres Freund
On 2017-07-26 16:28:38 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > On 2017-07-26 15:03:37 -0400, Tom Lane wrote:
> >> Hm, that seems kinda backwards to me; I was envisioning the checks
> >> moving to the callees not the callers.  I think it'd end up being
> >> about the same number of occurrences of CHECK_FOR_INTERRUPTS(),
> >> and there would be less of a judgment call about where to put them.
> 
> > Hm, that seems a bit riskier - easy to forget one of the places where we
> > might need a CFI().
> 
> I would argue the contrary.  If we put a CFI at the head of each node
> execution function, then it's just boilerplate that you copy-and-paste
> when you invent a new node type.  The way you've coded it here, it
> seems to involve a lot of judgment calls.  That's very far from being
> copy and paste, and the more different it looks from one node type
> to another, the easier it will be to forget it.
> 
> > We certainly are missing a bunch of them in various nodes
> 
> It's certainly possible that there are long-running loops not involving
> any ExecProcNode recursion at all, but that would be a bug independent
> of this issue.  The CFI in ExecProcNode itself can be replaced exactly
> either by asking all callers to do it, or by asking all callees to do it.
> I think the latter is going to be more uniform and harder to screw up.

Looks a bit better.  Still a lot of judgement-y calls tho, e.g. when one
node function just calls the next, or when there's loops etc.   I found
a good number of missing CFIs...

What do you think?

- Andres
From a8dd50100915f4bc889bc923d749d0661a88b973 Mon Sep 17 00:00:00 2001
From: Andres Freund 
Date: Tue, 25 Jul 2017 17:37:17 -0700
Subject: [PATCH 1/2] Move interrupt checking from ExecProcNode() to executor
 nodes.

In a followup commit ExecProcNode(), and especially the large switch
it contains, will be replaced by a function pointer directly to the
correct node. The node functions will then get invoked by a thin
inline function wrapper. To avoid having to include miscadmin.h in
headers - CHECK_FOR_INTERRUPTS() - move the interrupt checks into the
individual executor routines.

While looking through all executor nodes, I noticed a number of
missing interrupt checks, add these too.

Author: Andres Freund
Reviewed-By: Tom Lane
Discussion:
https://postgr.es/m/22833.1490390...@sss.pgh.pa.us
---
 src/backend/executor/execProcnode.c   | 2 --
 src/backend/executor/execScan.c   | 1 +
 src/backend/executor/nodeAgg.c| 5 +
 src/backend/executor/nodeAppend.c | 3 +++
 src/backend/executor/nodeBitmapHeapscan.c | 3 +++
 src/backend/executor/nodeCustom.c | 3 +++
 src/backend/executor/nodeGather.c | 4 
 src/backend/executor/nodeGatherMerge.c| 4 
 src/backend/executor/nodeGroup.c  | 3 +++
 src/backend/executor/nodeHash.c   | 6 ++
 src/backend/executor/nodeHashjoin.c   | 9 ++---
 src/backend/executor/nodeIndexonlyscan.c  | 3 +++
 src/backend/executor/nodeIndexscan.c  | 6 ++
 src/backend/executor/nodeLimit.c  | 3 +++
 src/backend/executor/nodeMaterial.c   | 2 ++
 src/backend/executor/nodeMergeAppend.c| 4 +++-
 src/backend/executor/nodeMergejoin.c  | 3 +++
 src/backend/executor/nodeModifyTable.c| 2 ++
 src/backend/executor/nodeNestloop.c   | 3 +++
 src/backend/executor/nodeProjectSet.c | 3 +++
 src/backend/executor/nodeRecursiveunion.c | 2 ++
 src/backend/executor/nodeResult.c | 3 +++
 src/backend/executor/nodeSetOp.c  | 5 +
 src/backend/executor/nodeSort.c   | 2 ++
 src/backend/executor/nodeSubplan.c| 5 +
 src/backend/executor/nodeTableFuncscan.c  | 2 ++
 src/backend/executor/nodeTidscan.c| 3 +++
 src/backend/executor/nodeUnique.c | 3 +++
 src/backend/executor/nodeWindowAgg.c  | 5 +
 29 files changed, 92 insertions(+), 10 deletions(-)

diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 294ad2cff9..20fd9f822e 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -399,8 +399,6 @@ ExecProcNode(PlanState *node)
 {
 	TupleTableSlot *result;
 
-	CHECK_FOR_INTERRUPTS();
-
 	if (node->chgParam != NULL) /* something changed */
 		ExecReScan(node);		/* let ReScan handle this */
 
diff --git a/src/backend/executor/execScan.c b/src/backend/executor/execScan.c
index 4f131b3ee0..dc40f6b699 100644
--- a/src/backend/executor/execScan.c
+++ b/src/backend/executor/execScan.c
@@ -139,6 +139,7 @@ ExecScan(ScanState *node,
 	 */
 	if (!qual && !projInfo)
 	{
+		CHECK_FOR_INTERRUPTS();
 		ResetExprContext(econtext);
 		return ExecScanFetch(node, accessMtd, recheckMtd);
 	}
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index de9a18e71c..c1096ed8b0 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -677,6 +677,7 @@ 

Re: [HACKERS] pl/perl extension fails on Windows

2017-07-27 Thread Andrew Dunstan


On 07/27/2017 04:33 PM, Tom Lane wrote:
> Robert Haas  writes:
>> How about we fix it like this?
> That seems pretty invasive; I'm not excited about breaking a lot of
> unrelated code (particularly third-party extensions) for plperl's benefit.
> Even if we wanted to do that in HEAD, it seems like a nonstarter for
> released branches.
>
> An even bigger issue is that if Perl feels free to redefine sigsetjmp,
> what other libc calls might they decide to horn in on?
>
> So I was trying to figure a way to not include XSUB.h except in a very
> limited part of plperl, like ideally just the .xs files.  It's looking
> like that would take nontrivial refactoring though :-(.  Another problem
> is that this critical bit of the library API is in XSUB.h:
>
> #if defined(PERL_IMPLICIT_CONTEXT) && !defined(PERL_NO_GET_CONTEXT) && !defined(PERL_CORE)
> #  undef aTHX
> #  undef aTHX_
> #  define aTHX  PERL_GET_THX
> #  define aTHX_   aTHX,
> #endif
>
> As best I can tell, that's absolute brain death on the part of the Perl
> crew; it means you can't write working calling code at all without
> including XSUB.h, or at least copying-and-pasting this bit out of it.
>
>   

That's the sort of thing that prompted me to ask what was the minimal
set of defines required to fix the original problem (assuming such a
thing exists).

We haven't used PERL_IMPLICIT_CONTEXT to date, and without ill effect.
For example, it's in the ExtUtils::Embed::ccopts for the perl that
jacana and bowerbird happily build and test against.

cheers

andrew


-- 
Andrew Dunstanhttps://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] pl/perl extension fails on Windows

2017-07-27 Thread Tom Lane
Robert Haas  writes:
> How about we fix it like this?

That seems pretty invasive; I'm not excited about breaking a lot of
unrelated code (particularly third-party extensions) for plperl's benefit.
Even if we wanted to do that in HEAD, it seems like a nonstarter for
released branches.

An even bigger issue is that if Perl feels free to redefine sigsetjmp,
what other libc calls might they decide to horn in on?

So I was trying to figure a way to not include XSUB.h except in a very
limited part of plperl, like ideally just the .xs files.  It's looking
like that would take nontrivial refactoring though :-(.  Another problem
is that this critical bit of the library API is in XSUB.h:

#if defined(PERL_IMPLICIT_CONTEXT) && !defined(PERL_NO_GET_CONTEXT) && !defined(PERL_CORE)
#  undef aTHX
#  undef aTHX_
#  define aTHX  PERL_GET_THX
#  define aTHX_ aTHX,
#endif

As best I can tell, that's absolute brain death on the part of the Perl
crew; it means you can't write working calling code at all without
including XSUB.h, or at least copying-and-pasting this bit out of it.

regards, tom lane




Re: [HACKERS] A couple of postgresql.conf.sample discrepancies

2017-07-27 Thread Robert Haas
On Thu, Jul 27, 2017 at 4:27 AM, Aleksander Alekseev
 wrote:
> I like the idea. However maybe it worth considering to turn it into a
> TAP test? Otherwise there is a good chance everybody will forget to run
> it. For similar reason I would advise to add this patch to the next
> commitfest.

Instead of adding it to a TAP test, I think we should add it to the
build process, so you get a compile failure if it finds any problems.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pl/perl extension fails on Windows

2017-07-27 Thread Robert Haas
On Thu, Jul 27, 2017 at 10:21 AM, Tom Lane  wrote:
> Ashutosh Sharma  writes:
>> Anyways, attached is the patch that corrects this issue. The patch now
>> imports all the switches used by perl into plperl module but, after
>> doing so, i am seeing some compilation errors on Windows. Following is
>> the error observed,
>
>> SPI.obj : error LNK2019: unresolved external symbol PerlProc_setjmp
>> referenced in function do_plperl_return_next
>
> That's certainly a mess, but how come that wasn't happening before?

How about we fix it like this?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


pg-sigsetjmp.patch
Description: Binary data



Re: [HACKERS] expand_dbname in postgres_fdw

2017-07-27 Thread Arseny Sher
Tom Lane  writes:

> I really don't see anything wrong with the FDW's documentation.  To claim
> that it's not clear, you have to suppose that a connstring's dbname field
> is allowed to recursively contain a connstring.  However, if you've got a
> concrete suggestion about improving the wording, let's see it.
>
> Now on the other hand, libpq's documentation seems a little confusing
> on this point independently of the FDW: so far as I can see, what "certain
> contexts" means is not clearly defined anywhere, and for that matter
> "checked for extended formats" is a masterpiece of unhelpful obfuscation.
>
>   regards, tom lane

I have to admit that you are right: strictly speaking, the FDW doc says it
accepts the same options as a libpq connstring => a libpq connstring itself
can't contain another connstring => expansion is not allowed. However,
we might save readers a few mental cycles if we avoid the words
'connection strings' and just say that the recognized options are the
same as libpq's, with (probably, see below) an explicit note
that dbname is not expanded.

Regarding the "checking for extended formats" phrase, I see three solutions:
1) In the description of the 'dbname' parameter, mention all the cases where
   it is possibly expanded, which doesn't sound like a good idea;
2) Specify whether dbname is expanded in every reference to the list of
   libpq options, which is arguably better.
3) We can also replace "In certain contexts" with "Where explicitly
   mentioned" in the description of 'dbname', and, while referencing
   options, never say anything about dbname if it is not expanded.

Besides, why not substitute "checked for extended formats" with "might
be recognized as a connection string" -- are there any other formats
than connstr?

The changes with point 3 chosen might look as in the attached file.

From 96d74f050724a8373c04e48205510a5a7e80d447 Mon Sep 17 00:00:00 2001
From: Arseny Sher 
Date: Thu, 27 Jul 2017 22:45:04 +0300
Subject: [PATCH] libpq docs: dbname param is expanded only when explicitly
 mentioned.

Also, avoid mentioning connstring in the description of parameters accepted by
postgres_fdw server.
---
 doc/src/sgml/libpq.sgml| 7 +++
 doc/src/sgml/postgres-fdw.sgml | 6 +++---
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index ad5e9b95b4..dde6647fc5 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1040,10 +1040,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
   dbname
   
   
-   The database name.  Defaults to be the same as the user name.
-   In certain contexts, the value is checked for extended
-   formats; see  for more details on
-   those.
+   The database name.  Defaults to be the same as the user name. Where
+   explicitly said, the value can be recognized as a connection string;
+   see  for more details on those.
   
   
  
diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml
index d83fc9e52b..e3c41bfd97 100644
--- a/doc/src/sgml/postgres-fdw.sgml
+++ b/doc/src/sgml/postgres-fdw.sgml
@@ -100,9 +100,9 @@
 

 A foreign server using the postgres_fdw foreign data wrapper
-can have the same options that libpq accepts in
-connection strings, as described in ,
-except that these options are not allowed:
+can have the same options that are recognized by libpq, as
+described in , except that these
+options are not allowed:
 
 
  
-- 
2.11.0




Re: [HACKERS] Change in "policy" on dump ordering?

2017-07-27 Thread Tom Lane
Robert Haas  writes:
> On Wed, Jul 26, 2017 at 2:41 PM, Tom Lane  wrote:
>> The bigger issue is whether there's some failure case this would cause
>> that I'm missing altogether.  Thoughts?

> I think dependencies are fundamentally the right model for this sort
> of problem.  I can't imagine what could go wrong that wouldn't amount
> to a failure to insert all of the right dependencies, and thus be
> fixable by inserting whatever was missing.

I tend to agree.  One fairly obvious risk factor is that we end up with
circular dependencies, but in that case we have conflicting requirements
and we're gonna have to decide what to do about them no matter what.

Another potential issue is pg_dump performance; this could result in
a large increase in the number of DumpableObjects and dependency links.
The topological sort is O(N log N), so we shouldn't hit any serious big-O
problems, but even a more-or-less-linear slowdown might be problematic for
some people.  On the whole though, I believe pg_dump is mostly limited by
its server queries, and that would probably still be true.

But the big point: even if we had the code for this ready-to-go, I'd
be hesitant to inject it into v10 at this point, let alone back-patch.
If we go down this path we won't be seeing a fix for the matview ordering
problem before v11 (at best).  Maybe that's acceptable considering it's
been there for several years already, but it's annoying.

So I'm thinking we should consider the multi-pass patch I posted as
a short-term, backpatchable workaround, which we could hope to replace
with dependency logic later.

regards, tom lane




Re: [HACKERS] postgres_fdw super user checks

2017-07-27 Thread Jeff Janes
On Thu, Dec 1, 2016 at 7:11 PM, Haribabu Kommi 
wrote:

> On Tue, Oct 18, 2016 at 10:38 AM, Michael Paquier <
> michael.paqu...@gmail.com> wrote:
>
>> On Mon, Oct 17, 2016 at 10:51 PM, Robert Haas 
>> wrote:
>> > On Mon, Oct 17, 2016 at 2:18 AM, Michael Paquier
>> >  wrote:
>> >> On Mon, Oct 17, 2016 at 3:33 AM, Jeff Janes 
>> wrote:
>> >>> postgres_fdw has some checks to enforce that non-superusers must
>> connect to
>> >>> the foreign server with a password-based method.  The reason for this
>> is to
>> >>> prevent the authentication to the foreign server from happening on
>> the basis
>> >>> of the OS user who is running the non-foreign server.
>> >>>
>> >>> But I think these super user checks should be run against the userid
>> of the
>> >>> USER MAPPING being used for the connection, not the userid of
>> currently
>> >>> logged on user.
>> >>
>> >> So, if the user mapping user is a superuser locally, this would allow
>> >> any lambda user of the local server to attempt a connection to the
>> >> that, even if the current behavior is a bit inconsistent I agree.
>> >> that, even if the current behavior is a bit inconsistent I agree.
>> >
>> > I don't know what "any lambda user" means.  Did you mean to write "any
>> > random user"?
>>
>> Yes, in this context that would be "any non-superuser" or "any user
>> without superuser rights". Actually that's a French-ism. I just
>> translated it naturally to English to define a user that has no access
>> to advanced features :)
>>
>
>
> Thanks for the patch, but it is breaking the existing functionality as per
> the other mails. Marked as "returned with feedback" in the 2016-11 commitfest.
>


Here is an updated patch.  This version allows you to use the password-less
connection if you either are the super-user directly (which is the existing
committed behavior), or if you are using the super-user's mapping because
you are querying a super-user-owned view which you have been granted access
to.

At first I thought the currently committed behavior might be a security bug
as a directly logged in superuser can use another user's user-defined
mapping but without the password restriction when querying a view made by
someone else.  But consulting with the security list nearly a year ago, the
conclusion was that it is never a good idea for a superuser to blindly
query from other users' views, and that the current situation is no worse
for postgres_fdw than it is for other features, and so nothing needs to be
done about it.  So that is why I've decided to allow the passwordless
solution in either situation--a superuser using someone else's mapping, or
someone else using a superuser's mapping.
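A minimal SQL sketch of the second scenario (all object, host, and role names below are invented for illustration; this is not taken from the patch):

```sql
-- remote server definition (host/dbname are placeholders)
CREATE SERVER remote FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', dbname 'app');

-- a mapping for the superuser, deliberately without a password
CREATE USER MAPPING FOR postgres SERVER remote;

CREATE FOREIGN TABLE remote_t (a int) SERVER remote;

-- a superuser-owned view, granted to an ordinary user
CREATE VIEW v AS SELECT * FROM remote_t;
GRANT SELECT ON v TO alice;

-- when alice queries v, the user mapping consulted is the view
-- owner's (a superuser), so the patch permits the password-less
-- connection
SELECT * FROM v;  -- run as alice
```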

I didn't update any comments because the existing ones seem to apply
equally well to the new code as the old code.

The regression test passes with this version because I still allow the old
behavior.  I didn't add a new test to also test the new behavior, because I
don't know how to do that with the existing make check framework, and a TAP
test seems like overkill.

Cheers,

Jeff


postgres_fdw_superuser_v2.patch
Description: Binary data



Re: [HACKERS] GSoC 2017: Foreign Key Arrays

2017-07-27 Thread Mark Rofail
On Thu, Jul 27, 2017 at 7:30 PM, Alexander Korotkov 
wrote:

> Oh, ok.  I missed that.
>>
> Could you remind me why don't we have DELETE CASCADE?  I understand that
> UPDATE CASCADE is problematic because it's unclear which way should we
> delete elements from array.  But what about DELETE CASCADE?
>

Honestly, I didn't touch that part of the patch. It's very interesting,
though; I think it would be great to spend the rest of GSoC on it.

Off the top of my head, there are many ways to go about DELETE CASCADE. You
could delete only the matching member of the referencing array, or the whole
array. I think there are a lot of options the user might want to consider,
and it's hard to generalize to DELETE CASCADE. Maybe new grammar would need
to be introduced here?

Best Regards,
Mark Rofail


Re: [HACKERS] GSoC 2017: Foreign Key Arrays

2017-07-27 Thread Mark Rofail
On Thu, Jul 27, 2017 at 7:15 PM, Erik Rijkers  wrote:

> It would help (me at least) if you could be more explicit about what
> exactly each instance is.
>

I apologize, I thought it was clear from the context.

By "the original patch" I meant all the work done before my GSoC project,
the latest version of which was submitted by Tom Lane [1] and rebased here [2].

The new patch is the latest one submitted by me[3].

And "the new patch with index" is the same [3], but with a GIN index built
over it: CREATE INDEX ON fktableforarray USING gin (fktest array_ops);

[1] https://www.postgresql.org/message-id/28617.1351095...@sss.pgh.pa.us
[2]
https://www.postgresql.org/message-id/CAJvoCutcMEYNFYK8Hdiui-M2y0ZGg%3DBe17fHgQ%3D8nHexZ6ft7w%40mail.gmail.com
[3]
https://www.postgresql.org/message-id/CAJvoCuuoGo5zJTpmPm90doYTUWoeUc%2BONXK2%2BH_vxsi%2BZi09bQ%40mail.gmail.com
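For readers trying to place the three variants, a sketch of the kind of schema involved (the FOREIGN KEY ... EACH ELEMENT OF grammar is the one proposed by the patch series; table names other than fktableforarray/fktest are my guesses):

```sql
CREATE TABLE pktableforarray (ptest int PRIMARY KEY);

CREATE TABLE fktableforarray (
    ftest  serial PRIMARY KEY,
    fktest int[],
    -- proposed array-FK syntax from the patch
    FOREIGN KEY (EACH ELEMENT OF fktest) REFERENCES pktableforarray
);

-- "the new patch with index" additionally builds:
CREATE INDEX ON fktableforarray USING gin (fktest array_ops);
```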


Best Regards,
Mark Rofail


Re: [HACKERS] Change in "policy" on dump ordering?

2017-07-27 Thread Robert Haas
On Wed, Jul 26, 2017 at 2:41 PM, Tom Lane  wrote:
> The bigger issue is whether there's some failure case this would cause
> that I'm missing altogether.  Thoughts?

I think dependencies are fundamentally the right model for this sort
of problem.  I can't imagine what could go wrong that wouldn't amount
to a failure to insert all of the right dependencies, and thus be
fixable by inserting whatever was missing.

It's possible I am lacking in imagination, so take that opinion for
what it's worth.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pl/perl extension fails on Windows

2017-07-27 Thread Tom Lane
Andrew Dunstan  writes:
> What is the minimal set of extra defines required to sort out the
> handshake fingerprint issue?

Also, exactly what defines do you end up importing in your test build?

regards, tom lane




Re: [HACKERS] pl/perl extension fails on Windows

2017-07-27 Thread Andrew Dunstan


On 07/27/2017 12:34 PM, Ashutosh Sharma wrote:
> On Thu, Jul 27, 2017 at 7:51 PM, Tom Lane  > wrote:
> > Ashutosh Sharma  > writes:
> >> Anyways, attached is the patch that corrects this issue. The patch now
> >> imports all the switches used by perl into plperl module but, after
> >> doing so, i am seeing some compilation errors on Windows. Following is
> >> the error observed,
> >
> >> SPI.obj : error LNK2019: unresolved external symbol PerlProc_setjmp
> >> referenced in function do_plperl_return_next
> >
> > That's certainly a mess, but how come that wasn't happening before?
>
> Earlier we were using Perl-5.20 version which i think didn't have
> handshaking mechanism. From perl-5.22 onwards, the functions like
> Perl_xs_handshake() or HS_KEY were introduced for handshaking purpose
> and to ensure that the handshaking between plperl and perl doesn't
> fail, we are now trying to import the switches used by  perl into
> plperl. As a result of this, macros like PERL_IMPLICIT_SYS is getting
> defined in plperl which eventually opens the following definitions
> from XSUB.h resulting in the compilation error.
>
> 499 #if defined(PERL_IMPLICIT_SYS) && !defined(PERL_CORE)
> 518 #undef ioctl
> 519 #undef getlogin
> 520 #undef setjmp
> ...
> ...
> 651 #define times   PerlProc_times
> 652 #define wait    PerlProc_wait
> 653 #define setjmp  PerlProc_setjmp



What is the minimal set of extra defines required to sort out the
handshake fingerprint issue?

cheers

andrew

-- 
Andrew Dunstan    https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] tab completion for "create user mapping for"

2017-07-27 Thread Tom Lane
Jeff Janes  writes:
> If you have "CREATE USE" tab completion will recommend both USER and USER
> MAPPING FOR.

> But once you have the full "CREATE USER " or "CREATE USER M" it will not
> complete the rest of the "MAPPING FOR".

> Attached patch fixes that.

Pushed, thanks.

regards, tom lane




Re: [HACKERS] More race conditions in logical replication

2017-07-27 Thread Alvaro Herrera
Alvaro Herrera wrote:
> Alvaro Herrera wrote:
> 
> > Hmm, yeah, that's not good.  However, I didn't like the idea of putting
> > it inside the locked area, as it's too much code.  Instead I added it just
> > before acquiring the spinlock.  We cancel the sleep unconditionally
> > later on if we didn't need to sleep after all.
> 
> I just noticed that Jacana failed again in the subscription test, and it
> looks like it's related to this.  I'll take a look tomorrow if no one
> beats me to it.
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana=2017-07-26%2014%3A39%3A54

Ahh, I misread the message.  This one is actually about the replication
*origin* being still active, not the replication *slot*.  I suppose
origins need the same treatment as we just gave slots.  Anybody want
to volunteer a patch?

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] psql's \d and \dt are sending their complaints to different output files

2017-07-27 Thread Tom Lane
Daniel Gustafsson  writes:
>> On 19 Jun 2017, at 17:32, Tom Lane  wrote:
>> So, if we're getting into enforcing consistency in describe.c, there's
>> lots to do.

> Addressed in attached patch, see list of patches below.

I've pushed most of this.  There are a couple of remaining
inconsistencies:

listTables and listDbRoleSettings print a custom message rather than
an empty table for no matches (but in QUIET mode they just do the
latter).  I think that's actually a good decision for listDbRoleSettings,
because the user might be confused about which pattern is which, and
we can straighten him out with a custom error message.  But the only
good argument for listTables behaving that way is that people are used
to it.  Should we override backwards compatibility to the extent of
dropping the custom messages in listTables, and just printing an empty
table-of-tables?

> 0004 - Most verbose metacommands include additional information on top of what
> is in the normal output, while \dRp+ dropped two columns (moving one to the
> title).  This includes the information from \dRp in \dRp+.  Having the pubname
> in the title as well as the table is perhaps superfluous, but consistent with
> other commands, so I opted for it.

I'm not sure about this one.  It definitely seems bogus that \dRp+ is
omitting the owner column, but I'm less excited about duplicating the
pubname.  We'd better make up our minds before v10 freezes, though.
Anybody else have an opinion?

regards, tom lane




Re: [HACKERS] GSoC 2017: Foreign Key Arrays

2017-07-27 Thread Alexander Korotkov
On Thu, Jul 27, 2017 at 3:07 PM, Mark Rofail  wrote:

> On Thu, Jul 27, 2017 at 12:54 PM, Alexander Korotkov  > wrote:
>>
>> How many rows of FK table were referencing the PK table row you're
>> updating/deleting.
>> I wonder how may RI trigger work so fast if it has to do some job besides
>> index search with no results?
>>
> The problem here is that the only two options for foreign key arrays are
> NO ACTION and RESTRICT, which don't allow me to update/delete a referenced
> row in the PK table. The EXPLAIN ANALYZE only tells me that this violates
> the FK constraint.
>
> So we have two options: either implement CASCADE, or find a configuration
> for EXPLAIN to show costs even if it violates the FK
> constraints.
>

Oh, ok.  I missed that.
Could you remind me why don't we have DELETE CASCADE?  I understand that
UPDATE CASCADE is problematic because it's unclear which way should we
delete elements from array.  But what about DELETE CASCADE?

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] Increase Vacuum ring buffer.

2017-07-27 Thread Claudio Freire
On Thu, Jul 27, 2017 at 2:10 PM, Alvaro Herrera
 wrote:
> Claudio Freire wrote:
>> On Thu, Jul 27, 2017 at 1:46 PM, Alvaro Herrera
>>  wrote:
>> > Claudio Freire wrote:
>> >
>> >> > The vacuuming the very large table with no index could also take a
>> >> > long time, and it scans and vacuums blocks one by one. So I imagined
>> >> > that we can vacuum the FSM once vacuumed a certain amount of blocks.
>> >> > And that can avoid bloating table during the long-time vacuum.
>> >>
>> >> Could do that. I'll see about doing something of the sort and
>> >> submitting it as a separate patch.
>> >
>> > Vacuuming of the FSM is in no way strictly tied to vacuuming the heap
>> > (it's not really "vacuuming", it's just about updating the upper layers
>> > to match the data in the leaves).  I think we could use the new autovac
>> > "workitem" infrastructure and tack an item there once in a while for FSM
>> > vacuuming ...
>>
>> Well, it *is* tied in the sense that vacuum is the one massively
>> adding free space.
>
> Yes, but if vacuum dies after releasing lots of space but before
> vacuuming FSM, then it's not tied anymore and you could just as well run
> it anytime.

I see your point.




Re: [HACKERS] Patch: Write Amplification Reduction Method (WARM)

2017-07-27 Thread Pavan Deolasee
On Wed, Jul 26, 2017 at 6:26 PM, Robert Haas  wrote:

> On Tue, Apr 18, 2017 at 4:25 AM, Pavan Deolasee
>  wrote:
> > I'll include the fix in the next set of patches.
>
> I haven't see a new set of patches.  Are you intending to continue
> working on this?
>
>
Looks like I'll be short on bandwidth to pursue this further, given other
work commitments, including the upcoming Postgres-XL 10 release. While I haven't
worked on the patch since April, I think it was in a pretty good shape
where I left it. But it's going to be incredibly difficult to estimate the
amount of further efforts required, especially with testing and validating
all the use cases and finding optimisations to fix regressions in all those
cases. Also, many fundamental concerns around the patch touching the core
of the database engine can only be addressed if some senior hackers, like
you, take serious interest in the patch.

I'll be happy if someone wants to continue hacking the patch further and
get it in a committable shape. I can stay actively involved. But TBH the
amount of time I can invest is far less than what I could during the
last cycle.

Thanks,
Pavan

-- 
 Pavan Deolasee   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] GSoC 2017: Foreign Key Arrays

2017-07-27 Thread Erik Rijkers

On 2017-07-27 02:31, Mark Rofail wrote:

I have written some benchmark tests.



It would help (me at least) if you could be more explicit about what 
exactly each instance is.


Apparently there is an 'original patch': is this the original patch by 
Marco Nenciarini?

Or is it something you posted earlier?

I guess it could be distilled from the earlier posts but when I looked 
those over yesterday evening I still didn't get it.


A link to the post where the 'original patch' is would be ideal...

thanks!

Erik Rijkers



With two tables: a PK table with 5 rows and an FK table with growing row
count.

Once triggering an RI check at
10 rows,
100 rows,
1,000 rows,
10,000 rows,
100,000 rows and
1,000,000 rows.

Please find the graph with the findings attached below.






Re: [HACKERS] Increase Vacuum ring buffer.

2017-07-27 Thread Alvaro Herrera
Claudio Freire wrote:
> On Thu, Jul 27, 2017 at 1:46 PM, Alvaro Herrera
>  wrote:
> > Claudio Freire wrote:
> >
> >> > The vacuuming the very large table with no index could also take a
> >> > long time, and it scans and vacuums blocks one by one. So I imagined
> >> > that we can vacuum the FSM once vacuumed a certain amount of blocks.
> >> > And that can avoid bloating table during the long-time vacuum.
> >>
> >> Could do that. I'll see about doing something of the sort and
> >> submitting it as a separate patch.
> >
> > Vacuuming of the FSM is in no way strictly tied to vacuuming the heap
> > (it's not really "vacuuming", it's just about updating the upper layers
> > to match the data in the leaves).  I think we could use the new autovac
> > "workitem" infrastructure and tack an item there once in a while for FSM
> > vacuuming ...
> 
> Well, it *is* tied in the sense that vacuum is the one massively
> adding free space.

Yes, but if vacuum dies after releasing lots of space but before
vacuuming FSM, then it's not tied anymore and you could just as well run
it anytime.

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Increase Vacuum ring buffer.

2017-07-27 Thread Claudio Freire
On Thu, Jul 27, 2017 at 1:46 PM, Alvaro Herrera
 wrote:
> Claudio Freire wrote:
>
>> > The vacuuming the very large table with no index could also take a
>> > long time, and it scans and vacuums blocks one by one. So I imagined
>> > that we can vacuum the FSM once vacuumed a certain amount of blocks.
>> > And that can avoid bloating table during the long-time vacuum.
>>
>> Could do that. I'll see about doing something of the sort and
>> submitting it as a separate patch.
>
> Vacuuming of the FSM is in no way strictly tied to vacuuming the heap
> (it's not really "vacuuming", it's just about updating the upper layers
> to match the data in the leaves).  I think we could use the new autovac
> "workitem" infrastructure and tack an item there once in a while for FSM
> vacuuming ...

Well, it *is* tied in the sense that vacuum is the one massively
adding free space.




Re: [HACKERS] Increase Vacuum ring buffer.

2017-07-27 Thread Alvaro Herrera
Claudio Freire wrote:

> > The vacuuming the very large table with no index could also take a
> > long time, and it scans and vacuums blocks one by one. So I imagined
> > that we can vacuum the FSM once vacuumed a certain amount of blocks.
> > And that can avoid bloating table during the long-time vacuum.
> 
> Could do that. I'll see about doing something of the sort and
> submitting it as a separate patch.

Vacuuming of the FSM is in no way strictly tied to vacuuming the heap
(it's not really "vacuuming", it's just about updating the upper layers
to match the data in the leaves).  I think we could use the new autovac
"workitem" infrastructure and tack an item there once in a while for FSM
vacuuming ...

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Increase Vacuum ring buffer.

2017-07-27 Thread Claudio Freire
On Thu, Jul 27, 2017 at 6:16 AM, Masahiko Sawada  wrote:
> On Thu, Jul 27, 2017 at 5:48 PM, Sokolov Yura
>  wrote:
>> On 2017-07-27 11:30, Masahiko Sawada wrote:
>>>
>>> On Tue, Jul 25, 2017 at 2:27 AM, Claudio Freire 
>>> wrote:

 On Mon, Jul 24, 2017 at 2:20 PM, Claudio Freire 
 wrote:
>
> On Mon, Jul 24, 2017 at 2:10 PM, Sokolov Yura
>  wrote:
>>
>> On 2017-07-24 19:11, Claudio Freire wrote:
>>>
>>> I was mostly thinking about something like the attached patch.
>>>
>>> Simple, unintrusive, and shouldn't cause any noticeable slowdown.
>>
>>
>>
>> Your change is small, clear, and currently useful for huge tables under
>> high update load (until "allowing vacuum to use more than 1GB memory"
>> is merged).
>
>
> In high-bloat conditions, it doesn't take long to accumulate 1GB of
> dead tuples (which is about 178M tuples, btw).
>
> The index scan takes way longer than the heap scan in that case.
>
>> But it still delays updating fsm until whole first batch of dead tuples
>> cleared (ie all indices scanned, and all heap pages cleared), and on
>> such
>> huge table it will be hours.
>
>
> So, true, it will get delayed considerably. But as you realized,
> there's not much point in trying to vacuum the FSM sooner, since it
> won't be accurate shortly afterwards anyway. Dead line pointers do use
> up a fair bit of space, especially on narrow tables.
>
> In a particular table I have that exhibits this problem, most of the
> time is spent scanning the index. It performs dozens of index scans
> before it's done, so it would vacuum the FSM quite often enough, even
> if I were to increase the mwm setting n-fold.


 I hate to reply to myself, but I wanted to add: in any case, the case
 I'm trying to avoid is the case where the FSM *never* gets vacuumed.
 That's bad. But it may not be the phenomenon you're experiencing in
 your tests.

>>>
>>> I think the frequently vacuuming the FSM during long-time vacuum would
>>> be worth to have in order to avoid a table bloating. The patch
>>> triggers to vacuum the FSM after vacuumed the table and indexes but I
>>> think that we can have a similar mechanism for a table with no index.
>>>
>>> Regards,
>>>
>>> --
>>> Masahiko Sawada
>>> NIPPON TELEGRAPH AND TELEPHONE CORPORATION
>>> NTT Open Source Software Center
>>
>>
>> I could be wrong, but it looks like table without index doesn't
>> suffer that much. Since there is no indices, there is only one stage -
>> scanning heap, and no quadratic behavior cause of full dead-tuple array
>> and repeating index vacuuming.
>>
>
> The vacuuming the very large table with no index could also take a
> long time, and it scans and vacuums blocks one by one. So I imagined
> that we can vacuum the FSM once vacuumed a certain amount of blocks.
> And that can avoid bloating table during the long-time vacuum.

Could do that. I'll see about doing something of the sort and
submitting it as a separate patch.




Re: [HACKERS] pl/perl extension fails on Windows

2017-07-27 Thread Ashutosh Sharma
On Thu, Jul 27, 2017 at 7:51 PM, Tom Lane  wrote:
> Ashutosh Sharma  writes:
>> Anyways, attached is the patch that corrects this issue. The patch now
>> imports all the switches used by perl into plperl module but, after
>> doing so, i am seeing some compilation errors on Windows. Following is
>> the error observed,
>
>> SPI.obj : error LNK2019: unresolved external symbol PerlProc_setjmp
>> referenced in function do_plperl_return_next
>
> That's certainly a mess, but how come that wasn't happening before?

Earlier we were using Perl 5.20, which I think didn't have the
handshaking mechanism. From Perl 5.22 onwards, functions like
Perl_xs_handshake() and macros like HS_KEY were introduced for handshaking,
and to ensure that the handshake between plperl and perl doesn't fail, we
are now trying to import the switches used by perl into plperl. As a result,
macros like PERL_IMPLICIT_SYS get defined in plperl, which eventually opens
up the following definitions from XSUB.h, resulting in the compilation
error.

499 #if defined(PERL_IMPLICIT_SYS) && !defined(PERL_CORE)
518 #undef ioctl
519 #undef getlogin
520 #undef setjmp
...
...

651 #define times   PerlProc_times
652 #define wait    PerlProc_wait
653 #define setjmp  PerlProc_setjmp
--
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
>
> regards, tom lane


[HACKERS] tab completion for "create user mapping for"

2017-07-27 Thread Jeff Janes
If you have typed "CREATE USE", tab completion will recommend both USER and
USER MAPPING FOR.

But once you have the full "CREATE USER " or "CREATE USER M", it will not
complete the rest of the "MAPPING FOR".

Attached patch fixes that.

Cheers,

Jeff


user_mapping_tab_complete_v1.patch
Description: Binary data



Re: [HACKERS] Missing comment for max_logical_replication_workers in postgresql.conf.sample

2017-07-27 Thread Yugo Nagata
On Thu, 27 Jul 2017 14:38:29 +0900
Masahiko Sawada  wrote:

> On Thu, Jul 27, 2017 at 10:14 AM, Yugo Nagata  wrote:
> > Hi,
> >
> > I found that postgresql.conf.sample is missing a comment
> > to note that changing max_logical_replication_workers requires
> > restart of the server.
> >
> > Other such parameters have such comments, so I think the new
> > parameter also needs this. Attached is a simple patch to fix
> > this.
> >
> 
> Good point. Similarly, dynamic_shared_memory_type and event_source
> require a restart to change but are not mentioned in
> postgresql.conf.sample. Should we add a comment as well?

I think so. The updated patch is attached.
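For reference, the annotation convention used elsewhere in postgresql.conf.sample looks roughly like this (the exact default and wording here are from memory and may differ from the patch):

```
#max_logical_replication_workers = 4    # taken from max_worker_processes
                                        # (change requires restart)
```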

> 
> Regards,
> 
> --
> Masahiko Sawada
> NIPPON TELEGRAPH AND TELEPHONE CORPORATION
> NTT Open Source Software Center


-- 
Yugo Nagata 


postgresql.conf.sample.patch.v2
Description: Binary data



Re: [HACKERS] pg_upgrade failed with error - ERROR: column "a" in child table must be marked NOT NULL

2017-07-27 Thread Michael Paquier
On Wed, Jul 26, 2017 at 4:50 PM, Michael Paquier
 wrote:
> On Wed, Jul 26, 2017 at 4:02 PM, Tom Lane  wrote:
>> Not sure what's involved there code-wise, though, nor whether we'd want
>> to back-patch.
>
> I'll try to look at the code around that to come up with a clear
> conclusion in the next couple of days, likely more as that's a
> vacation period. Surely anything invasive would not be backpatched.

So I think that the attached patch is able to do the legwork. While
looking at the code, I bumped into index_check_primary_key(), which
discarded the case of sending SET NOT NULL to child tables, as
introduced by 88452d5b. But that's clearly an oversight IMO, and the
comment is wrong to begin with because SET NOT NULL is spread to child
tables. Using is_alter_table instead of a plain true in
index_check_primary_key() for the call of AlterTableInternal() is
defensive, but I don't think that we want to impact any modules
relying on this API, so recursing only when ALTER TABLE is used is the
safest course of action to me. With this logic, we rely as well on the
fact that ADD PRIMARY KEY is not recursive in tablecmds.c. Comments
are welcome; I'll park this in the CF app so that it does not get lost
in the void.
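As I read the original report, the failing case can be sketched like this (a minimal invented reproduction, not taken from the patch):

```sql
CREATE TABLE parent (a int);
CREATE TABLE child () INHERITS (parent);
ALTER TABLE parent ADD PRIMARY KEY (a);
-- parent.a becomes NOT NULL, but (before the fix) child.a does not;
-- pg_upgrade's binary-upgrade schema restore of such a cluster then
-- fails with:
--   ERROR: column "a" in child table must be marked NOT NULL
```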
-- 
Michael


alttab-setnotnull-v1.patch
Description: Binary data



Re: [HACKERS] pg_stop_backup(wait_for_archive := true) on standby server

2017-07-27 Thread Stephen Frost
Noah, all,

* Noah Misch (n...@leadboat.com) wrote:
> This PostgreSQL 10 open item is past due for your status update.  Kindly send
> a status update within 24 hours, and include a date for your subsequent status
> update.  Refer to the policy on open item ownership:

Based on the ongoing discussion, this is really looking like it's
actually a fix that needs to be back-patched to 9.6 rather than a PG10
open item.  I don't have any issue with keeping it as an open item
though, just mentioning it.  I'll provide another status update on or
before Monday, July 31st.

I'll get to work on the back-patch and try to draft up something to go
into the release notes for 9.6.4.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] pl/perl extension fails on Windows

2017-07-27 Thread Tom Lane
Ashutosh Sharma  writes:
> Anyways, attached is the patch that corrects this issue. The patch now
> imports all the switches used by perl into plperl module but, after
> doing so, i am seeing some compilation errors on Windows. Following is
> the error observed,

> SPI.obj : error LNK2019: unresolved external symbol PerlProc_setjmp
> referenced in function do_plperl_return_next

That's certainly a mess, but how come that wasn't happening before?

regards, tom lane




Re: [HACKERS] expand_dbname in postgres_fdw

2017-07-27 Thread Tom Lane
Arseny Sher  writes:
> Attached patch allows dbname expansion and makes sure that it doesn't
> contain any invalid options.

I'm pretty much against this in principle.  It complicates both the code
and the conceptual API, for no serious gain, even if you take it on faith
that it doesn't and never will produce any security issues.

> Whether you decide to commit it or not
> (at the moment I don't see any security implications, at least not more than
> in usual dbname expansion usage, e.g. in psql, but who knows), it seems
> to me that the documentation should be updated since currently it is not
> clear on the subject, as the beginning of this thread proves.

I really don't see anything wrong with the FDW's documentation.  To claim
that it's not clear, you have to suppose that a connstring's dbname field
is allowed to recursively contain a connstring.  However, if you've got a
concrete suggestion about improving the wording, let's see it.

Now on the other hand, libpq's documentation seems a little confusing
on this point independently of the FDW: so far as I can see, what "certain
contexts" means is not clearly defined anywhere, and for that matter
"checked for extended formats" is a masterpiece of unhelpful obfuscation.

regards, tom lane




Re: [HACKERS] expand_dbname in postgres_fdw

2017-07-27 Thread Arseny Sher
On Thu, Jul 27, 2017 at 12:21 AM, Tom Lane  wrote:
> The main problem to my mind is that a connection string could possibly
> override items meant to be specified elsewhere.  In particular it ought
> not be allowed to specify the remote username or password, because those
> are supposed to come from the user mapping object not the server object.
> I suspect you could break things by trying to specify client_encoding
> there, as well.

Attached patch allows dbname expansion and makes sure that it doesn't
contain any invalid options. Whether you decide to commit it or not
(at the moment I don't see any security implications, at least no more than
in the usual dbname expansion usage, e.g. in psql, but who knows), it seems
to me that the documentation should be updated, since currently it is not
clear on the subject, as the beginning of this thread proves.
From 09e3924501d219331b8b8fca12a9ea35a689ba10 Mon Sep 17 00:00:00 2001
From: Arseny Sher 
Date: Thu, 27 Jul 2017 12:41:17 +0300
Subject: [PATCH] Allow 'dbname' option to contain a full-fledged connstring in
 postgres_fdw

And if this is the case, validate that it doesn't contain any invalid options
such as client_encoding or user.
---
 contrib/postgres_fdw/connection.c |  2 +-
 contrib/postgres_fdw/option.c | 80 ---
 2 files changed, 68 insertions(+), 14 deletions(-)

diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index be4ec07cf9..bfcd9ed2e3 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -263,7 +263,7 @@ connect_pg_server(ForeignServer *server, UserMapping *user)
 		/* verify connection parameters and make connection */
 		check_conn_params(keywords, values);
 
-		conn = PQconnectdbParams(keywords, values, false);
+		conn = PQconnectdbParams(keywords, values, true);
 		if (!conn || PQstatus(conn) != CONNECTION_OK)
 			ereport(ERROR,
 	(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),
diff --git a/contrib/postgres_fdw/option.c b/contrib/postgres_fdw/option.c
index 67e1c59951..26c11e5179 100644
--- a/contrib/postgres_fdw/option.c
+++ b/contrib/postgres_fdw/option.c
@@ -50,8 +50,10 @@ static PQconninfoOption *libpq_options;
  * Helper functions
  */
 static void InitPgFdwOptions(void);
+static void validate_dbname(const char *dbname);
+static void get_opts_hint(StringInfo buf, Oid context, bool libpq_options);
 static bool is_valid_option(const char *keyword, Oid context);
-static bool is_libpq_option(const char *keyword);
+static bool is_libpq_option(const char *keyword, Oid context);
 
 
 /*
@@ -86,16 +88,8 @@ postgres_fdw_validator(PG_FUNCTION_ARGS)
 			 * Unknown option specified, complain about it. Provide a hint
 			 * with list of valid options for the object.
 			 */
-			PgFdwOption *opt;
 			StringInfoData buf;
-
-			initStringInfo(&buf);
-			for (opt = postgres_fdw_options; opt->keyword; opt++)
-			{
-				if (catalog == opt->optcontext)
-					appendStringInfo(&buf, "%s%s", (buf.len > 0) ? ", " : "",
-									 opt->keyword);
-			}
+			get_opts_hint(&buf, catalog, false);
 
 			ereport(ERROR,
 	(errcode(ERRCODE_FDW_INVALID_OPTION_NAME),
@@ -143,6 +137,10 @@ postgres_fdw_validator(PG_FUNCTION_ARGS)
 		 errmsg("%s requires a non-negative integer value",
 def->defname)));
 		}
+		else if (strcmp(def->defname, "dbname") == 0)
+		{
+			validate_dbname(defGetString(def));
+		}
 	}
 
 	PG_RETURN_VOID();
@@ -250,6 +248,59 @@ InitPgFdwOptions(void)
 }
 
 /*
+ * If dbname param specified, make sure it doesn't carry invalid
+ * options if it is a connstring.
+ */
+static void
+validate_dbname(const char *dbname)
+{
+	PQconninfoOption *lopts;
+	PQconninfoOption *lopt;
+
+	/* If it is not a connstring, just skip the check */
+	if ((lopts = PQconninfoParse(dbname, NULL)) != NULL)
+	{
+		for (lopt = lopts; lopt->keyword; lopt++)
+		{
+			if (lopt->val != NULL && !is_libpq_option(lopt->keyword,
+	  ForeignServerRelationId))
+			{
+				StringInfoData buf;
+				get_opts_hint(&buf, ForeignServerRelationId, true);
+
+				ereport(ERROR,
+						(errcode(ERRCODE_FDW_INVALID_OPTION_NAME),
+						 errmsg("invalid option in dbname connstring \"%s\"",
+								lopt->keyword),
+						 errhint("Valid options in this context are: %s",
+								 buf.data)));
+			}
+		}
+		PQconninfoFree(lopts);
+	}
+}
+
+/*
+ * Put into 'buf' a hint with the list of valid options for the object
+ * 'context'. If libpq_options is true, only libpq options are provided.
+ * 'buf' must point to an existing StringInfoData struct when called.
+ */
+static void
+get_opts_hint(StringInfo buf, Oid context, bool libpq_options)
+{
+	PgFdwOption *opt;
+
+	initStringInfo(buf);
+	for (opt = postgres_fdw_options; opt->keyword; opt++)
+	{
+		if (context == opt->optcontext &&
+			(!libpq_options || opt->is_libpq_opt))
+			appendStringInfo(buf, "%s%s", (buf->len > 0) ? ", " : "",
+			 opt->keyword);
+	}
+}
+
+/*
  * Check whether the given 

Re: [HACKERS] [patch] pg_dump/pg_restore zerror() and strerror() mishap

2017-07-27 Thread Kunshchikov Vladimir

Hello, Alvaro,

  thanks for the suggestions, attached is version #3 with all of your
requirements met.



--
Best regards,
Vladimir Kunshchikov
Lead programmer

Intrusion detection and prevention systems development department

InfoTeCS company


From: Alvaro Herrera 
Sent: Wednesday, July 26, 2017 7:40:20 PM
To: Kunshchikov Vladimir
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] [patch] pg_dump/pg_restore zerror() and strerror() mishap

Kunshchikov Vladimir wrote:
>  Hello Alvaro, thanks for the feedback, fixed all of your points.
> Attached new version of patch.

Looks great -- it's a lot smaller than the original even.  One final
touch -- see cfread(), where we have an #ifdef where we test for
fp->compressedfp; the "#else" branch uses the same code as the
!fp->compressedfp.  I think your get_cfp_error can be written more
simply using that form.  (Also, your version probably errors or warns
about "code before declaration" in that routine, which is not allowed in
C89.)

--
Álvaro Herrera  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/src/bin/pg_dump/compress_io.c b/src/bin/pg_dump/compress_io.c
index 991fe11e8a..28e5b417c2 100644
--- a/src/bin/pg_dump/compress_io.c
+++ b/src/bin/pg_dump/compress_io.c
@@ -709,5 +709,22 @@ hasSuffix(const char *filename, const char *suffix)
   suffix,
   suffixlen) == 0;
 }
+#endif
 
+const char *
+get_cfp_error(cfp* fp)
+{
+#ifdef HAVE_LIBZ
+	if(fp->compressedfp){
+		int errnum;
+		static const char fallback[] = "Zlib error";
+		const int maxlen = 255;
+		const char *errmsg = gzerror(fp->compressedfp, &errnum);
+		if(!errmsg || !memchr(errmsg, 0, maxlen))
+			errmsg = fallback;
+
+		return errnum == Z_ERRNO ? strerror(errno) : errmsg;
+	}
 #endif
+	return strerror(errno);
+}
diff --git a/src/bin/pg_dump/compress_io.h b/src/bin/pg_dump/compress_io.h
index 8f2e752cba..06b3762233 100644
--- a/src/bin/pg_dump/compress_io.h
+++ b/src/bin/pg_dump/compress_io.h
@@ -65,5 +65,6 @@ extern int	cfgetc(cfp *fp);
 extern char *cfgets(cfp *fp, char *buf, int len);
 extern int	cfclose(cfp *fp);
 extern int	cfeof(cfp *fp);
+extern const char * get_cfp_error(cfp *fp);
 
 #endif
diff --git a/src/bin/pg_dump/pg_backup_directory.c b/src/bin/pg_dump/pg_backup_directory.c
index 79922da8ba..e9c26bc571 100644
--- a/src/bin/pg_dump/pg_backup_directory.c
+++ b/src/bin/pg_dump/pg_backup_directory.c
@@ -352,7 +352,7 @@ _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
 	lclContext *ctx = (lclContext *) AH->formatData;
 
 	if (dLen > 0 && cfwrite(data, dLen, ctx->dataFH) != dLen)
-		WRITE_ERROR_EXIT;
+		exit_horribly(modulename, "could not write to output file: %s\n", get_cfp_error(ctx->dataFH));
 
 	return;
 }
@@ -490,7 +490,7 @@ _WriteByte(ArchiveHandle *AH, const int i)
 	lclContext *ctx = (lclContext *) AH->formatData;
 
 	if (cfwrite(&i, 1, ctx->dataFH) != 1)
-		WRITE_ERROR_EXIT;
+		exit_horribly(modulename, "could not write to output file: %s\n", get_cfp_error(ctx->dataFH));
 
 	return 1;
 }
@@ -519,7 +519,7 @@ _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len)
 	lclContext *ctx = (lclContext *) AH->formatData;
 
 	if (cfwrite(buf, len, ctx->dataFH) != len)
-		WRITE_ERROR_EXIT;
+		exit_horribly(modulename, "could not write to output file: %s\n", get_cfp_error(ctx->dataFH));
 
 	return;
 }
diff --git a/src/bin/pg_dump/pg_backup_tar.c b/src/bin/pg_dump/pg_backup_tar.c
index f839712945..f17ce78aca 100644
--- a/src/bin/pg_dump/pg_backup_tar.c
+++ b/src/bin/pg_dump/pg_backup_tar.c
@@ -35,6 +35,7 @@
 #include "pgtar.h"
 #include "common/file_utils.h"
 #include "fe_utils/string_utils.h"
+#include "compress_io.h"
 
 #include 
 #include 
@@ -555,8 +556,12 @@ _tarReadRaw(ArchiveHandle *AH, void *buf, size_t len, TAR_MEMBER *th, FILE *fh)
 			{
 res = GZREAD(&((char *) buf)[used], 1, len, th->zFH);
 if (res != len && !GZEOF(th->zFH))
+{
+	int errnum;
+	const char *errmsg = gzerror(th->zFH, &errnum);
 	exit_horribly(modulename,
-  "could not read from input file: %s\n", strerror(errno));
+  "could not read from input file: %s\n", errnum == Z_ERRNO ? strerror(errno) : errmsg);
+}
 			}
 			else
 			{



Re: [HACKERS] pl/perl extension fails on Windows

2017-07-27 Thread Ashutosh Sharma
Hi All,

On Wed, Jul 26, 2017 at 7:56 PM, Ashutosh Sharma  wrote:
> On Wed, Jul 26, 2017 at 8:51 AM, Tom Lane  wrote:
>> Robert Haas  writes:
>>> Based on discussion downthread, it seems like what we actually need to
>>> do is update perl.m4 to extract CCFLAGS.  Turns out somebody proposed
>>> a patch for that back in 2002:
>>> https://www.postgresql.org/message-id/Pine.LNX.4.44.0211051045070.16317-20%40wotan.suse.de
>>> It seems to need a rebase.  :-)
>>
>> Ah-hah, I *thought* we had considered the question once upon a time.
>> There were some pretty substantial compatibility concerns raised in that
>> thread, which is doubtless why it's still like that.
>>
>> My beef about inter-compiler compatibility (if building PG with a
>> different compiler from that used for Perl) could probably be addressed by
>> absorbing only -D switches from the Perl flags.  But Peter seemed to feel
>> that even that could break things, and I worry that he's right for cases
>> like -D_FILE_OFFSET_BITS which affect libc APIs.  Usually we'd have made
>> the same decisions as Perl for that sort of thing, but if we didn't, it's
>> a mess.
>>
>> I wonder whether we could adopt some rule like "absorb -D switches
>> for macros whose names do not begin with an underscore".  That's
>> surely a hack and three-quarters, but it seems safer than just
>> absorbing everything willy-nilly.
>>
>
> Thanks Robert, Tom, Andrew and Amit for all your inputs. I have tried
> to work on the patch shared by Reinhard a long time back for Linux. I
> had to rebase the patch and also had to add some more lines of code
> to make it work on Linux. For Windows, I had to prepare a separate
> patch to replicate similar behaviour.  I can see that both these
> patches are working as expected, i.e. they are able to import the switches
> used by Perl into the plperl module during build time.  However, on
> Windows I am still seeing the crash and I am still working to find the
> reason for this.  Here, I attach the patches that I have prepared for
> Linux and Windows platforms. Thanks.

There was a small problem with my patch for Windows that I had
submitted yesterday. It was reading the switches in '-D*' form and
including those as-is in the plperl build. In fact, it should be
removing the '-D' option (from, say, -DPERL_CORE) and just adding the macro
name (PERL_CORE) to the define list using AddDefine('PERL_CORE').
Anyway, attached is the patch that corrects this issue. The patch now
imports all the switches used by Perl into the plperl module but, after
doing so, I am seeing some compilation errors on Windows. Following is
the error observed:

SPI.obj : error LNK2019: unresolved external symbol PerlProc_setjmp
referenced in function do_plperl_return_next

I did some analysis to find the reason for this error and
could see that for Windows, sigsetjmp is defined as setjmp in PG. But
setjmp is also defined by Perl. Hence, after including the Perl header
files, setjmp gets redefined as 'PerlProc_setjmp'.

In short, we have the 'src/pl/plperl/SPI.c' file including 'perl.h'
followed by 'XSUB.h'. perl.h defines PerlProc_setjmp() as,

#define PerlProc_setjmp(b, n) Sigsetjmp((b), (n))

and Sigsetjmp is defined as:

#define Sigsetjmp(buf,save_mask) setjmp((buf))

Then XSUB.h redefines setjmp as:

# define setjmp PerlProc_setjmp

which basically creates a loop in the preprocessor definitions.

Another problem with setjmp() redefinition is that setjmp takes one
argument whereas PerlProc_setjmp takes two arguments.

To fix this compilation error I have removed the setjmp() redefinition
from XSUB.h in Perl, and after doing that the build succeeds and the
'create language plperl' command also works fine, i.e. there is no more
server crash.

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com


plperl_win_v2.patch
Description: Binary data



Re: [HACKERS] Transactions involving multiple postgres foreign servers

2017-07-27 Thread Ashutosh Bapat
On Thu, Jul 27, 2017 at 6:58 AM, Robert Haas  wrote:
>
> On a technical level, I am pretty sure that it is not OK to call
> AtEOXact_FDWXacts() from the sections of CommitTransaction,
> AbortTransaction, and PrepareTransaction that are described as
> "non-critical resource releasing".  At that point, it's too late to
> throw an error, and it is very difficult to imagine something that
> involves a TCP connection to another machine not being subject to
> error.  You might say "well, we can just make sure that any problems
> are reporting as a WARNING rather than an ERROR", but that's pretty
> hard to guarantee; most backend code assumes it can ERROR, so anything
> you call is a potential hazard.  There is a second problem, too: any
> code that runs from here is not interruptible.  The user can hit ^C
> all day and nothing will happen.  That's a bad situation when you're
> busy doing network I/O.  I'm not exactly sure what the best thing to
> do about this problem would be.
>

The remote transaction can be committed/aborted only after the fate of
the local transaction is decided. If we commit the remote transaction and
abort the local transaction, that's not good. AtEOXact* functions are
called immediately after that decision in the post-commit/abort phase. So,
if we want to commit/abort the remote transaction immediately, it has
to be done in post-commit/abort processing. Instead, if we delegate
that to the remote transaction resolver backend (introduced by the
patches), the delay between local commit and remote commits depends
upon when the resolver gets a chance to run and process those
transactions. One could argue that that delay would anyway exist when
post-commit/abort processing fails to resolve the remote transaction. But
given today's high-availability setups, in most cases the remote
transaction will be resolved in the post-commit/abort phase. I
think we should optimize for the most common case. Your concern is still
valid that we shouldn't raise an error or do anything critical in the
post-commit/abort phase. So we should devise a way to send
COMMIT/ABORT PREPARED messages to the remote server in an asynchronous
fashion, carefully avoiding errors. Recent changes to 2PC have improved
performance in that area to a great extent. Relying on the resolver
backend to resolve remote transactions would erode that performance
gain.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] GSoC 2017: Foreign Key Arrays

2017-07-27 Thread Mark Rofail
On Thu, Jul 27, 2017 at 12:54 PM, Alexander Korotkov 
wrote:
>
> How many rows of the FK table were referencing the PK table row you're
> updating/deleting?
> I wonder how the RI trigger may work so fast if it has to do some job besides
> an index search with no results?
>
The problem here is that the only two options for foreign key arrays are
NO ACTION and RESTRICT, which don't allow me to update/delete a referenced
row in the PK table. The EXPLAIN ANALYZE only tells me that this violates
the FK constraint.

So we have two options: either implement CASCADE, or find a
configuration for EXPLAIN that shows costs even if the query violates the FK
constraints.


> I think we should also vary the number of referencing rows.
>
The x axis is the number of referencing rows in the FK table.


Re: [HACKERS] [POC] hash partitioning

2017-07-27 Thread amul sul
Attaching newer patches rebased against the latest master head. Thanks!

Regards,
Amul


0001-Cleanup_v6.patch
Description: Binary data


0002-hash-partitioning_another_design-v16.patch
Description: Binary data



Re: [HACKERS] Transactions involving multiple postgres foreign servers

2017-07-27 Thread Robert Haas
On Thu, Jul 27, 2017 at 5:08 AM, Stas Kelvich  wrote:
> As far as I understand, any solution that provides proper isolation for
> distributed transactions in Postgres will require distributed 2PC somewhere
> under the hood. That is just a consequence of the parallelism that the
> database allows — transactions can abort due to concurrent operations. So the
> dichotomy is simple: either we need 2PC or we restrict write transactions to
> be physically serial.
>
> In particular both Postgres-XL/XC and postgrespro multimaster are using 2PC to
> commit distributed transaction.

Ah, OK.  I was imagining that a transaction manager might be
responsible for managing both snapshots and distributed commit.  But
if the transaction manager only handles the snapshots (how?) and the
commit has to be done using 2PC, then we need this.

> Also I see quite a big value in this patch even without tm/snapshots/whatever.
> Right now fdw guarantees neither isolation nor atomicity. And if one isn't
> doing cross-node analytical transactions it will be safe to live without
> isolation. But living without atomicity means that some parts of the data can
> be lost without a simple way to detect and fix that.

OK, thanks for weighing in.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] GSoC 2017: Foreign Key Arrays

2017-07-27 Thread Alexander Korotkov
On Thu, Jul 27, 2017 at 3:31 AM, Mark Rofail  wrote:

> I have written some benchmark test.
>
> With two tables a PK table with 5 rows and an FK table with growing row
> count.
>
> Once triggering an RI check
> at 10 rows,
> 100 rows,
> 1,000 rows,
> 10,000 rows,
> 100,000 rows and
> 1,000,000 rows
>

How many rows of the FK table were referencing the PK table row you're
updating/deleting?
I wonder how the RI trigger may work so fast if it has to do some job besides
an index search with no results?
I think we should also vary the number of referencing rows.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] asynchronous execution

2017-07-27 Thread Robert Haas
On Wed, Jul 26, 2017 at 5:43 PM, Tom Lane  wrote:
> I have not been paying any attention to this thread whatsoever,
> but I wonder if you can address your problem by building on top of
> the ExecProcNode replacement that Andres is working on,
> https://www.postgresql.org/message-id/20170726012641.bmhfcp5ajpudi...@alap3.anarazel.de
>
> The scheme he has allows $extra_stuff to be injected into ExecProcNode at
> no cost when $extra_stuff is not needed, because you simply don't insert
> the wrapper function when it's not needed.  I'm not sure that it will
> scale well to several different kinds of insertions though, for instance
> if you wanted both instrumentation and async support on the same node.
> But maybe those two couldn't be arms-length from each other anyway,
> in which case it might be fine as-is.

Yeah, I don't quite see how that would apply in this case -- what we
need here is not as simple as just conditionally injecting an extra
bit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] A couple of postgresql.conf.sample discrepancies

2017-07-27 Thread Michael Paquier
On Thu, Jul 27, 2017 at 10:27 AM, Aleksander Alekseev
 wrote:
>> Here's a script that reminds you about GUCs you forgot to put in
>> postgresql.conf.sample.  It probably needs some work.  Does this
>> already happen somewhere else?  I guess not, because it found two
>> discrepancies:
>>
>> $ ./src/tools/check_sample_config.pl
>> enable_gathermerge appears in guc.c but not in postgresql.conf.sample
>> trace_recovery_messages appears in guc.c but not in postgresql.conf.sample
>
> I like the idea. However maybe it worth considering to turn it into a
> TAP test? Otherwise there is a good chance everybody will forget to run
> it. For similar reason I would advise to add this patch to the next
> commitfest.

Bonus points if the script can detect that a parameter's comment
forgets to include "(change requires restart)".
-- 
Michael




Re: [HACKERS] Increase Vacuum ring buffer.

2017-07-27 Thread Masahiko Sawada
On Thu, Jul 27, 2017 at 5:48 PM, Sokolov Yura
 wrote:
> On 2017-07-27 11:30, Masahiko Sawada wrote:
>>
>> On Tue, Jul 25, 2017 at 2:27 AM, Claudio Freire 
>> wrote:
>>>
>>> On Mon, Jul 24, 2017 at 2:20 PM, Claudio Freire 
>>> wrote:

 On Mon, Jul 24, 2017 at 2:10 PM, Sokolov Yura
  wrote:
>
> On 2017-07-24 19:11, Claudio Freire wrote:
>>
>> I was mostly thinking about something like the attached patch.
>>
>> Simple, unintrusive, and shouldn't cause any noticeable slowdown.
>
>
>
> Your change is small, clear, and currently useful for huge tables under
> high update load (until "allowing vacuum to use more than 1GB memory"
> is merged).


 In high-bloat conditions, it doesn't take long to accumulate 1GB of
 dead tuples (which is about 178M tuples, btw).

 The index scan takes way longer than the heap scan in that case.

> But it still delays updating fsm until whole first batch of dead tuples
> cleared (ie all indices scanned, and all heap pages cleared), and on
> such
> huge table it will be hours.


 So, true, it will get delayed considerably. But as you realized,
 there's not much point in trying to vacuum the FSM sooner, since it
 won't be accurate shortly afterwards anyway. Dead line pointers do use
 up a fair bit of space, especially on narrow tables.

 In a particular table I have that exhibits this problem, most of the
 time is spent scanning the index. It performs dozens of index scans
 before it's done, so it would vacuum the FSM quite often enough, even
 if I were to increase the mwm setting n-fold.
>>>
>>>
>>> I hate to reply to myself, but I wanted to add: in any case, the case
>>> I'm trying to avoid is the case where the FSM *never* gets vacuumed.
>>> That's bad. But it may not be the phenomenon you're experiencing in
>>> your tests.
>>>
>>
>> I think frequently vacuuming the FSM during a long-running vacuum would
>> be worth having in order to avoid table bloat. The patch
>> triggers an FSM vacuum after vacuuming the table and indexes, but I
>> think that we can have a similar mechanism for a table with no index.
>>
>> Regards,
>>
>> --
>> Masahiko Sawada
>> NIPPON TELEGRAPH AND TELEPHONE CORPORATION
>> NTT Open Source Software Center
>
>
> I could be wrong, but it looks like a table without an index doesn't
> suffer that much. Since there are no indices, there is only one stage -
> scanning the heap - and no quadratic behavior because of a full dead-tuple
> array and repeated index vacuuming.
>

Vacuuming a very large table with no index could also take a
long time, and it scans and vacuums blocks one by one. So I imagined
that we could vacuum the FSM once a certain number of blocks have been
vacuumed. That would avoid bloating the table during a long-running vacuum.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Transactions involving multiple postgres foreign servers

2017-07-27 Thread Stas Kelvich

> On 27 Jul 2017, at 04:28, Robert Haas  wrote:
> 
>  However, you would not
> be guaranteed that all of those commits or rollbacks happen at
> anything like the same time.  So, you would have a sort of eventual
> consistency.

As far as I understand, any solution that provides proper isolation for
distributed transactions in Postgres will require distributed 2PC somewhere
under the hood. That is just a consequence of the parallelism that the database
allows — transactions can abort due to concurrent operations. So the dichotomy
is simple: either we need 2PC or we restrict write transactions to be
physically serial.

In particular both Postgres-XL/XC and postgrespro multimaster are using 2PC to
commit distributed transaction.

Some years ago we created patches to implement a transaction manager API, which
is just a way to inject consistent snapshots on different nodes; atomic commit
itself is out of scope of the TM API (hmm, maybe it is better to call it a
snapshot manager API?). That allows us to use it in quite different
environments, like fdw and logical replication, and both are using 2PC.

I want to submit the TM API again during this release cycle, along with an
implementation for fdw. And I planned to base it on top of this patch. So I
have already rebased Masahiko's patch to current -master and started writing a
long list of nitpicks, but have not finished yet.

Also I see quite a big value in this patch even without tm/snapshots/whatever.
Right now fdw guarantees neither isolation nor atomicity. And if one isn't
doing cross-node analytical transactions it will be safe to live without
isolation. But living without atomicity means that some parts of the data can
be lost without a simple way to detect and fix that.


Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company






Re: [HACKERS] Remove old comments in dependencies.c and README.dependencies

2017-07-27 Thread atorikoshi

> Agreed.  Removed those comments.  Thanks for the patch.

Thanks!

On 2017/07/27 0:44, Alvaro Herrera wrote:

atorikoshi wrote:


Attached patch removes the comments about min_group_size.


Agreed.  Removed those comments.  Thanks for the patch.



--
Atsushi Torikoshi
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center





Re: [HACKERS] Increase Vacuum ring buffer.

2017-07-27 Thread Sokolov Yura

On 2017-07-26 20:28, Sokolov Yura wrote:

On 2017-07-26 19:46, Claudio Freire wrote:

On Wed, Jul 26, 2017 at 1:39 PM, Sokolov Yura
 wrote:

On 2017-07-24 12:41, Sokolov Yura wrote:
test_master_1/pretty.log

...

time   activity  tps  latency   stddev  min  max
11130 av+ch  198198ms374ms  7ms   1956ms
11160 av+ch  248163ms401ms  7ms   2601ms
11190 av+ch  321125ms363ms  7ms   2722ms
11220 av+ch 1155 35ms123ms  7ms   2668ms
11250 av+ch 1390 29ms 79ms  7ms   1422ms


vs


test_master_ring16_1/pretty.log
time   activity  tps  latency   stddev  min  max
11130 av+ch   26   1575ms635ms101ms   2536ms
11160 av+ch   25   1552ms648ms 58ms   2376ms
11190 av+ch   32   1275ms726ms 16ms   2493ms
11220 av+ch   23   1584ms674ms 48ms   2454ms
11250 av+ch   35   1235ms777ms 22ms   3627ms


That's a very huge change in latency for the worse

Are you sure that's the ring buffer's doing and not some methodology 
snafu?


Well, I tuned postgresql.conf so that there are no such
catastrophic slowdowns on the master branch (with default
settings such slowdowns happen quite frequently).
bgwriter_lru_maxpages = 10 (instead of the default 200) was one
such tuning.

Probably there is some magic "border" that triggers this
behavior. Tuning postgresql.conf shifted the master branch to the
"good side" of this border, and the faster autovacuum crossed it
to the "bad side" again.

Probably, backend_flush_after = 2MB (instead of the default 0) is
also part of this border. I haven't tried to benchmark without this
option yet.

Anyway, given that checkpoint and autovacuum interference can be
so noticeable, the checkpoint clearly should affect the autovacuum
cost mechanism, imho.

With regards,
--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company


I'll run two times with the default postgresql.conf (except
shared_buffers and maintenance_work_mem) to find out the behavior on
default settings.

Then I'll try to investigate checkpoint co-operation with
autovacuum to fix behavior with "tuned" postgresql.conf.

--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company




Re: [HACKERS] Increase Vacuum ring buffer.

2017-07-27 Thread Sokolov Yura

On 2017-07-27 11:30, Masahiko Sawada wrote:
On Tue, Jul 25, 2017 at 2:27 AM, Claudio Freire 
 wrote:
On Mon, Jul 24, 2017 at 2:20 PM, Claudio Freire 
 wrote:

On Mon, Jul 24, 2017 at 2:10 PM, Sokolov Yura
 wrote:

On 2017-07-24 19:11, Claudio Freire wrote:

I was mostly thinking about something like the attached patch.

Simple, unintrusive, and shouldn't cause any noticeable slowdown.



Your change is small, clear, and currently useful for huge tables under
high update load (until "allowing vacuum to use more than 1GB memory"
is merged).


In high-bloat conditions, it doesn't take long to accumulate 1GB of
dead tuples (which is about 178M tuples, btw).

The index scan takes way longer than the heap scan in that case.

But it still delays updating fsm until whole first batch of dead tuples
cleared (ie all indices scanned, and all heap pages cleared), and on such
huge table it will be hours.


So, true, it will get delayed considerably. But as you realized,
there's not much point in trying to vacuum the FSM sooner, since it
won't be accurate shortly afterwards anyway. Dead line pointers do use
up a fair bit of space, especially on narrow tables.

In a particular table I have that exhibits this problem, most of the
time is spent scanning the index. It performs dozens of index scans
before it's done, so it would vacuum the FSM quite often enough, even
if I were to increase the mwm setting n-fold.


I hate to reply to myself, but I wanted to add: in any case, the case
I'm trying to avoid is the case where the FSM *never* gets vacuumed.
That's bad. But it may not be the phenomenon you're experiencing in
your tests.



I think that vacuuming the FSM frequently during a long-running vacuum
would be worth having in order to avoid table bloat. The patch triggers
FSM vacuuming after the table and indexes have been vacuumed, but I
think we could have a similar mechanism for a table with no indexes.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


I could be wrong, but it looks like a table without indexes doesn't
suffer that much. Since there are no indexes, there is only one stage,
scanning the heap, and no quadratic behavior caused by the dead-tuple
array filling up and index vacuuming being repeated.
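To make the cost being discussed concrete, here is an illustrative Python model of the behavior described upthread. It is a sketch under stated assumptions, not PostgreSQL code: the 6-byte item-pointer size and the "one full scan of every index per dead-tuple-array fill" behavior follow the thread's description of vacuum before the "more than 1GB" patch; the function name and accounting are mine.

```python
import math

TUPLE_PTR_BYTES = 6  # an ItemPointerData (heap TID) is 6 bytes

def index_scan_passes(dead_tuples: int, maintenance_work_mem_mb: int,
                      n_indexes: int) -> int:
    """How many times vacuum must scan EVERY index: each time the
    in-memory dead-tuple array fills (its size is bounded by
    maintenance_work_mem), all indexes are scanned before the heap
    pages are cleaned and the FSM can finally be vacuumed."""
    if n_indexes == 0:
        # Heap-only vacuum: no index stage, dead line pointers are
        # reclaimed in the single heap scan, so no quadratic behavior.
        return 0
    capacity = (maintenance_work_mem_mb * 1024 * 1024) // TUPLE_PTR_BYTES
    return math.ceil(dead_tuples / capacity)

# ~178M dead tuples is roughly what fits in 1GB, as noted upthread.
dead = 178_000_000
print(index_scan_passes(dead, 1024, n_indexes=3))  # one pass with mwm = 1GB
print(index_scan_passes(dead, 64, n_indexes=3))    # many passes with small mwm
print(index_scan_passes(dead, 64, n_indexes=0))    # 0: no index stage at all
```

Each extra pass delays the heap-cleanup phase, which is why the FSM update gets pushed back by hours on a huge table, and why the no-index case is cheap in comparison.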

--
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company




Re: [HACKERS] Increase Vacuum ring buffer.

2017-07-27 Thread Masahiko Sawada
On Tue, Jul 25, 2017 at 2:27 AM, Claudio Freire  wrote:
> On Mon, Jul 24, 2017 at 2:20 PM, Claudio Freire  
> wrote:
>> On Mon, Jul 24, 2017 at 2:10 PM, Sokolov Yura
>>  wrote:
>>> On 2017-07-24 19:11, Claudio Freire wrote:
 I was mostly thinking about something like the attached patch.

 Simple, unintrusive, and shouldn't cause any noticeable slowdown.
>>>
>>>
>>> Your change is small, clear, and currently useful for huge tables under
>>> high update load (until "allowing vacuum to use more than 1GB memory"
>>> is merged).
>>
>> In high-bloat conditions, it doesn't take long to accumulate 1GB of
>> dead tuples (which is about 178M tuples, btw).
>>
>> The index scan takes way longer than the heap scan in that case.
>>
>>> But it still delays updating fsm until whole first batch of dead tuples
>>> cleared (ie all indices scanned, and all heap pages cleared), and on such
>>> huge table it will be hours.
>>
>> So, true, it will get delayed considerably. But as you realized,
>> there's not much point in trying to vacuum the FSM sooner, since it
>> won't be accurate shortly afterwards anyway. Dead line pointers do use
>> up a fair bit of space, especially on narrow tables.
>>
>> In a particular table I have that exhibits this problem, most of the
>> time is spent scanning the index. It performs dozens of index scans
>> before it's done, so it would vacuum the FSM quite often enough, even
>> if I were to increase the mwm setting n-fold.
>
> I hate to reply to myself, but I wanted to add: in any case, the case
> I'm trying to avoid is the case where the FSM *never* gets vacuumed.
> That's bad. But it may not be the phenomenon you're experiencing in
> your tests.
>

I think that vacuuming the FSM frequently during a long-running vacuum
would be worth having in order to avoid table bloat. The patch triggers
FSM vacuuming after the table and indexes have been vacuumed, but I
think we could have a similar mechanism for a table with no indexes.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] A couple of postgresql.conf.sample discrepancies

2017-07-27 Thread Aleksander Alekseev
Hi Thomas,

> Here's a script that reminds you about GUCs you forgot to put in
> postgresql.conf.sample.  It probably needs some work.  Does this
> already happen somewhere else?  I guess not, because it found two
> discrepancies:
> 
> $ ./src/tools/check_sample_config.pl
> enable_gathermerge appears in guc.c but not in postgresql.conf.sample
> trace_recovery_messages appears in guc.c but not in postgresql.conf.sample

I like the idea. However, maybe it's worth considering turning it into
a TAP test? Otherwise there is a good chance everybody will forget to
run it. For a similar reason, I would advise adding this patch to the
next commitfest.
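Thomas's actual script is the Perl attachment on the thread; as a rough illustration of the cross-check it performs, here is a hedged Python sketch. The embedded snippets and the regex are simplified stand-ins for the real guc.c entry format, not a faithful parser:

```python
import re

# Hypothetical miniature stand-ins for the two inputs; a real check would
# read src/backend/utils/misc/guc.c and postgresql.conf.sample from a tree.
GUC_C = '''
    {"enable_gathermerge", PGC_USERSET, QUERY_TUNING_METHOD, ...},
    {"trace_recovery_messages", PGC_SUSET, DEVELOPER_OPTIONS, ...},
    {"work_mem", PGC_USERSET, RESOURCES_MEM, ...},
'''
SAMPLE_CONF = '''
#work_mem = 4MB
'''

def missing_from_sample(guc_c: str, sample: str) -> list:
    """Return GUC names defined in guc.c but absent from the sample file."""
    gucs = re.findall(r'\{"([a-z_]+)",\s*PGC_', guc_c)
    return sorted(g for g in gucs if g not in sample)

print(missing_from_sample(GUC_C, SAMPLE_CONF))
# flags enable_gathermerge and trace_recovery_messages, as in Thomas's output
```

Wrapping exactly this kind of comparison in a TAP test would make the check run automatically in the buildfarm instead of relying on someone remembering to invoke it.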

-- 
Best regards,
Aleksander Alekseev


signature.asc
Description: PGP signature


Re: [HACKERS] AlterUserStmt and RoleSpec rules in grammar.y

2017-07-27 Thread Pavel Golub
Hello, Tom.

You wrote:

TL> Pavel Golub  writes:
>> I need someone to throw some light on grammar (gram.y).
>> I'm investigating beta2 regression tests, and found new statement

>> `ALTER USER ALL SET application_name to 'SLAP';`
>> ^^^

TL> You'll notice that that statement fails in the regression tests:

TL> ALTER USER ALL SET application_name to 'SLAP';
TL> ERROR:  syntax error at or near "ALL"

One more note: ALTER USER ALL works in the EnterpriseDB 10beta2
installer. That's weird; I thought EnterpriseDB used official sources.

TL> The one that works is

TL> ALTER ROLE ALL SET application_name to 'SLAP';

TL> and the reason is that AlterRoleSetStmt has a separate production
TL> for ALL, but AlterUserSetStmt doesn't.  This seems a tad bizarre
TL> though.  Peter, you added that production (in commit 9475db3a4);
TL> is this difference intentional or just an oversight?  If it's
TL> intentional, what's the reasoning?

TL> BTW, I'm quite confused as to why these test cases (in rolenames.sql)
TL> seem to predate that commit, and yet it did not change their results.

TL> regards, tom lane



-- 
With best wishes,
 Pavel  mailto:pa...@gf.microolap.com





Re: [HACKERS] AlterUserStmt and RoleSpec rules in grammar.y

2017-07-27 Thread Pavel Golub
Hello, Tom.

You wrote:

TL> Pavel Golub  writes:
>> I need someone to throw some light on grammar (gram.y).
>> I'm investigating beta2 regression tests, and found new statement

>> `ALTER USER ALL SET application_name to 'SLAP';`
>> ^^^

TL> You'll notice that that statement fails in the regression tests:

TL> ALTER USER ALL SET application_name to 'SLAP';
TL> ERROR:  syntax error at or near "ALL"

Oops! My bad!

TL> The one that works is

TL> ALTER ROLE ALL SET application_name to 'SLAP';

TL> and the reason is that AlterRoleSetStmt has a separate production
TL> for ALL, but AlterUserSetStmt doesn't.

Yeah, I now see the separate rule for ALL in AlterRoleSetStmt.

TL> This seems a tad bizarre
TL> though.  Peter, you added that production (in commit 9475db3a4);
TL> is this difference intentional or just an oversight?  If it's
TL> intentional, what's the reasoning?

TL> BTW, I'm quite confused as to why these test cases (in rolenames.sql)
TL> seem to predate that commit, and yet it did not change their results.

TL> regards, tom lane



-- 
With best wishes,
 Pavel  mailto:pa...@gf.microolap.com


