Re: [HACKERS] [PATCH] quiet conversion warning in DatumGetFloat4

2017-05-30 Thread Tom Lane
Chapman Flack  writes:
> On 05/31/17 01:26, Tom Lane wrote:
>> Hm.  I think it would be better to use DatumGetInt32 here.  Arguably,
>> direct use of GET_4_BYTES and its siblings should only appear in
>> DatumGetFoo macros.

> Like so?

Looks promising offhand, but I'm too tired to check in detail right now.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] quiet conversion warning in DatumGetFloat4

2017-05-30 Thread Chapman Flack
On 05/31/17 01:26, Tom Lane wrote:
> Hm.  I think it would be better to use DatumGetInt32 here.  Arguably,
> direct use of GET_4_BYTES and its siblings should only appear in
> DatumGetFoo macros.

Like so? These are the 4 sites where {GET,SET}_n_BYTES got introduced
in 14cca1b (for consistency, though only the GET_4 case produces warnings).

-Chap
--- src/include/postgres.h	2017-05-31 01:36:16.621829183 -0400
+++ src/include/postgres.h	2017-05-31 01:45:51.303427115 -0400
@@ -679,7 +679,7 @@
 		float4		retval;
 	}			myunion;
 
-	myunion.value = GET_4_BYTES(X);
+	myunion.value = DatumGetInt32(X);
 	return myunion.retval;
 }
 #else
@@ -704,7 +704,7 @@
 	}			myunion;
 
 	myunion.value = X;
-	return SET_4_BYTES(myunion.retval);
+	return Int32GetDatum(myunion.retval);
 }
 #else
 extern Datum Float4GetDatum(float4 X);
@@ -727,7 +727,7 @@
 		float8		retval;
 	}			myunion;
 
-	myunion.value = GET_8_BYTES(X);
+	myunion.value = DatumGetInt64(X);
 	return myunion.retval;
 }
 #else
@@ -753,7 +753,7 @@
 	}			myunion;
 
 	myunion.value = X;
-	return SET_8_BYTES(myunion.retval);
+	return Int64GetDatum(myunion.retval);
 }
 #else
 extern Datum Float8GetDatum(float8 X);



Re: [HACKERS] [PATCH] quiet conversion warning in DatumGetFloat4

2017-05-30 Thread Tom Lane
Chapman Flack  writes:
> - myunion.value = GET_4_BYTES(X);
> + myunion.value = (int32)GET_4_BYTES(X);

Hm.  I think it would be better to use DatumGetInt32 here.  Arguably,
direct use of GET_4_BYTES and its siblings should only appear in
DatumGetFoo macros.

regards, tom lane




Re: [HACKERS] TAP backpatching policy

2017-05-30 Thread Stephen Frost
Tom,

On Wed, May 31, 2017 at 01:16 Tom Lane  wrote:

> Stephen Frost  writes:
> > In the end, the experiences I've had with pg_dump of late, trying to
> > ensure that pg_dump 9.6 is able to work all the way back to *7.0*, make
> > me think that this notion of putting the one-and-only real test-suite we
> > have into the core repo is almost laughable.
>
> um... why are you trying to do that?  We moved the goalposts last fall:
>
> Author: Tom Lane 
> Branch: master [64f3524e2] 2016-10-12 12:20:02 -0400
>
> Remove pg_dump/pg_dumpall support for dumping from pre-8.0 servers.


Right... for v10. Last I checked we still wish for pg_dump 9.6 to work
against 7.0, as I mention in the portion you quote above.  I agree that we
don't need to ensure that v10 pg_dump works against 7.0, which is helpful,
certainly, but there's still a lot left on the floor in the back branches
as it relates to code coverage and the buildfarm.

Thanks!

Stephen


Re: [HACKERS] TAP backpatching policy

2017-05-30 Thread Tom Lane
Stephen Frost  writes:
> In the end, the experiences I've had with pg_dump of late, trying to
> ensure that pg_dump 9.6 is able to work all the way back to *7.0*, make
> me think that this notion of putting the one-and-only real test-suite we
> have into the core repo is almost laughable.

um... why are you trying to do that?  We moved the goalposts last fall:

Author: Tom Lane 
Branch: master [64f3524e2] 2016-10-12 12:20:02 -0400

Remove pg_dump/pg_dumpall support for dumping from pre-8.0 servers.

regards, tom lane




Re: [HACKERS] ECPG: pg_type.h file is not synced

2017-05-30 Thread Michael Meskes
> Bruce Momjian  writes:
> > > > But I think the "ecpg/ecpglib/pg_type.h" file is currently not
> > > > synced
> > > > with the above file.
> > Should we add some Makefile magic to make sure they stay in sync?
> 
> Not so much that as teach genbki.pl to generate this file too.
> I think that might be a good idea, but personally I'd leave it until
> after the genbki rewrite we keep talking about.  This isn't important
> enough to make that rewrite have an even higher hurdle to clear.
> 
>   regards, tom lane

Agreed. We will only run into problems with ecpg if some of the
existing OIDs change. Newly added ones will not be referenced unless
ecpg receives changes, too.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Meskes at (Debian|Postgresql) dot Org
Jabber: michael at xmpp dot meskes dot org
VfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL




[HACKERS] [PATCH] quiet conversion warning in DatumGetFloat4

2017-05-30 Thread Chapman Flack
It seems that 14cca1b (use static inline functions for float <-> Datum
conversions) has an implicit narrowing conversion in one of those
functions.

If building an extension with gcc's -Wconversion warning enabled
(*cough* pljava *cough* ... the Maven plugin that runs the compiler
enables the warning by default), this makes for a noisy build.
The warning is harmless, but repeated everywhere postgres.h is
included. An explicit cast is enough to suppress it.

-Chap
--- src/include/postgres.h	2017-05-15 17:20:59.0 -0400
+++ src/include/postgres.h	2017-05-31 00:21:27.329393499 -0400
@@ -679,7 +679,7 @@
 		float4		retval;
 	}			myunion;
 
-	myunion.value = GET_4_BYTES(X);
+	myunion.value = (int32)GET_4_BYTES(X);
 	return myunion.retval;
 }
 #else
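For reference, the pattern at issue can be reproduced outside of postgres.h with a self-contained sketch (the Datum typedef and GET_4_BYTES macro here are simplified stand-ins for the real definitions in postgres.h, and a 64-bit Datum is assumed):

```c
#include <stdint.h>

typedef uint64_t Datum;			/* assumption: 64-bit Datum */
typedef float float4;

#define GET_4_BYTES(datum) ((uint32_t) ((datum) & 0xffffffff))

static inline float4
DatumGetFloat4(Datum X)
{
	union
	{
		int32_t		value;
		float4		retval;
	}			myunion;

	/*
	 * Without the cast, assigning the uint32_t result of GET_4_BYTES to the
	 * int32_t member trips gcc's -Wconversion; the explicit cast makes the
	 * (harmless) narrowing conversion intentional and silences the warning.
	 */
	myunion.value = (int32_t) GET_4_BYTES(X);
	return myunion.retval;
}

static inline Datum
Float4GetDatum(float4 X)
{
	union
	{
		float4		value;
		int32_t		retval;
	}			myunion;

	myunion.value = X;
	return (Datum) (uint32_t) myunion.retval;
}
```

Dropping the `(int32_t)` cast and compiling with `gcc -Wconversion` reproduces the warning on the `myunion.value` assignment; with the cast (or, per Tom's suggestion, by funneling through DatumGetInt32) the build is quiet.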



Re: [HACKERS] TAP backpatching policy

2017-05-30 Thread Stephen Frost
Tom,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost  writes:
> > * Craig Ringer (cr...@2ndquadrant.com) wrote:
> >> At the moment that'd be 9.5, since that's where PostgresNode was
> >> introduced. But if I can find the time I'd quite like to backport
> >> PostgresNode to 9.4 too.
> 
> > Makes sense to me.
> 
> Um ... but we still have 2 live pre-9.4 branches.  If your proposal
> doesn't extend to back-porting all of this stuff as far as 9.2,
> I don't see what this is really buying.  We'd still need version cutoff
> checks in the tests.

I don't believe the explicit goal of this is to remove the version
checks but rather to provide improved testing coverage in our
back-branches.  If we have to keep a version cutoff check for that, so
be it.

> (If you *do* propose back-patching all this stuff as far as 9.2, I'm not
> quite sure what I'd think about that.  But the proposal as stated seems
> like questionable half measures.)

I find that to be an extremely interesting idea, for my own 2c, but I'm
not sure how practical it is.

Based on all of the feedback and discussion, I'm really inclined to
suggest that we support an alternative testing structure alongside the
in-repo regression suite.  Such a testing structure would allow us to
have somewhat longer-running tests than what developers typically run
(frankly, we already have this, but we have perversely decided that it's
"ok" when it's behind a configure option or a #define, but seemingly not
otherwise), or tests built in a framework which simply didn't exist at
the time of the major release being tested (such as the TAP tests), but
wouldn't force the burden of supporting those tests onto our packagers
(which has the further advantage of avoiding pushing new requirements on
those packagers, requirements which might be quite difficult to
fulfill).

In the end, the experiences I've had with pg_dump of late, trying to
ensure that pg_dump 9.6 is able to work all the way back to *7.0*, make
me think that this notion of putting the one-and-only real test-suite we
have into the core repo is almost laughable.  We aren't going to back-patch
things into 7.0, nor are we likely to run 7.0 across all members of the
buildfarm, so how are we to test the combinations which we claim to
support?  On one-off builds/installs on individual developer systems
with, in some cases, hacked-up versions of PG (just to get them to
build!).  Is that really testing what we're worried about?  Perhaps in a
few cases, but I've little doubt that any serious 7.0-era deployment is
going to require more effort to migrate forward than running a 9.6
pg_dump against it.

That does push back a bit on whether we should really be trying to
support such ancient versions, but I have to admit that I'm as much of a
purist as anyone in that regard and I'd really hate to drop support for
those older branches too, at least for pg_dump, since that's how one
moves forward.

Thanks!

Stephen




Re: [HACKERS] TAP backpatching policy

2017-05-30 Thread Tom Lane
Stephen Frost  writes:
> * Craig Ringer (cr...@2ndquadrant.com) wrote:
>> At the moment that'd be 9.5, since that's where PostgresNode was
>> introduced. But if I can find the time I'd quite like to backport
>> PostgresNode to 9.4 too.

> Makes sense to me.

Um ... but we still have 2 live pre-9.4 branches.  If your proposal
doesn't extend to back-porting all of this stuff as far as 9.2,
I don't see what this is really buying.  We'd still need version cutoff
checks in the tests.

(If you *do* propose back-patching all this stuff as far as 9.2, I'm not
quite sure what I'd think about that.  But the proposal as stated seems
like questionable half measures.)

regards, tom lane




Re: [HACKERS] TAP backpatching policy

2017-05-30 Thread Stephen Frost
Craig,

* Craig Ringer (cr...@2ndquadrant.com) wrote:
> I propose that we backpatch all practical TAP enhancements back to the
> last supported back branch.

Thanks for bringing this up.  I tend to agree, as we really want better
testing of PostgreSQL in all of our branches, and being able to
back-patch tests, and to add more tests to all of the branches where
those tests apply, will help a great deal with that.

> At the moment that'd be 9.5, since that's where PostgresNode was
> introduced. But if I can find the time I'd quite like to backport
> PostgresNode to 9.4 too.

Makes sense to me.

> Where needed, PostgresNode could have version-dependent logic so we
> could just copy the Perl module to the different branches.

Agreed.

> This would make it much easier to test for bugs when doing work on
> back branches, run the same test on multiple branches, etc. My
> personal motivation is mainly using it in extensions, but I've
> _frequently_ found myself writing TAP tests when looking into possible
> Pg bugs etc too. Not having the same facilities in the different
> branches makes this way harder.

I also find it really unfortunate to write a set of tests but then not
be able to have them included and run across the buildfarm.  As much as
I'd like to think that testing locally identifies all issues, I've been
proven wrong enough to realize that it's really unlikely to ever be
true.

> If combined with pg_config --version-num (which IMO deserves a
> backpatch especially now Pg 10 makes parsing our versions harder) this
> would make it a lot more practical to do nontrivial tests in
> extensions - which really matters since we introduced bgworkers.

Presumably you mean nontrivial tests of extensions in *back-branches*,
here?  Or am I misunderstanding what you're getting at?

> Thoughts? Backpatch new TAP methods, etc, into back branches routinely?

For my 2c, at least, I'd like to see the project, in general, be much
more accepting of adding tests to all branches, and, really, to
encourage that the focus of the time between feature-freeze and release
be specifically on improving our testing code coverage, including the
coverage in back branches, as well as the to-be-released version.

Perhaps, one day in the far future, we will be able to look back in
amusement at the lack of code coverage we have these days, but we have
to really work to get there, and that means setting aside some period of
dedicated time during the year for writing tests, in my opinion, and not
being overly picky about whether those tests add code coverage for code
committed in Postgres95 or for PG10.  I would have thought that the
extent of the back-patching I've been doing of late for issues found in
pg_dump, back to the initial implementation of those parts, would point
out this need, but I'm afraid that there continues to be a general
feeling that "what we have is good enough", and I find that a very hard
pill to swallow when there are extremely important components of the
system which are, essentially, entirely untested in the buildfarm, such
as:

https://coverage.postgresql.org/src/backend/access/brin/brin_xlog.c.gcov.html
https://coverage.postgresql.org/src/backend/access/common/bufmask.c.gcov.html
https://coverage.postgresql.org/src/backend/access/gin/ginxlog.c.gcov.html
https://coverage.postgresql.org/src/backend/access/gist/gistbuildbuffers.c.gcov.html
https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html
https://coverage.postgresql.org/src/backend/access/hash/hash_xlog.c.gcov.html
https://coverage.postgresql.org/src/backend/access/nbtree/nbtxlog.c.gcov.html
https://coverage.postgresql.org/src/backend/access/spgist/spgxlog.c.gcov.html
https://coverage.postgresql.org/src/backend/access/transam/xlogfuncs.c.gcov.html
https://coverage.postgresql.org/src/backend/executor/nodeCustom.c.gcov.html
https://coverage.postgresql.org/src/backend/executor/tqueue.c.gcov.html

Let's, please, work together to correct this.

Thanks!

Stephen




Re: [HACKERS] Patch: Add --no-comments to skip COMMENTs with pg_dump

2017-05-30 Thread David G. Johnston
Stephen,

On Tue, May 30, 2017 at 8:41 PM, Stephen Frost  wrote:

> David,
>
> * David G. Johnston (david.g.johns...@gmail.com) wrote:
> > On Fri, May 26, 2017 at 7:47 AM, Stephen Frost 
> wrote:
> > > * Robins Tharakan (thara...@gmail.com) wrote:
> > > > Attached is a patch that adds a --no-comments argument to pg_dump to skip
> > > > generation of COMMENT statements when generating a backup. This is
> > > crucial
> > > > for non-superusers to restore a database backup in a Single
> Transaction.
> > > > Currently, this requires one to remove COMMENTs via scripts, which is
> > > > inelegant at best.
> > >
> > > Having --no-comments seems generally useful to me, in any case.
> >
> > It smacks of being excessive to me.
>
> I have a hard time having an issue with an option that exists to exclude
> a particular type of object from being in the dump.
>

Excessive with respect to the problem at hand.  A single comment in the
dump cannot be restored.  Because of that, we are going to require
people to omit every comment in their database.  The loss of all that
information is in excess of what is required to solve the stated
problem, which is how I was thinking of "excessive".

I agree that the general idea of allowing users to choose to include or
exclude comments is worth discussion along the same lines of large objects
and privileges.


> > > CREATE EXTENSION IF NOT EXISTS plpgsql ... COMMENT blah;
> >
> > A less verbose way to add comments to objects would be nice but we have
> > an immediate problem that we either need to solve or document a best
> > practice for.
>
> The above would be a solution to the immediate problem in as much as
> adding COMMENT IF NOT EXISTS would be.
>

I believe it would take a lot longer, possibly even until 12, to get the
linked comment feature committed compared to committing COMMENT IF NOT
EXISTS or some variation (or putting in a hack, for that matter).


> > COMMENT IF NOT EXISTS has been brought up but it doesn't actually map to
> > what seems to me is the underlying problem...that people don't want a
> > non-functional (usually...) aspect preventing successful restoration.
>
> I tend to disagree with this characterization- I'm of the opinion that
> people are, rightly, confused as to why we bother to try and add a
> COMMENT to an object which we didn't actually end up creating (as it
> already existed), and then throw an error on it to boot.


My characterization does describe the current system, though.  While I
don't doubt that people hold your belief, it reflects an underlying
misunderstanding of how PostgreSQL works, since comments aren't, as you
say below, actual attributes but rather objects in their own right.  I
would love to have someone solve the systemic problem here.  But the
actual complaint can probably be addressed without that.


>   Were pg_dump a
> bit more intelligent of an application, it would realize that once the
> CREATE ... IF NOT EXISTS returned a notice saying "this thing already
> existed" that it would realize that it shouldn't try to adjust the
> attributes of that object, as it was already existing.


pg_dump isn't in play during the restore, which is where the error occurs.

During restore you have pg_restore+psql - and having cross-statement logic
in those applications is likely a non-starter.  That is ultimately the
problem here, and it is indeed fixed by the outstanding proposal of
embedding COMMENT within the CREATE/ALTER object commands.  But today,
comments are independent objects, and solving the problem within that
domain will probably prove easier than changing how the system treats
comments.


> > COMMENT ON object TRY 'text'  -- i.e., replace the word IS with TRY
>
> Perhaps you could elaborate as to how this is different from IF NOT
> EXISTS?
>
>
If the comment on plpgsql were removed for some reason, the COMMENT ON
IF NOT EXISTS would fire and then fail due to permissions.  With "TRY",
whether the comment (or object, for that matter) exists or not, the new
comment would be attempted, and if the permission failure kicked in it
wouldn't care.


> > If the
> > affected users cannot make that work then maybe we should just remove the
> > comment from the extension.
>
> Removing the comment as a way to deal with our deficiency in this area
> strikes me as akin to adding planner hints.
>

Maybe, but the proposal you are supporting has been around for years,
with people generally in favor of it, and it hasn't happened yet.  At
some point I'd rather hold my nose and fix the problem myself than wait
for the professional to arrive and do it right.  And, hey, we've had
multiple planner hints since 8.4 ;)

David J.


Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-30 Thread Beena Emerson
On Wed, May 31, 2017 at 8:13 AM, Amit Langote
 wrote:
> On 2017/05/31 9:33, Amit Langote wrote:
>
>
> In get_rule_expr():
>
>  case PARTITION_STRATEGY_LIST:
>  Assert(spec->listdatums != NIL);
>
> +/*
> + * If the boundspec is for the default partition, it does
> + * not have a list of datums, but has only one node to
> + * indicate it's a default partition.
> + */
> +if (isDefaultPartitionBound(
> +(Node *) linitial(spec->listdatums)))
> +{
> +appendStringInfoString(buf, "DEFAULT");
> +break;
> +}
> +
>
> How about adding this part before the switch (key->strategy)?  That way,
> we won't have to come back and add this again when we add range default
> partitions.

I think it is best that we add a bool is_default to PartitionBoundSpec
and then have a general check for both list and range.  Though
listdatums, upperdatums and lowerdatums are set to default for a DEFAULT
partition, it does not seem proper that we check listdatums for range as
well.
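A simplified, self-contained sketch of that suggestion (the field name and the deparse helper here are illustrative; the real PartitionBoundSpec node and get_rule_expr() live in the backend and carry many more fields):

```c
#include <stdbool.h>
#include <string.h>

/*
 * Pared-down version of the node: a single is_default flag that covers
 * both strategies, checked before the per-strategy switch.
 */
typedef struct PartitionBoundSpec
{
	char		strategy;		/* 'l' = list, 'r' = range */
	bool		is_default;		/* proposed: true for the DEFAULT partition */
} PartitionBoundSpec;

static const char *
deparse_partition_bound(const PartitionBoundSpec *spec)
{
	/*
	 * Handle the default partition before switching on strategy, so the
	 * same check serves list and range partitions alike, instead of
	 * peeking into listdatums for both cases.
	 */
	if (spec->is_default)
		return "DEFAULT";

	switch (spec->strategy)
	{
		case 'l':
			return "FOR VALUES IN (...)";
		case 'r':
			return "FOR VALUES FROM (...) TO (...)";
	}
	return "?";
}
```

The point of the sketch is only the control flow: one flag read in one place, rather than a sentinel node hidden inside a strategy-specific datum list.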




-- 

Beena Emerson

EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pg_config --version-num

2017-05-30 Thread David G. Johnston
On Tue, May 30, 2017 at 8:07 PM, Tom Lane  wrote:

> Hm, but with this you're trading that problem for "is the right version
> of pg_config in my PATH?".
>

That is probably a solved problem for those who are parsing the output of
--version today.

>
> This idea might well be useful for external packages which are always
> built/tested against installed versions of Postgres.  But it seems like
> we need to think harder about what to do for our own usages, and that
> may lead to a different solution altogether.
>

While we may not want to go to the extreme that is Perl, there being
more than one way to do things does have merit.  Given that we run 5
concurrent releases of our software and put out new ones annually, the
version is a very important piece of information.  There being 5
different ways to get at that data is not inherently bad, since each way
is likely most useful to a different subset of our users.  To allow
people to scratch their own itch, here specifically, seems like an easy
win in making the project visibly responsive to the community.

David J.


Re: [JDBC] [HACKERS] Channel binding support for SCRAM-SHA-256

2017-05-30 Thread Stephen Frost
Robert,

* Robert Haas (robertmh...@gmail.com) wrote:
> but without actual interoperability testing it sounds pretty
> speculative to me.

I'm all for interoperability testing.

When we have multiple implementations of TLS using different libraries
with various versions of PostgreSQL and libpq and are able to test those
against other versions of PostgreSQL and libpq compiled with other TLS
libraries, I'll be downright ecstatic.  We are a small ways from that
right now, however, and I don't believe that we should be asking the
implementors of channel binding to also implement support for multiple
TLS libraries in PostgreSQL in order to test that their RFC-following
(at least, as far as they can tell) implementation actually works.

I'm not exactly sure what to characterize that as, given that the old
fall-back of "feature creep" feels woefully inadequate as a description.

Thanks!

Stephen




Re: [HACKERS] pg_config --version-num

2017-05-30 Thread David G. Johnston
On Tue, May 30, 2017 at 6:36 PM, Michael Paquier 
wrote:

> On Tue, May 30, 2017 at 6:14 PM, Craig Ringer 
> wrote:
> > Attached is a small patch to teach pg_config how to output a
> --version-num
> >
> > With Pg 10, parsing versions got more annoying. Especially with
> > "10beta1", "9.6beta2" etc into the mix. It makes no sense to force
> > tools and scripts to do this when we can just expose a sensible
> > pre-formatted one at no cost to us.
> >
> > Personally I'd like to backpatch this into supported back branches,
> > but just having it in pg 10 would be a help.
>
> The last threads on the same subject are here:
> https://www.postgresql.org/message-id/CAB7nPqTAdAJpX8iK4V3uYJbO2Kmo8
> rhzqjkdsladdrannrg...@mail.gmail.com


Tom's comment here:

"whereas adding a pg_config option
entails quite a lot of overhead (documentation, translatable help text,
yadda yadda)."

The proposal doesn't touch the first item, and patch authors aren't
expected to handle the second.  I'm not sure what all the rest
entails... but I cannot imagine it amounts to much more than boilerplate
text paralleling what is already out there for the existing --version
option.  If someone is willing to do that, I'd think we should be OK
with the little bit of translation that would need to occur because of
it.

The fact that this is even on the radar means that more than likely there
are sensible uses for this capability whether they've been adequately
presented or not.  We don't have someone begging for help here but rather
ultimately a complete patch that can be committed and which would require
pretty much zero maintenance.

While I don't do it presently, I could very well imagine value in being
able to inspect installed versions of PostgreSQL, including patch
levels, without needing a running server process or the ability to log
in.

David J.
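For illustration, the numeric form that --version-num would expose follows the existing PG_VERSION_NUM convention, sketched below (a reconstruction of the encoding rule for illustration, not the actual pg_config code):

```c
/*
 * PG_VERSION_NUM encodes a release as a single comparable integer:
 * pre-10 branches use major1 * 10000 + major2 * 100 + minor (9.6.3 -> 90603),
 * while version 10 and later use major * 10000 + minor (10.1 -> 100001).
 * This is what makes numeric comparison trivial where parsing strings like
 * "10beta1" or "9.6beta2" is not.
 */
static int
version_num(int major1, int major2, int minor)
{
	if (major1 >= 10)
		return major1 * 10000 + minor;	/* two-part version numbers from v10 on */
	return major1 * 10000 + major2 * 100 + minor;
}
```

A script can then test, say, `version_num >= 90500` instead of string-matching "9.5", "10beta1", and friends.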


Re: [HACKERS] Patch: Add --no-comments to skip COMMENTs with pg_dump

2017-05-30 Thread Stephen Frost
David,

* David G. Johnston (david.g.johns...@gmail.com) wrote:
> On Fri, May 26, 2017 at 7:47 AM, Stephen Frost  wrote:
> > * Robins Tharakan (thara...@gmail.com) wrote:
> > > Attached is a patch that adds a --no-comments argument to pg_dump to skip
> > > generation of COMMENT statements when generating a backup. This is
> > crucial
> > > for non-superusers to restore a database backup in a Single Transaction.
> > > Currently, this requires one to remove COMMENTs via scripts, which is
> > > inelegant at best.
> >
> > Having --no-comments seems generally useful to me, in any case.
> 
> It smacks of being excessive to me.

I have a hard time having an issue with an option that exists to exclude
a particular type of object from being in the dump.  A user who never
uses large objects/blobs might feel the same way about "--no-blobs", or
a user who never uses ACLs wondering why we have a "--no-privileges".
In the end, these are also possible components that users may wish to
have for their own reasons.

What I, certainly, agree isn't ideal is requiring users to use such an
option to generate a database-level dump file (assuming they have access
to more-or-less all database objects) which can be restored as a
non-superuser, that's why I was a bit hesitant about this particular
solution to the overall problem.

I do agree that if there is simply no use-case, ever, for wishing to
strip comments from a database then it might be excessive to provide
such an option, but I tend to feel that there is a reasonable, general,
use-case for the option.

> > CREATE EXTENSION IF NOT EXISTS plpgsql ... COMMENT blah;
> 
> A less verbose way to add comments to objects would be nice but we have an
> immediate problem that we either need to solve or document a best practice
> for.

The above would be a solution to the immediate problem in as much as
adding COMMENT IF NOT EXISTS would be.

> COMMENT IF NOT EXISTS has been brought up but it doesn't actually map to
> what seems to me is the underlying problem...that people don't want a
> non-functional (usually...) aspect preventing successful restoration.

I tend to disagree with this characterization- I'm of the opinion that
people are, rightly, confused as to why we bother to try and add a
COMMENT to an object which we didn't actually end up creating (as it
already existed), and then throw an error on it to boot.  Were pg_dump a
bit more intelligent of an application, it would realize that once the
CREATE ... IF NOT EXISTS returned a notice saying "this thing already
existed" that it would realize that it shouldn't try to adjust the
attributes of that object, as it was already existing.  That, however,
would preclude the ability of pg_dump to produce a text file used for
restore, unless we started to write into that text file DO blocks, which
I doubt would go over very well.

> COMMENT ON object TRY 'text'  -- i.e., replace the word IS with TRY

I'm not sure that I see why we should invent wholly new syntax for
something which we already have in IF NOT EXISTS, or why this should
really be viewed or considered any differently from the IF NOT EXISTS
syntax.

Perhaps you could elaborate as to how this is different from IF NOT
EXISTS?

> If the attempt to comment fails for any reason log a warning (maybe) but
> otherwise ignore the result and continue on without invoking an error.

In the IF NOT EXISTS case we log a NOTICE, which seems like what would
be appropriate here also, again begging the question of whether this is
really different from IF NOT EXISTS in a meaningful way.

> One suggestion I've seen is to simply "COMMENT ON EXTENSION plpgsql IS
> NULL;"  If that works in the scenarios people are currently dealing with
> then I'd say we should advise that such an action be taken for those who
> wish to generate dumps that can be loaded by non-super-users.  If the
> affected users cannot make that work then maybe we should just remove the
> comment from the extension.  People can lookup "plpgsql" in the docs easily
> enough and "PL/pgSQL procedural language" doesn't do anything more than
> expand the acronym.

Removing the comment as a way to deal with our deficiency in this area
strikes me as akin to adding planner hints.

Thanks!

Stephen




Re: [HACKERS] Why does logical replication launcher set application_name?

2017-05-30 Thread Peter Eisentraut
Here is a proposed solution that splits bgw_name into bgw_type and
bgw_name_extra.  bgw_type shows up in pg_stat_activity.backend_type.
Uses of application_name are removed, because they are no longer
necessary to identify the process type.

This code appears to be buggy because I sometimes get NULL results of
the backend_type lookup, implying that it couldn't find the background
worker slot.  This needs another look.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 45444b3362e8d00e998fae195023b8a55d96a005 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Tue, 30 May 2017 22:59:19 -0400
Subject: [PATCH] Split background worker name into type and name

Remove background worker structure field bgw_name and add two new fields
bgw_type and bgw_name_extra.  The bgw_type field is intended to be set
to the same value for all workers of the same type, so they can be
grouped in pg_stat_activity, for example.  The bgw_name_extra field can
be any suffix that is specific to the individual worker.  So bgw_type +
bgw_name_extra is the old bgw_name.

The backend_type column in pg_stat_activity now shows bgw_type for a
background worker.  The ps listing and some log messages no longer call
out that a process is a "background worker" but just show the bgw_type.
That way, being a background worker is effectively an implementation
detail now that is not shown to the user.

Don't set application_name in logical replication launcher or worker
anymore.  Those processes can now be identified using the mechanisms
described above instead.
---
 doc/src/sgml/bgworker.sgml | 17 +---
 src/backend/access/transam/parallel.c  |  3 ++-
 src/backend/postmaster/bgworker.c  | 42 --
 src/backend/postmaster/postmaster.c| 16 +---
 src/backend/replication/logical/launcher.c | 15 +--
 src/backend/replication/logical/worker.c   |  4 ---
 src/backend/utils/adt/pgstatfuncs.c| 29 +++--
 src/include/postmaster/bgworker.h  |  3 ++-
 src/test/modules/test_shm_mq/setup.c   |  2 +-
 src/test/modules/worker_spi/worker_spi.c   | 17 +++-
 10 files changed, 92 insertions(+), 56 deletions(-)

diff --git a/doc/src/sgml/bgworker.sgml b/doc/src/sgml/bgworker.sgml
index b422323081..0a89a5044b 100644
--- a/doc/src/sgml/bgworker.sgml
+++ b/doc/src/sgml/bgworker.sgml
@@ -50,7 +50,8 @@ Background Worker Processes
 typedef void (*bgworker_main_type)(Datum main_arg);
 typedef struct BackgroundWorker
 {
-charbgw_name[BGW_MAXLEN];
+charbgw_type[BGW_MAXLEN];
+charbgw_name_extra[BGW_MAXLEN];
 int bgw_flags;
 BgWorkerStartTime bgw_start_time;
 int bgw_restart_time;   /* in seconds, or BGW_NEVER_RESTART */
@@ -64,8 +65,18 @@ Background Worker Processes
   
 
   
-   bgw_name is a string to be used in log messages, process
-   listings and similar contexts.
+   bgw_type is a string to be used in log messages, process
+   listings and similar contexts.  It should be the same for all background
+   workers of the same type, so that it is possible to group such workers in a
+   process listing, for example.
+  
+
+  
+   bgw_name_extra is a string that is appended
+   to bgw_type in some contexts such as process
+   listings.  Unlike bgw_type, it can contain
+   information that is particular to this process.  The string, if not empty,
+   should normally start with a space or other separator.
   
 
   
diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 2dad3e8a65..fbdcf2da31 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -436,7 +436,8 @@ LaunchParallelWorkers(ParallelContext *pcxt)
 
/* Configure a worker. */
	memset(&worker, 0, sizeof(worker));
-   snprintf(worker.bgw_name, BGW_MAXLEN, "parallel worker for PID %d",
+   snprintf(worker.bgw_type, BGW_MAXLEN, "parallel worker");
+   snprintf(worker.bgw_name_extra, BGW_MAXLEN, " for PID %d",
 MyProcPid);
worker.bgw_flags =
BGWORKER_SHMEM_ACCESS | BGWORKER_BACKEND_DATABASE_CONNECTION
diff --git a/src/backend/postmaster/bgworker.c b/src/backend/postmaster/bgworker.c
index c3454276bf..65318a5fb8 100644
--- a/src/backend/postmaster/bgworker.c
+++ b/src/backend/postmaster/bgworker.c
@@ -342,8 +342,10 @@ BackgroundWorkerStateChange(void)
	 * Copy strings in a paranoid way.  If shared memory is corrupted, the
 * source data might not even be NUL-terminated.
 */
-   ascii_safe_strlcpy(rw->rw_worker.bgw_name,
-  slot->worker.bgw_name, BGW_MAXLEN);
+   ascii_safe_strlcpy(rw->rw_worker.bgw_type,
+  

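As the commit message notes, bgw_type plus bgw_name_extra reproduces the old
bgw_name. A minimal sketch of reassembling the display name, under that
convention (the shim struct, helper, and BGW_MAXLEN value here are
illustrative, not code from the patch):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define BGW_MAXLEN 64   /* illustrative; the real constant lives in bgworker.h */

/* Shim carrying just the two split fields described above. */
typedef struct BackgroundWorkerShim
{
    char bgw_type[BGW_MAXLEN];
    char bgw_name_extra[BGW_MAXLEN];
} BackgroundWorkerShim;

/* Reassemble the old-style full name: the type followed directly by the
 * extra suffix (which, per the docs above, should start with a space or
 * other separator when non-empty). */
static void
build_worker_name(const BackgroundWorkerShim *w, char *buf, size_t buflen)
{
    snprintf(buf, buflen, "%s%s", w->bgw_type, w->bgw_name_extra);
}
```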
Re: [HACKERS] pg_config --version-num

2017-05-30 Thread Tom Lane
Craig Ringer  writes:
> On 31 May 2017 9:36 am, "Michael Paquier"  wrote:
>> Is the data in Makefile.global insufficient?

> It's a pain in the butt because then you need to find or get passed the
> name of Makefile.global. Then you have to template it out into a file. Or
> parse the Makefile. Or create a wrapper program to emit it.
> It's beyond me why we don't expose this at runtime for use in scripts and
> tools. (Then again, the same is true of reporting it in the startup message
> and we know how that's gone).

Hm, but with this you're trading that problem for "is the right version
of pg_config in my PATH?".  I can't see using this in TAP testing, for
instance, because it would never work in "make check" scenarios.

This idea might well be useful for external packages which are always
built/tested against installed versions of Postgres.  But it seems like
we need to think harder about what to do for our own usages, and that
may lead to a different solution altogether.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-30 Thread Amit Langote
On 2017/05/31 9:33, Amit Langote wrote:
> On 2017/05/30 16:38, Jeevan Ladhe wrote:
>> I have rebased the patch on the latest commit.
>> PFA.
> 
> Was looking at the patch

I tried creating default partition of a range-partitioned table and got
the following error:

ERROR:  invalid bound specification for a range partition

I thought it would give:

ERROR: creating default partition is not supported for range partitioned
tables

Which means transformPartitionBound() should perform this check more
carefully.  As I suggested in my previous email, if there were a
is_default field in the PartitionBoundSpec, then one could add the
following block of code at the beginning of transformPartitionBound:


  if (spec->is_default && spec->strategy != PARTITION_STRATEGY_LIST)
      ereport(ERROR,
              (errcode(ERRCODE_INVALID_TABLE_DEFINITION),
               errmsg("creating default partition is not supported for %s partitioned tables",
                      get_partition_strategy_name(key->strategy))));


Some more comments on the patch:

+ errmsg("new default partition constraint is
violated by some row"),

"new default partition constraint" may sound a bit confusing to users.
That we recompute the default partition's constraint and check the "new
constraint" against the rows it contains seems to me to be the description
of internal details.  How about:

ERROR: default partition contains rows that belong to partition being created

+char *ExecBuildSlotValueDescription(Oid reloid,
+  TupleTableSlot *slot,
+  TupleDesc tupdesc,
+  Bitmapset *modifiedCols,
+  int maxfieldlen);

It seems that you made the above public to use it in
check_default_allows_bound(), which while harmless, I'm not sure if
needed.  ATRewriteTable() in tablecmds.c, for example, emits the following
error messages:

errmsg("check constraint \"%s\" is violated by some row",

errmsg("partition constraint is violated by some row")));

but neither outputs the DETAIL part showing exactly what row.  I think
it's fine for check_default_allows_bound() not to show the row itself and
hence no need to make ExecBuildSlotValueDescription public.


In get_rule_expr():

 case PARTITION_STRATEGY_LIST:
 Assert(spec->listdatums != NIL);

+/*
+ * If the boundspec is of Default partition, it does
+ * not have list of datums, but has only one node to
+ * indicate its a default partition.
+ */
+if (isDefaultPartitionBound(
+(Node *) linitial(spec->listdatums)))
+{
+appendStringInfoString(buf, "DEFAULT");
+break;
+}
+

How about adding this part before the switch (key->strategy)?  That way,
we won't have to come back and add this again when we add range default
partitions.

Gotta go; will provide more comments later.

Thanks,
Amit





Re: [HACKERS] pg_config --version-num

2017-05-30 Thread Craig Ringer
On 31 May 2017 9:36 am, "Michael Paquier"  wrote:

On Tue, May 30, 2017 at 6:14 PM, Craig Ringer  wrote:
> Attached is a small patch to teach pg_config how to output a --version-num
>
> With Pg 10, parsing versions got more annoying. Especially with
> "10beta1", "9.6beta2" etc into the mix. It makes no sense to force
> tools and scripts to do this when we can just expose a sensible
> pre-formatted one at no cost to us.
>
> Personally I'd like to backpatch this into supported back branches,
> but just having it in pg 10 would be a help.

The last threads discussing the same subject are here:
https://www.postgresql.org/message-id/CAB7nPqTAdAJpX8iK4V3uYJbO2Kmo8rhzqjkdsladdrannrg...@mail.gmail.com
https://www.postgresql.org/message-id/20161127001648.ga21...@fetter.org
Is the data in Makefile.global insufficient?


It's a pain in the butt because then you need to find or get passed the
name of Makefile.global. Then you have to template it out into a file. Or
parse the Makefile. Or create a wrapper program to emit it.

It's beyond me why we don't expose this at runtime for use in scripts and
tools. (Then again, the same is true of reporting it in the startup message
and we know how that's gone).


Re: [JDBC] [HACKERS] Channel binding support for SCRAM-SHA-256

2017-05-30 Thread Álvaro Hernández Tortosa



On 31/05/17 03:39, Michael Paquier wrote:

On Tue, May 30, 2017 at 5:59 PM, Robert Haas  wrote:

That sounds like undue optimism to me.  Unless somebody's tested that
Michael's proposed implementation, which uses undocumented OpenSSL
APIs, actually interoperates properly with a SCRAM + channel binding
implementation based on some other underlying SSL implementation, we
can't really know that it's going to work.  It's not like we're
calling SSL_do_the_right_thing_for_channel_binding_thing_per_rfc5929().
We're calling SSL_do_something_undocumented() and hoping that
something_undocumented ==
the_right_thing_for_channel_binding_thing_per_rfc5929.  Could be true,
but without actual interoperability testing it sounds pretty
speculative to me.

Hm? Python has been using that stuff for a number of years now, for
the same goal of providing channel binding for SSL authentication. You
can look at this thread:
https://bugs.python.org/issue12551
So calling that method speculative sounds rather harsh to me.


Actually this Python patch seems to ultimately rely on the same 
OpenSSL functions as your implementation.


I don't have anything against implementing tls-unique, especially as 
per the RFC it is mandatory (if channel binding support is provided). 
However, given the use of undocumented APIs, and given that at least the Java 
driver would have a very difficult time implementing it (basically 
replacing Java's standard SSL stack with a completely external one, 
BouncyCastle, which adds load, size and complexity), I would rather 
focus the efforts on tls-server-end-point which only needs to access the 
certificate data, something that most APIs/frameworks/languages seem to 
cover much more naturally than tls-unique.


Both are equally safe and effective and so having support for both 
seems sensible to me.
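For context, RFC 5929 defines tls-server-end-point as a hash of the server
certificate, taking the hash function from the certificate's signature
algorithm but substituting SHA-256 whenever that algorithm uses MD5 or SHA-1.
A sketch of just that mapping rule follows; the function name and the string
representation of algorithms are illustrative, not from any proposed patch:

```c
#include <assert.h>
#include <string.h>

/* RFC 5929, section 4.1: for tls-server-end-point, hash the certificate
 * with the hash from its signature algorithm, except that MD5 and SHA-1
 * are replaced by SHA-256 (their collision weaknesses make them unsuitable
 * as binding material). */
static const char *
cb_hash_for_tls_server_end_point(const char *cert_sig_hash)
{
    if (strcmp(cert_sig_hash, "md5") == 0 ||
        strcmp(cert_sig_hash, "sha1") == 0)
        return "sha256";
    return cert_sig_hash;
}
```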



Álvaro

--

Álvaro Hernández Tortosa







Re: [HACKERS] TAP backpatching policy

2017-05-30 Thread Michael Paquier
On Tue, May 30, 2017 at 6:12 PM, Craig Ringer  wrote:
> Thoughts? Backpatch new TAP methods, etc, into back branches routinely?

FWIW, I'd love to see a committer helping into getting this facility
back-patched at least to 9.5. The least I can do is a commitment as a
reviewer of such a patch. So count me in to help out.
-- 
Michael




Re: [JDBC] [HACKERS] Channel binding support for SCRAM-SHA-256

2017-05-30 Thread Michael Paquier
On Tue, May 30, 2017 at 5:59 PM, Robert Haas  wrote:
> That sounds like undue optimism to me.  Unless somebody's tested that
> Michael's proposed implementation, which uses undocumented OpenSSL
> APIs, actually interoperates properly with a SCRAM + channel binding
> implementation based on some other underlying SSL implementation, we
> can't really know that it's going to work.  It's not like we're
> calling SSL_do_the_right_thing_for_channel_binding_thing_per_rfc5929().
> We're calling SSL_do_something_undocumented() and hoping that
> something_undocumented ==
> the_right_thing_for_channel_binding_thing_per_rfc5929.  Could be true,
> but without actual interoperability testing it sounds pretty
> speculative to me.

Hm? Python has been using that stuff for a number of years now, for
the same goal of providing channel binding for SSL authentication. You
can look at this thread:
https://bugs.python.org/issue12551
So calling that method speculative sounds rather harsh to me.
-- 
Michael




Re: [HACKERS] pg_config --version-num

2017-05-30 Thread Michael Paquier
On Tue, May 30, 2017 at 6:14 PM, Craig Ringer  wrote:
> Attached is a small patch to teach pg_config how to output a --version-num
>
> With Pg 10, parsing versions got more annoying. Especially with
> "10beta1", "9.6beta2" etc into the mix. It makes no sense to force
> tools and scripts to do this when we can just expose a sensible
> pre-formatted one at no cost to us.
>
> Personally I'd like to backpatch this into supported back branches,
> but just having it in pg 10 would be a help.

The last threads discussing the same subject are here:
https://www.postgresql.org/message-id/cab7npqtadajpx8ik4v3uyjbo2kmo8rhzqjkdsladdrannrg...@mail.gmail.com
https://www.postgresql.org/message-id/20161127001648.ga21...@fetter.org
Is the data in Makefile.global insufficient?
-- 
Michael




Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-30 Thread Jeevan Ladhe
Thanks Amit for your comments.

On 31-May-2017 6:03 AM, "Amit Langote"  wrote:

Hi Jeevan,

On 2017/05/30 16:38, Jeevan Ladhe wrote:
> I have rebased the patch on the latest commit.
> PFA.

Was looking at the patch and felt that the parse node representation of
default partition bound could be slightly different.  Can you explain the
motivation behind implementing it without adding a new member to the
PartitionBoundSpec struct?

I would suggest instead adding a bool named is_default and be done with
it.  It will help get rid of the public isDefaultPartitionBound() in the
proposed patch whose interface isn't quite clear and instead simply check
if (spec->is_default) in places where it's called by passing it (Node *)
linitial(spec->listdatums).


I thought of reusing the existing members of PartitionBoundSpec, but I
agree that having a bool could simplify the code. Will make the respective
change.

Further looking into the patch, I found a tiny problem in
check_default_allows_bound().  If the default partition that will be
scanned by it is a foreign table or a partitioned table with a foreign
leaf partition, you will get a failure like:

-- default partition is a foreign table
alter table p attach partition fp default;

-- adding a new partition will try to scan fp above
alter table p attach partition p12 for values in (1, 2);
ERROR:  could not open file "base/13158/16456": No such file or directory

I think the foreign tables should be ignored here to avoid the error.  The
fact that foreign default partition may contain data that satisfies the
new partition's constraint is something we cannot do much about.  Also,
see the note in ATTACH PARTITION description regarding foreign tables [1]
and the discussion at [2].


Will look into this.

Regards,
Jeevan Ladhe


[HACKERS] pg_config --version-num

2017-05-30 Thread Craig Ringer
Hi all

Attached is a small patch to teach pg_config how to output a --version-num

With Pg 10, parsing versions got more annoying. Especially with
"10beta1", "9.6beta2" etc into the mix. It makes no sense to force
tools and scripts to do this when we can just expose a sensible
pre-formatted one at no cost to us.

Personally I'd like to backpatch this into supported back branches,
but just having it in pg 10 would be a help.
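The numeric form sidesteps string parsing entirely. A minimal sketch of
consuming it, using the PG_VERSION_NUM conventions (pre-10 releases encode
major*10000 + minor*100 + revision, e.g. 90602 is 9.6.2; version 10 and later
encode major*10000 + minor, e.g. 100001 is 10.1); the struct and function
names here are illustrative:

```c
#include <assert.h>

/* Decompose the integer printed by a hypothetical "pg_config --version-num". */
typedef struct PgVersion
{
    int major;  /* 9, 10, ... */
    int minor;  /* second component: 6 in 9.6, 1 in 10.1 */
} PgVersion;

static PgVersion
decompose_version_num(int version_num)
{
    PgVersion v;

    v.major = version_num / 10000;
    if (version_num >= 100000)
        v.minor = version_num % 10000;          /* new two-part scheme */
    else
        v.minor = (version_num / 100) % 100;    /* old scheme; drops revision */
    return v;
}
```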

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From 87647fab7f8ce607de4cfc098cd9a8519149bc31 Mon Sep 17 00:00:00 2001
From: Craig Ringer 
Date: Wed, 31 May 2017 09:10:33 +0800
Subject: [PATCH v1] Provide numeric version from pg_config

Add pg_config --version-num to print integer version, for scripts/tools
---
 src/bin/pg_config/pg_config.c | 2 ++
 src/common/config_info.c  | 6 +-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/src/bin/pg_config/pg_config.c b/src/bin/pg_config/pg_config.c
index fa2a5a9..3e576fa 100644
--- a/src/bin/pg_config/pg_config.c
+++ b/src/bin/pg_config/pg_config.c
@@ -64,6 +64,7 @@ static const InfoItem info_items[] = {
 	{"--ldflags_sl", "LDFLAGS_SL"},
 	{"--libs", "LIBS"},
 	{"--version", "VERSION"},
+	{"--version-num", "VERSION_NUM"},
 	{NULL, NULL}
 };
 
@@ -100,6 +101,7 @@ help(void)
 	printf(_("  --ldflags_sl  show LDFLAGS_SL value used when PostgreSQL was built\n"));
 	printf(_("  --libsshow LIBS value used when PostgreSQL was built\n"));
 	printf(_("  --version show the PostgreSQL version\n"));
+	printf(_("  --version-num show the PostgreSQL version in integer form\n"));
 	printf(_("  -?, --helpshow this help, then exit\n"));
 	printf(_("\nWith no arguments, all known items are shown.\n\n"));
 	printf(_("Report bugs to .\n"));
diff --git a/src/common/config_info.c b/src/common/config_info.c
index ad506be..e056f91 100644
--- a/src/common/config_info.c
+++ b/src/common/config_info.c
@@ -39,7 +39,7 @@ get_configdata(const char *my_exec_path, size_t *configdata_len)
 	int			i = 0;
 
 	/* Adjust this to match the number of items filled below */
-	*configdata_len = 23;
+	*configdata_len = 24;
 	configdata = (ConfigData *) palloc(*configdata_len * sizeof(ConfigData));
 
 	configdata[i].name = pstrdup("BINDIR");
@@ -200,6 +200,10 @@ get_configdata(const char *my_exec_path, size_t *configdata_len)
 	configdata[i].setting = pstrdup("PostgreSQL " PG_VERSION);
 	i++;
 
+	configdata[i].name = pstrdup("VERSION_NUM");
+	configdata[i].setting = pstrdup(CppAsString2(PG_VERSION_NUM));
+	i++;
+
 	Assert(i == *configdata_len);
 
 	return configdata;
-- 
2.9.4
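The patch emits the integer via CppAsString2(PG_VERSION_NUM), the classic
two-level stringification trick: the outer macro forces expansion of its
argument before the inner macro's # operator turns it into a string literal.
A sketch of how that behaves (the AS_STRING names and the PG_VERSION_NUM
value below are stand-ins, not the real definitions from c.h/pg_config.h):

```c
#include <assert.h>
#include <string.h>

#define AS_STRING(x)   #x            /* stringizes without expanding */
#define AS_STRING2(x)  AS_STRING(x)  /* extra level forces expansion first */

#define PG_VERSION_NUM 100000        /* hypothetical stand-in value */
```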




[HACKERS] TAP backpatching policy

2017-05-30 Thread Craig Ringer
Hi all

I propose that we backpatch all practical TAP enhancements back to the
last supported back branch.

At the moment that'd be 9.5, since that's where PostgresNode was
introduced. But if I can find the time I'd quite like to backport
PostgresNode to 9.4 too.

Where needed, PostgresNode could have version-dependent logic so we
could just copy the Perl module to the different branches.

This would make it much easier to test for bugs when doing work on
back branches, run the same test on multiple branches, etc. My
personal motivation is mainly using it in extensions, but I've
_frequently_ found myself writing TAP tests when looking into possible
Pg bugs etc too. Not having the same facilities in the different
branches makes this way harder.

If combined with pg_config --version-num (which IMO deserves a
backpatch, especially now that Pg 10 makes parsing our versions harder) this
would make it a lot more practical to do nontrivial tests in
extensions - which really matters since we introduced bgworkers.

Thoughts? Backpatch new TAP methods, etc, into back branches routinely?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [JDBC] [HACKERS] Channel binding support for SCRAM-SHA-256

2017-05-30 Thread Robert Haas
On Tue, May 30, 2017 at 1:00 PM, Stephen Frost  wrote:
> All-in-all, this sounds like it's heading in the right direction, at
> least at a high level.  Glad to see that there's been consideration of
> other TLS implementations, and as such I don't think we need to be
> overly concerned about the specifics of the OpenSSL API here.

That sounds like undue optimism to me.  Unless somebody's tested that
Michael's proposed implementation, which uses undocumented OpenSSL
APIs, actually interoperates properly with a SCRAM + channel binding
implementation based on some other underlying SSL implementation, we
can't really know that it's going to work.  It's not like we're
calling SSL_do_the_right_thing_for_channel_binding_thing_per_rfc5929().
We're calling SSL_do_something_undocumented() and hoping that
something_undocumented ==
the_right_thing_for_channel_binding_thing_per_rfc5929.  Could be true,
but without actual interoperability testing it sounds pretty
speculative to me.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Patch: Add --no-comments to skip COMMENTs with pg_dump

2017-05-30 Thread David G. Johnston
On Fri, May 26, 2017 at 7:47 AM, Stephen Frost  wrote:

> Greetings,
>
> * Robins Tharakan (thara...@gmail.com) wrote:
> > Attached is a patch adds a --no-comments argument to pg_dump to skip
> > generation of COMMENT statements when generating a backup. This is
> crucial
> > for non-superusers to restore a database backup in a Single Transaction.
> > Currently, this requires one to remove COMMENTs via scripts, which is
> > inelegant at best.
>
> Having --no-comments seems generally useful to me, in any case.
>

It smacks of being excessive to me.

> CREATE EXTENSION IF NOT EXISTS plpgsql ... COMMENT blah;
>

A less verbose way to add comments to objects would be nice, but we have an
immediate problem that we either need to solve or document a best practice
for.

COMMENT IF NOT EXISTS has been brought up but it doesn't actually map to
what seems to me to be the underlying problem...that people don't want a
non-functional (usually...) aspect preventing successful restoration.

COMMENT ON object TRY 'text'  -- i.e., replace the word IS with TRY

If the attempt to comment fails for any reason log a warning (maybe) but
otherwise ignore the result and continue on without invoking an error.

One suggestion I've seen is to simply "COMMENT ON EXTENSION plpgsql IS
NULL;"  If that works in the scenarios people are currently dealing with
then I'd say we should advise that such an action be taken for those whom
wish to generate dumps that can be loaded by non-super-users.  If the
affected users cannot make that work then maybe we should just remove the
comment from the extension.  People can lookup "plpgsql" in the docs easily
enough and "PL/pgSQL procedural language" doesn't do anything more than
expand the acronym.

David J.


Re: [HACKERS] Segmentation fault when creating a BRIN, 10beta1

2017-05-30 Thread Andres Freund
On 2017-05-30 18:21:10 -0400, Alvaro Herrera wrote:
> Alexander Sosna wrote:
> > Hi,
> > 
> > I can reproduce a segmentation fault when creating a BRIN concurrently
> > with set pages_per_range and autosummarize.
> 
> Pushed fix just now.  Please give it a try.  Thanks for testing and
> reporting,

Shouldn't this have been uncovered by a regression test? In other words,
do I understand correctly that the new summarization stuff is largely
not tested by regression tests?

- Andres




[HACKERS] TAP: allow overriding PostgresNode in get_new_node

2017-05-30 Thread Craig Ringer
Hi all

More and more I'm finding it useful to extend PostgresNode for
project-specific helper classes. But PostgresNode::get_new_node is a factory
that doesn't provide any mechanism for overriding, so you have to
create a PostgresNode then re-bless it as your desired subclass. Ugly.

The attached patch allows an optional second argument, a class name,
to be passed to PostgresNode::get_new_node . It's instantiated instead
of PostgresNode if supplied. Its 'new' constructor must take the same
arguments.

BTW, it strikes me that we should really template a Perl file with the
postgres version. Or finally add a --version-num to pg_config so we
don't have to do version parsing. When you're writing extension TAP
tests you often need to know the target postgres version, and parsing
our version strings, already annoying, got even more so with Pg 10. I
prefer adding --version-num to pg_config. Any objections? Will submit
patch if none.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From 8a19793c89b165c25c88cd7149650e20ef27bd55 Mon Sep 17 00:00:00 2001
From: Craig Ringer 
Date: Tue, 30 May 2017 15:18:00 +0800
Subject: [PATCH v1] Allow creation of PostgresNode subclasses from
 get_new_node

---
 src/test/perl/PostgresNode.pm | 12 ++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 42e66ed..9f28963 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -853,7 +853,7 @@ sub _update_pid
 
 =pod
 
-=item get_new_node(node_name)
+=item get_new_node(node_name, node_class)
 
 Build a new PostgresNode object, assigning a free port number. Standalone
 function that's automatically imported.
@@ -863,14 +863,20 @@ node, and to ensure that it gets shut down when the test script exits.
 
 You should generally use this instead of PostgresNode::new(...).
 
+If you have a subclass of PostgresNode you want created, pass its name as the
+second argument. By default a 'PostgresNode' is created.
+
 =cut
 
 sub get_new_node
 {
 	my $name  = shift;
+	my $pgnclass  = shift;
 	my $found = 0;
 	my $port  = $last_port_assigned;
 
+	$pgnclass = 'PostgresNode' unless defined $pgnclass;
+
 	while ($found == 0)
 	{
 
@@ -911,7 +917,9 @@ sub get_new_node
 	print "# Found free port $port\n";
 
 	# Lock port number found by creating a new node
-	my $node = new PostgresNode($name, $test_pghost, $port);
+	my $node = $pgnclass->new($name, $test_pghost, $port);
+
+	die "$pgnclass must have PostgresNode as a base" if (not ($node->isa("PostgresNode")));
 
 	# Add node to list of nodes
 	push(@all_nodes, $node);
-- 
2.9.4




Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-30 Thread Amit Langote
Hi Jeevan,

On 2017/05/30 16:38, Jeevan Ladhe wrote:
> I have rebased the patch on the latest commit.
> PFA.

Was looking at the patch and felt that the parse node representation of
default partition bound could be slightly different.  Can you explain the
motivation behind implementing it without adding a new member to the
PartitionBoundSpec struct?

I would suggest instead adding a bool named is_default and be done with
it.  It will help get rid of the public isDefaultPartitionBound() in the
proposed patch whose interface isn't quite clear and instead simply check
if (spec->is_default) in places where it's called by passing it (Node *)
linitial(spec->listdatums).

Further looking into the patch, I found a tiny problem in
check_default_allows_bound().  If the default partition that will be
scanned by it is a foreign table or a partitioned table with a foreign
leaf partition, you will get a failure like:

-- default partition is a foreign table
alter table p attach partition fp default;

-- adding a new partition will try to scan fp above
alter table p attach partition p12 for values in (1, 2);
ERROR:  could not open file "base/13158/16456": No such file or directory

I think the foreign tables should be ignored here to avoid the error.  The
fact that foreign default partition may contain data that satisfies the
new partition's constraint is something we cannot do much about.  Also,
see the note in ATTACH PARTITION description regarding foreign tables [1]
and the discussion at [2].

Thanks,
Amit

[1] https://www.postgresql.org/docs/devel/static/sql-altertable.html
[2]
https://www.postgresql.org/message-id/flat/8f89dcb2-bd15-d8dc-5f54-3e11dc6c9463%40lab.ntt.co.jp





[HACKERS] An incomplete comment sentence in subtrans.c

2017-05-30 Thread Masahiko Sawada
Hi,

There is an incomplete sentence at the top of the subtrans.c file. I think
commit 88e66d19 removed the whole line by mistake.

Attached patch fixes this.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


fix_comment_subtrans_c.patch



Re: [HACKERS] PATCH: recursive json_populate_record()

2017-05-30 Thread Nikita Glukhov

On 30.05.2017 02:31, Tom Lane wrote:


Nikita Glukhov  writes:

2. Also I've found some kind of thinko in the JsGetObjectSize() macro, but it
seems that this obvious mistake can not lead to incorrect behavior.

Hm, I think it actually was wrong for the case of jsonb_cont == NULL,
wasn't it?


Yes, JsObjectIsEmpty() returned a false-negative answer for jsonb_cont == NULL,
but this could only lead to suboptimal behavior of populate_record() when the
default record value was given.


But maybe that case didn't arise for the sole call site.

It also seems to me that populate_record() can't be called now with
jsonb_cont == NULL from the existing call sites.

--
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [HACKERS] Use of non-restart-safe storage by temp_tablespaces

2017-05-30 Thread Mark Dilger

> On May 29, 2017, at 11:53 AM, Bruce Momjian  wrote:
> 
> Right now we don't document that temp_tablespaces can use
> non-restart-safe storage, e.g. /tmp, ramdisks.  Would this be safe? 
> Should we document this?

The only safe way to do temporary tablespaces that I have found is to extend
the grammar to allow CREATE TEMPORARY TABLESPACE, and then refuse
to allow the creation of any non-temporary table (or index on same) in that
tablespace.  Otherwise, it is too easy to be surprised to discover that your
table contents have gone missing.

If the project wanted to extend the grammar and behavior in this direction,
maybe temp_tablespaces would work that way?  I'm not so familiar with what
the temp_tablespaces GUC is for -- only ever implemented what I described
above.

Mark Dilger




Re: [HACKERS] syscache entries out of order

2017-05-30 Thread Tom Lane
Mark Dilger  writes:
> Just FYI, as of 665d1fad99e7b11678b0d5fa24d2898424243cd6, syscache.h
> entries are not in alphabetical order, which violates the coding standard
> specified in the comment for these entries.  In particular, it is the
> PUBLICATION and SUBSCRIPTION entries that are mis-ordered.

Agreed, this is the project convention and we should follow it.

> Sorry for my pedantry.  Patch attached.

Patch pushed.

regards, tom lane




Re: [HACKERS] Race conditions with WAL sender PID lookups

2017-05-30 Thread Thomas Munro
On Fri, May 19, 2017 at 2:05 PM, Michael Paquier
 wrote:
> On Thu, May 18, 2017 at 1:43 PM, Thomas Munro
>  wrote:
>> On Thu, May 11, 2017 at 1:48 PM, Michael Paquier
>>  wrote:
>>> I had my eyes on the WAL sender code this morning, and I have noticed
>>> that walsender.c is not completely consistent with the PID lookups it
>>> does in walsender.c. In two code paths, the PID value is checked
>>> without holding the WAL sender spin lock (WalSndRqstFileReload and
>>> pg_stat_get_wal_senders), which looks like a very bad idea contrary to
>>> what the new WalSndWaitStopping() does and what InitWalSenderSlot() is
>>> doing for ages.
>>
>> There is also code that accesses shared walsender state without
>> spinlocks over in syncrep.c.  I think that file could use a few words
>> of explanation for why it's OK to access pid, state and flush without
>> synchronisation.
>
> Yes, that is read during the quorum and priority sync evaluation.
> Except for sync_standby_priority, all the other variables should be
> protected using the spin lock of the WAL sender. walsender_private.h
> is clear regarding that. So the current coding is inconsistent even
> there. Attached is an updated patch.

I don't claim that that stuff is wrong, just that it would be good to
hear an analysis.  I think the question is: is there a way for syncrep
to hang because of a perfectly timed walsender transition, however
vanishingly unlikely?  I'm thinking of something like: syncrep skips a
walsender slot because it's looking at arbitrarily old 'pid' from
before a walsender connected, or gets a torn read of 'flush' that
comes out as 0 but was actually non-0.

Incidentally, I suspect that a couple of places where 'volatile' is
used it's superfluous (accessing things protected by an LWLock that is
held).

I don't see any of this as 'open item' material, it's interesting to
look into but it's preexisting code.  As for unlocked reads used for
pg_stat_X views, it seems well established that we're OK with that (at
least for things that the project has decided can be read atomically,
to wit aligned 32 bit values).

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Segmentation fault when creating a BRIN, 10beta1

2017-05-30 Thread Alvaro Herrera
Alexander Sosna wrote:
> Hi,
> 
> I can reproduce a segmentation fault when creating a BRIN concurrently
> with set pages_per_range and autosummarize.

Pushed fix just now.  Please give it a try.  Thanks for testing and
reporting,

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [PATCH] relocation truncated to fit: citus build failure on s390x

2017-05-30 Thread Robert Haas
On Tue, May 30, 2017 at 4:38 PM, Tom Lane  wrote:
> It'd be interesting if people could gather similar numbers on other
> platforms of more real-world relevance, such as ppc64.  But based on
> this small sample, I wouldn't object to just going to -fPIC across
> the board.

That seems pretty sensible to me.  I think we should aim for a
configuration that works by default.  If we use -fPIC where -fpic
would have been better, the result is that on some platforms there
might be a speed penalty, probably small.  In the reverse situation,
the build fails.  I believe that the average PostgreSQL extension
developer will prefer the first situation.  If somebody has got an
extension which is small enough to use -fpic and that person cares
about minimizing the performance penalty on s390 or other platforms
where they have this problem, then they can arrange an override in the
opposite direction.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] ECPG: pg_type.h file is not synced

2017-05-30 Thread Tom Lane
Bruce Momjian  writes:
>>> But I think the "ecpg/ecpglib/pg_type.h" file is currently not synced
>>> with the above file.

> Should we add some Makefile magic to make sure they stay in sync?

Not so much that as teach genbki.pl to generate this file too.
I think that might be a good idea, but personally I'd leave it until
after the genbki rewrite we keep talking about.  This isn't important
enough to make that rewrite have an even higher hurdle to clear.

regards, tom lane




Re: [HACKERS] ECPG: pg_type.h file is not synced

2017-05-30 Thread Bruce Momjian
On Tue, May 23, 2017 at 10:25:13AM +0200, Michael Meskes wrote:
> Hi,
> 
> > I am looking at ECPG source code. In the "ecpg/ecpglib/pg_type.h" file I
> > have seen following comment:
> > 
> > ** keep this in sync with src/include/catalog/pg_type.h*
> > 
> > But I think the "ecpg/ecpglib/pg_type.h" file is currently not synced
> > with the above file.
> > 
> > I have added the remaining types in the attached patch.
> > 
> > I would like to know whether we can add remaining types in the
> > "ecpg/ecpglib/pg_type.h" file or not?
> 
> It's not too big a deal as neither of the new OIDs is used in ecpg, but
> still, the file should be up to date.
> 
> Thanks for the patch, committed.

Should we add some Makefile magic to make sure they stay in sync?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] [PATCH] relocation truncated to fit: citus build failure on s390x

2017-05-30 Thread Tom Lane
I wrote:
> Very possibly true, but I wish we had some hard facts and not just
> guesses.

As a simple but on-point test, I compared sizes of postgres_fdw.so
built with -fpic and -fPIC.  I no longer have access to a wide variety
of weird architectures, but on what I do have in my office:

x86_64, RHEL6:

-fpic:
   text    data     bss     dec     hex filename
  85467    2632      64   88163   15863 postgres_fdw.so
-fPIC:
   text    data     bss     dec     hex filename
  85467    2632      64   88163   15863 postgres_fdw.so

This seems to confirm Andres' opinion that it makes no difference on
x86_64.

PPC, FreeBSD 10.3:

-fpic:
   text    data     bss     dec     hex filename
  86638     420      32   87090   15432 postgres_fdw.so
-fPIC:
   text    data     bss     dec     hex filename
  86474    1860      32   88366   1592e postgres_fdw.so
that's a 1.47% penalty

PPC, NetBSD 5.1.5:

-fpic:
   text    data     bss     dec     hex filename
  81880     420      56   82356   141b4 postgres_fdw.so
-fPIC:
   text    data     bss     dec     hex filename
  81688    2044      56   83788   1474c postgres_fdw.so
that's a 1.74% penalty

HPPA, HPUX 10.20:

-fpic:
   text    data     bss     dec     hex filename
  97253   17296       8  114557   1bf7d postgres_fdw.sl
-fPIC:
   text    data     bss     dec     hex filename
 102629   17320       8  119957   1d495 postgres_fdw.sl
that's a 4.7% penalty


It's somewhat noteworthy that the PPC builds show a large increase in data
segment size.  That likely corresponds to relocatable pointer fields that
have to be massaged by the dynamic linker during shlib load.  Thus one
could speculate that shlib load might be noticeably slower with -fPIC,
at least on PPC.  But people rarely complain about the speed of that
anyway, so it's unlikely to be worth worrying about.

It'd be interesting if people could gather similar numbers on other
platforms of more real-world relevance, such as ppc64.  But based on
this small sample, I wouldn't object to just going to -fPIC across
the board.

regards, tom lane




[HACKERS] syscache entries out of order

2017-05-30 Thread Mark Dilger
Peter,

Just FYI, as of 665d1fad99e7b11678b0d5fa24d2898424243cd6, syscache.h
entries are not in alphabetical order, which violates the coding standard
specified in the comment for these entries.  In particular, it is the
PUBLICATION and SUBSCRIPTION entries that are mis-ordered.

Sorry for my pedantry.  Patch attached.

Mark Dilger



syscache.patch.0
Description: Binary data



Re: [HACKERS] Segmentation fault when creating a BRIN, 10beta1

2017-05-30 Thread Alvaro Herrera
Tom Lane wrote:
> Alexander Sosna  writes:
> > I can reproduce a segmentation fault when creating a BRIN concurrently
> > with set pages_per_range and autosummarize.
> 
> I wonder if this isn't the same issue reported in
> https://www.postgresql.org/message-id/flat/20170524063323.29941.46339%40wrigleys.postgresql.org
> 
> Could you try the patch suggested there?

With the patch, it crashes immediately for me.  Looking into it now.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [JDBC] [HACKERS] Channel binding support for SCRAM-SHA-256

2017-05-30 Thread Daniel Gustafsson
> On 30 May 2017, at 16:50, Tom Lane  wrote:
> 
> Robert Haas  writes:
>> On Sat, May 27, 2017 at 5:59 PM, Álvaro Hernández Tortosa
>>  wrote:
>>> - tls-unique, as you mentioned, uses two undocumented APIs. This raises a
>>> small flag about the stability and future of those APIs.
> 
>> It seems to me that the question is not just whether those APIs will
>> be available in future versions of OpenSSL, but whether they will be
>> available in every current and future version of every SSL
>> implementation that we may wish to use in core or that any client may
>> wish to use.  We've talked before about being able to use the Windows
>> native SSL implementation rather than OpenSSL and it seems that there
>> would be significant advantages in having that capability.
> 
> Another thing of the same sort that should be on our radar is making
> use of Apple's TLS code on macOS.  The handwriting on the wall is
> unmistakable that they intend to stop shipping OpenSSL before long,
> and I do not think we really want to be in a position of having to
> bundle OpenSSL into our distribution on macOS.
> 
> I'm not volunteering to do that, mind you.  But +1 for not tying new
> features to any single TLS implementation.

Big +1.  The few settings we have already make it hard to provide other
implementations as drop-in replacements (Secure Transport doesn’t support
.crl files, for example, only CRLs loaded in Keychains).

cheers ./daniel



Re: [JDBC] [HACKERS] Channel binding support for SCRAM-SHA-256

2017-05-30 Thread Daniel Gustafsson
> On 30 May 2017, at 18:25, Michael Paquier  wrote:
> 
> On macos though it is another story, I am not seeing anything:
> https://developer.apple.com/reference/security/secure_transport#symbols

The Secure Transport documentation unfortunately excels at being incomplete and
hard to search in.  That being said, I think you’re right, AFAICT there is no
published API for this (SSLProcessFinished() seems like the best match but is
not public).

> Depending on the SSL implementation the server is compiled with, it
> will be up to the backend to decide if it should advertise to the
> client the -PLUS mechanism or not, so we can stay modular here.

+1

cheers ./daniel



Re: [HACKERS] "create publication..all tables" ignore 'partition not supported' error

2017-05-30 Thread Masahiko Sawada
On Tue, May 30, 2017 at 1:42 PM, Amit Langote
 wrote:
> On 2017/05/22 20:02, Kuntal Ghosh wrote:
>> Yeah, it's a bug. While showing the table definition, we use the
>> following query for showing the related publications:
>> "SELECT pub.pubname\n"
>>   " FROM pg_catalog.pg_publication pub\n"
>>   " LEFT JOIN pg_catalog.pg_publication_rel pr\n"
>>   "  ON (pr.prpubid = pub.oid)\n"
>>   "WHERE pr.prrelid = '%s' OR pub.puballtables\n"
>>   "ORDER BY 1;"
>>
>> When pub.puballtables is TRUE, we should also check whether the
>> relation is publishable or not. (Something like is_publishable_class
>> method in pg_publication.c).
>
> BTW, you can see the following too, because of the quoted query:
>
> create publication pub2 for all tables;
>
> -- system tables are not publishable, but...
> \d pg_class
> 
> Publications:
> "pub2"
>
> -- unlogged tables are not publishable, but...
> create unlogged table foo (a int);
> \d foo
> 
> Publications:
> "pub2"
>
> -- temp tables are not publishable, but...
> create temp table bar (a int);
> \d bar
> 
> Publications:
> "pub2"
>
> The above contradicts what check_publication_add_tables() thinks are
> publishable relations.
>
> At first, I thought this would be fixed by simply adding a check on
> relkind and relpersistence in the above query, both of which are available
> to describeOneTableDetails() already.  But, it won't be possible to check
> the system table status using the available information, so we might need
> to add a SQL-callable version of is_publishable_class() after all.

I'd say we can fix this issue by just changing the query. The attached
patch changes the query so that it reports publication membership
correctly; the query gets more complex, though.
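Just to illustrate the direction, in the string-building style psql's describe.c uses: the WHERE additions below are a sketch, not the actual patch, and as noted elsewhere in the thread the system-catalog case would still need a SQL-callable is_publishable_class(). The schema filter here only approximates it via relkind and relpersistence.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustrative only: a describe-query in the style of psql's
 * describe.c, extended so that "FOR ALL TABLES" publications are
 * reported only for ordinary permanent tables (relkind 'r',
 * relpersistence 'p'), not for unlogged or temp tables.
 */
static int
build_publication_query(char *buf, size_t buflen, const char *reloid)
{
	return snprintf(buf, buflen,
					"SELECT pub.pubname\n"
					"FROM pg_catalog.pg_publication pub\n"
					"LEFT JOIN pg_catalog.pg_publication_rel pr\n"
					"  ON (pr.prpubid = pub.oid)\n"
					"WHERE pr.prrelid = '%s'\n"
					"   OR (pub.puballtables\n"
					"       AND EXISTS (SELECT 1 FROM pg_catalog.pg_class c\n"
					"                   WHERE c.oid = '%s'\n"
					"                     AND c.relkind = 'r'\n"
					"                     AND c.relpersistence = 'p'))\n"
					"ORDER BY 1;",
					reloid, reloid);
}
```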

>>
>> However, I'm not sure whether this is the correct way to solve the problem.
>
> AFAICS, the problem is with psql's describe query itself and no other
> parts of the system are affected by this.

I think so too.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


psql_publication.patch
Description: Binary data



Re: [HACKERS] Re: Create subscription with `create_slot=false` and incorrect slot name

2017-05-30 Thread Peter Eisentraut
On 5/29/17 22:26, Noah Misch wrote:
> On Fri, May 26, 2017 at 05:05:37PM -0400, Peter Eisentraut wrote:
>> On 5/25/17 19:16, Petr Jelinek wrote:
>>>> The reported error is just one of many errors that can happen when DROP
>>>> SUBSCRIPTION tries to drop the slot (doesn't exist, still active, no
>>>> permission, etc.).  We don't want to give the hint that is effectively
>>>> "just forget about the slot then" for all of them.  So we would need
>>>> some way to distinguish "you can't do this right now" from "this would
>>>> never work" (400 vs 500 errors).

>>> This specific error returns ERRCODE_UNDEFINED_OBJECT error code so we
>>> could check for it and offer hint only for this case.
>>
>> We would have to extend the walreceiver interface a little to pass that
>> through, but it seems doable.
> 
> [Action required within three days.  This is a generic notification.]

I have just posted a patch earlier in this thread with a possible
solution.  If people don't like that approach, then we can pursue the
approach with error codes discussed above.  But I would punt that to
PG11 in that case.  I will report back on Friday.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Create subscription with `create_slot=false` and incorrect slot name

2017-05-30 Thread Peter Eisentraut
On 5/25/17 17:26, Peter Eisentraut wrote:
> Another way to fix this particular issue is to not verify the
> replication slot name before doing the drop.  After all, if the name is
> not valid, then you can also just report that it doesn't exist.

Here is a possible patch along these lines.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 1e09473bb2f571af0a6a8e4ca718d291efa63772 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Tue, 30 May 2017 14:57:01 -0400
Subject: [PATCH] Remove replication slot name check from
 ReplicationSlotAcquire()

When trying to access a replication slot that is supposed to already
exist, we don't need to check the naming rules again.  If the slot
does not exist, we will then get a "does not exist" error message, which
is generally more useful from the perspective of an end user.
---
 src/backend/replication/slot.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c
index 5386e86aa6..c0f7fbb2b2 100644
--- a/src/backend/replication/slot.c
+++ b/src/backend/replication/slot.c
@@ -331,8 +331,6 @@ ReplicationSlotAcquire(const char *name)
 
 	Assert(MyReplicationSlot == NULL);
 
-	ReplicationSlotValidateName(name, ERROR);
-
 	/* Search for the named slot and mark it active if we find it. */
 	LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);
 	for (i = 0; i < max_replication_slots; i++)
-- 
2.13.0




Re: [HACKERS] Race conditions with WAL sender PID lookups

2017-05-30 Thread Michael Paquier
On Tue, May 30, 2017 at 11:09 AM, Peter Eisentraut
 wrote:
> I don't think this is my item.  Most of the behavior is old, and
> pg_stat_get_wal_receiver() is from commit
> b1a9bad9e744857291c7d5516080527da8219854.
>
> I would appreciate if another committer can take the lead on this.

Those things are on Alvaro's plate for the WAL receiver portion, and I
was the author of those patches. The WAL sender portion is older, but
it seems crazy to me not to fix both things at the same time, given
their similarities.
-- 
Michael




Re: [HACKERS] Re: pg_dump ignoring information_schema tables which used in Create Publication.

2017-05-30 Thread Peter Eisentraut
On 5/29/17 22:14, Noah Misch wrote:
> On Fri, May 26, 2017 at 10:46:12PM -0300, Euler Taveira wrote:
>> 2017-05-26 17:52 GMT-03:00 Peter Eisentraut 
>> :
>>> You cannot publish a system catalog.  But a user-created table in
>>> information_schema is not a system catalog.
>> Replication of information_schema tables works. However, pg_dump doesn't
>> include information_schema tables into CREATE PUBLICATION command
>> (user-defined information_schema tables aren't included in pg_dump even
>> *before* logical replication). IMO allow publish/subscribe of tables into
>> information_schema is harmless (they aren't special tables like catalogs).
>> Also, how many people would create real tables into information_schema?
>> Almost zero. Let's leave it alone. Since pg_dump doesn't document that
>> information_schema isn't dumped, I think we shouldn't document this for
>> logical replication.
> [Action required within three days.  This is a generic notification.]

Unless there are any new insights, I propose to close this item as not
new and not worth fixing, per the above.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Race conditions with WAL sender PID lookups

2017-05-30 Thread Peter Eisentraut
On 5/29/17 21:38, Noah Misch wrote:
>> Actually, now that I look at it, ready_to_display should as well be
>> protected by the lock of the WAL receiver, so it is incorrectly placed
>> in walreceiver.h. As you are pointing out, pg_stat_get_wal_receiver()
>> is lazy as well, and that's new in 10, so we have an open item here
>> for both of them. And I am the author for both things. No issues
>> spotted in walreceiverfuncs.c after review.
>>
>> I am adding an open item so as both issues are fixed in PG10. With the
>> WAL sender part, I think that this should be a group shot.
>>
>> So what do you think about the attached?
> 
> [Action required within three days.  This is a generic notification.]
> 
> The above-described topic is currently a PostgreSQL 10 open item.  Peter,
> since you committed the patch believed to have created it, you own this open
> item.

I don't think this is my item.  Most of the behavior is old, and
pg_stat_get_wal_receiver() is from commit
b1a9bad9e744857291c7d5516080527da8219854.

(The logical replication equivalent of that is
pg_stat_get_subscription(), which does appear to use appropriate locking.)

I would appreciate if another committer can take the lead on this.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Replication status in logical replication

2017-05-30 Thread Peter Eisentraut
On 5/29/17 22:56, Noah Misch wrote:
> On Fri, May 19, 2017 at 11:33:48AM +0900, Masahiko Sawada wrote:
>> On Wed, Apr 12, 2017 at 5:31 AM, Simon Riggs  wrote:
>>> Looks like a bug that we should fix in PG10, with backpatch to 9.4 (or
>>> as far as it goes).
>>>
>>> Objections to commit?
>>>
>>
>> Seems we still have this issue. Any update or comment on this? Barring
>> any objections, I'll add this to the open item so it doesn't get
>> missed.
> 
> [Action required within three days.  This is a generic notification.]
> 
> The above-described topic is currently a PostgreSQL 10 open item.  Peter,
> since you committed the patch believed to have created it, you own this open
> item.  If some other commit is more relevant or if this does not belong as a
> v10 open item, please let us know.

I would ask Simon to go ahead with this patch if he feels comfortable
with it.

I'm disclaiming this open item, since it's an existing bug from previous
releases (and I have other open items to focus on).

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Fix GetOldestXmin comment

2017-05-30 Thread Masahiko Sawada
On Tue, May 30, 2017 at 11:07 PM, Amit Kapila  wrote:
> On Tue, May 30, 2017 at 6:32 AM, Masahiko Sawada  
> wrote:
>> Hi,
>>
>> While reading source code, I realized that comment of GetOldestXmin mentions;
>>
>>   * if rel = NULL and there are no transactions running in the current
>>   * database, GetOldestXmin() returns latestCompletedXid.
>>
>> However, in that case if I understand correctly GetOldestXmin()
>> actually returns latestCompletedXid + 1 as follows;
>>
>
> Isn't there another gotcha in above part of the comment, shouldn't it
> say rel != NULL?  AFAICS, when rel is NULL, it considers all databases
> not only current database.
>

Hmm, it could return latestCompletedXid in that case. I think I
understand now, thank you!
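The shape of the logic being discussed can be sketched with mock types. This is a simplification, not the real GetOldestXmin(): the actual code reads latestCompletedXid from shared memory under ProcArrayLock, TransactionIdAdvance() skips the special XIDs on wraparound, and XID comparison is modular rather than the plain `<` used here.

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

#define FirstNormalTransactionId ((TransactionId) 3)

/* Mirrors TransactionIdAdvance(): bump the XID, skipping the
 * special XIDs below FirstNormalTransactionId on wraparound. */
static TransactionId
mock_transaction_id_advance(TransactionId xid)
{
	xid++;
	if (xid < FirstNormalTransactionId)
		xid = FirstNormalTransactionId;
	return xid;
}

/*
 * Simplified GetOldestXmin(): start from latestCompletedXid + 1
 * (the xmin a hypothetical new transaction would get), then clamp
 * down to the oldest xmin of any running transaction.  With no
 * running transactions the clamp never fires, which is why the
 * result is latestCompletedXid + 1, not latestCompletedXid as the
 * old comment claimed.
 */
static TransactionId
mock_get_oldest_xmin(TransactionId latestCompletedXid,
					 const TransactionId *running_xmins, int n)
{
	TransactionId result = mock_transaction_id_advance(latestCompletedXid);

	for (int i = 0; i < n; i++)
		if (running_xmins[i] < result)
			result = running_xmins[i];
	return result;
}
```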

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Re: Alter subscription..SET - NOTICE message is coming for table which is already removed

2017-05-30 Thread Masahiko Sawada
On Thu, May 25, 2017 at 9:54 PM, tushar  wrote:
> On 05/25/2017 03:38 PM, tushar wrote:
>>
>> While performing - Alter subscription..SET  , I found that NOTICE message
>> is coming duplicate next time , which  is not needed anymore.
>
> There is an another example - where i am getting "ERROR: subscription table
> 16435 in subscription 16684 does not exist" in standby log file
>
> 2017-05-25 13:51:48.825 BST [32138] NOTICE:  removed subscription for table
> public.t96
> 2017-05-25 13:51:48.825 BST [32138] NOTICE:  removed subscription for table
> public.t97
> 2017-05-25 13:51:48.826 BST [32138] NOTICE:  removed subscription for table
> public.t98
> 2017-05-25 13:51:48.826 BST [32138] NOTICE:  removed subscription for table
> public.t99
> 2017-05-25 13:51:48.826 BST [32138] NOTICE:  removed subscription for table
> public.t100
> 2017-05-25 13:51:48.827 BST [32138] LOG:  duration: 35.404 ms statement:
> alter subscription c1 set publication p1 refresh;
> 2017-05-25 13:51:49.192 BST [32347] LOG:  starting logical replication
> worker for subscription "c1"
> 2017-05-25 13:51:49.198 BST [32368] LOG:  logical replication sync for
> subscription c1, table t16 started
> 2017-05-25 13:51:49.198 BST [32368] ERROR:  subscription table 16429 in
> subscription 16684 does not exist
> 2017-05-25 13:51:49.199 BST [32347] LOG:  starting logical replication
> worker for subscription "c1"
> 2017-05-25 13:51:49.200 BST [32065] LOG:  worker process: logical
> replication worker for subscription 16684 sync 16429 (PID 32368) exited with
> exit code 1
> 2017-05-25 13:51:49.204 BST [32369] LOG:  logical replication sync for
> subscription c1, table t17 started
> 2017-05-25 13:51:49.204 BST [32369] ERROR:  subscription table 16432 in
> subscription 16684 does not exist
> 2017-05-25 13:51:49.205 BST [32347] LOG:  starting logical replication
> worker for subscription "c1"
> 2017-05-25 13:51:49.205 BST [32065] LOG:  worker process: logical
> replication worker for subscription 16684 sync 16432 (PID 32369) exited with
> exit code 1
> 2017-05-25 13:51:49.209 BST [32370] LOG:  logical replication sync for
> subscription c1, table t18 started
> 2017-05-25 13:51:49.209 BST [32370] ERROR:  subscription table 16435 in
> subscription 16684 does not exist
> 2017-05-25 13:51:49.210 BST [32347] LOG:  starting logical replication
> worker for subscription "c1"
> 2017-05-25 13:51:49.210 BST [32065] LOG:  worker process: logical
> replication worker for subscription 16684 sync 16435 (PID 32370) exited with
> exit code 1
> 2017-05-25 13:51:49.213 BST [32371] LOG:  logical replication sync for
> subscription c1, table t19 started
> 2017-05-25 13:51:49.213 BST [32371] ERROR:  subscription table 16438 in
> subscription 16684 does not exist
> 2017-05-25 13:51:49.214 BST [32347] LOG:  starting logical replication
> worker for subscription "c1"
>
>
> Steps to reproduce -
> X cluster ->
> create 100 tables , publish all tables (create publication pub for table
> t1,t2,t2..t100;)
> create one more table (create table t101(n int), create publication ,
> publish only that table (create publication p1 for table t101;)
>
> Y Cluster ->
> create subscription (create subscription c1 connection 'host=localhost
> port=5432 user=centos ' publication pub;
> alter subscription c1 set publication p1 refresh;
> alter subscription c1 set publication pub refresh;
> alter subscription c1 set publication p1 refresh;
>
> check the log file.
>

Thanks.

I think the cause is that the relation status entry can be deleted
by ALTER SUBSCRIPTION REFRESH before the corresponding table sync worker
starts. Attached patch fixes the issues reported on this thread so far.

However, there is one more problem here: if the relation status entry
is deleted while the corresponding table sync worker is waiting for its
status to be changed, the table sync worker can be orphaned in the
waiting state. In this case, should the table sync worker check the
relation status and exit if the relation status record gets removed? Or
should ALTER SUBSCRIPTION update the status of the table sync worker to
UNKNOWN?

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


fix_ALTER_SUB_REFRESH_and_table_sync.patch
Description: Binary data



Re: [HACKERS] [PATCH] relocation truncated to fit: citus build failure on s390x

2017-05-30 Thread Christoph Berg
Re: Tom Lane 2017-05-30 <25131.1496163...@sss.pgh.pa.us>
> Christoph Berg  writes:
> > My main point here would be that we are already setting this for all
> > extensions for sparc and sparc64, so s390(x) would just follow suit.
> 
> For some values of "we", sure ;-).

Afaict for all values of "we":

ifeq "$(findstring sparc,$(host_cpu))" "sparc"
CFLAGS_SL = -fPIC
else
CFLAGS_SL = -fpic
endif

https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/makefiles/Makefile.linux

Christoph




Re: [HACKERS] [PATCH] relocation truncated to fit: citus build failure on s390x

2017-05-30 Thread Tom Lane
Christoph Berg  writes:
> My main point here would be that we are already setting this for all
> extensions for sparc and sparc64, so s390(x) would just follow suit.

For some values of "we", sure ;-).  But I think what is really under
discussion here is whether to change -fpic to -fPIC for all platforms.
It looks like Makefile.netbsd and Makefile.openbsd would be affected along
with Makefile.linux.  Some other platforms such as freebsd are already
in the fPIC-always camp.

regards, tom lane




Re: [HACKERS] Surjective functional indexes

2017-05-30 Thread Christoph Berg
Re: Konstantin Knizhnik 2017-05-30 

> 
> 
> On 29.05.2017 20:21, Christoph Berg wrote:
> > 
> > I think the term you were looking for is "projection".
> > 
> > https://en.wikipedia.org/wiki/Projection_(set_theory)
> 
> I have already renamed the parameter from "surjective" to "injective".
> But I am ok to do one more renaming to "projection" if it will be
> considered as better alternative.
> From my point of view, "projection" seems to be clearer for people without
> mathematical background,
> but IMHO this term is overloaded in DBMS context.

With mathematical background, I don't see how your indexes would
exploit surjective or injective properties of the function used. What
you are using is that ->> projects a json value to one of its
components, i.e. the projection/function result does not depend on the
other attributes contained.

> The irony is that in Wikipedia "projection" is explained using
> "surjection" term:)

For the equivalence classes part, which isn't really connected to your
application.

Christoph




Re: [HACKERS] PG10 Crash-safe and replicable Hash Indexes and UNIQUE

2017-05-30 Thread Bruce Momjian
On Sun, May 21, 2017 at 11:01:57PM -0400, Chapman Flack wrote:
> ? A transformOpclassLike function could verify that foo and the opcintype
> of int4_ops have the same typlen and typbyval, and that the operators and
> support procs are backed by C functions, and return a list of
> CREATE OPERATOR reusing the same functions, followed by the rewritten
> CREATE OPERATOR CLASS.
> 
> Would it be helpful to link any part of these notes to the hash index
> section of the TODO page?

I added a link to this excellent summary email based on the first
paragraph alone:

Add UNIQUE capability to hash indexes 

then when I got to your suggestion at the bottom, it was already done. 
:-)

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [JDBC] [HACKERS] Channel binding support for SCRAM-SHA-256

2017-05-30 Thread Stephen Frost
Michael,

* Michael Paquier (michael.paqu...@gmail.com) wrote:
> On Tue, May 30, 2017 at 8:14 AM, Stephen Frost  wrote:
> > The specific APIs are, certainly, going to be different between
> > different TLS implementations and I don't believe we need to be
> > particularly concerned with that (or if they change in the future), as
> > long as we abstract those APIs.  Perhaps there's some question as to
> > just how to abstract them, but I'm at least a bit less concerned with
> > that as I expect we'll be able to adjust those abstraction layers later
> > (presuming they aren't exposed, which I wouldn't expect them to be).
> 
> The current patch fetches the TLS finish message data just after the
> handshake in be_tls_open_server() using the dedicated OpenSSL API. We
> could definitely go with a more generic routine on our side that
> sticks with the concepts of be_tls_get_compression():
> - For the backend, this routine returns an allocated string with the
> contents of the expected TLS message, and its size:
> char *be_tls_get_tls_finish(Port *, int *)
> - For the frontend, a routine to get the generated TLS finish message:
> char *pgtls_get_tls_finish(PGconn *, int *)

Those look pretty reasonable to me and seem to go along with the
concepts from the RFC, making them hopefully the right API.

> > If that wasn't the case then, for example, it wouldn't be possible to do
> > channel binding with a LibNSS client and an OpenSSL server or with other
> > combinations and I find that rather hard to believe.
> 
> As far as I can see, Windows has some APIs to get the TLS finish message:
> https://msdn.microsoft.com/en-us/library/windows/desktop/aa382420(v=vs.85).aspx

Good.

> On macos though it is another story, I am not seeing anything:
> https://developer.apple.com/reference/security/secure_transport#symbols

That's a bit unfortunate, if that's the case.  Perhaps Apple will see
fit to address that deficiency though.

> Depending on the SSL implementation the server is compiled with, it
> will be up to the backend to decide if it should advertise to the
> client the -PLUS mechanism or not, so we can stay modular here.

Right, of course.

All-in-all, this sounds like it's heading in the right direction, at
least at a high level.  Glad to see that there's been consideration of
other TLS implementations, and as such I don't think we need to be
overly concerned about the specifics of the OpenSSL API here.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Segmentation fault when creating a BRIN, 10beta1

2017-05-30 Thread Tom Lane
Alexander Sosna  writes:
> I can reproduce a segmentation fault when creating a BRIN concurrently
> with set pages_per_range and autosummarize.

I wonder if this isn't the same issue reported in
https://www.postgresql.org/message-id/flat/20170524063323.29941.46339%40wrigleys.postgresql.org

Could you try the patch suggested there?

regards, tom lane




Re: [HACKERS] [PATCH] relocation truncated to fit: citus build failure on s390x

2017-05-30 Thread Christoph Berg
Re: Andres Freund 2017-05-30 <20170530161541.koj5xbvvovrrt...@alap3.anarazel.de>
> I think we can fix this easily enough in Citus, postgis, and whatever.
> But it's not a particularly good user/developer experience, and
> presumably is going to become more and more common. On x86 there
> shouldn't be a difference in efficiency between the two, but some others
> will see some.  Given that most distributions switched to building the
> main executables with -fPIE anyway, to allow for ASLR, it seems unlikely
> that the intra extension overhead is going to be very meaningful in
> comparison.

My main point here would be that we are already setting this for all
extensions for sparc and sparc64, so s390(x) would just follow suit.

-fPIC is the default in Debian now, see the discussion starting at
https://lists.debian.org/debian-devel/2016/05/msg00028.html
including the Fedora: 
https://lists.debian.org/debian-devel/2016/05/msg00219.html
and Ubuntu: https://lists.debian.org/debian-devel/2016/05/msg00225.html
situation, which all do that.

Christoph




Re: [HACKERS] [PATCH] relocation truncated to fit: citus build failure on s390x

2017-05-30 Thread Tom Lane
Andres Freund  writes:
> On 2017-05-29 15:45:11 -0400, Tom Lane wrote:
>> Maybe this is small enough to not be something we need to worry about,
>> but I'm wondering if we should ask citus and other large .so's to set
>> some additional make flag that would cue usage of -fPIC over -fpic.

> I think we can fix this easily enough in Citus, postgis, and whatever.
> But it's not a particularly good user/developer experience, and
> presumably is going to become more and more common. On x86 there
> shouldn't be a difference in efficiency between the two, but some others
> will see some.  Given that most distributions switched to building the
> main executables with -fPIE anyway, to allow for ASLR, it seems unlikely
> that the intra extension overhead is going to be very meaningful in
> comparison.

Very possibly true, but I wish we had some hard facts and not just
guesses.

regards, tom lane




Re: [HACKERS] "cannot specify finite value after UNBOUNDED" ... uh, why?

2017-05-30 Thread Robert Haas
On Tue, May 30, 2017 at 11:03 AM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Mon, May 29, 2017 at 3:04 AM, Tom Lane  wrote:
>>> Would someone please defend the restrictions imposed by the
>>> "seen_unbounded" checks in transformPartitionBound
>>> (parse_utilcmd.c:3365..3396 in current HEAD)?
>
>> Because this is supposed to work more or less like row-comparison --
>> the earlier columns are strictly more significant than the later ones.
>> That is, allowing (1, 2) through (3, 4) allows (2, whatever) but (1,
>> y) only if y >= 2 and (3, y) only if y < 4.
>
> I see.  That makes the logic awfully complex though.  I was looking
> at get_qual_for_range() yesterday --- it's mind-bendingly complicated
> and I have next to no faith that it's 100% right.

It might be useful if somebody would be willing to put together a
fuzz-tester for it.  Like, randomly generate partition bounds over
various different data types, and then verify that each tuple is
accepted by the partition constraint of only one partition and that
it's the same partition into which tuple routing tries to put it.

>> In case you're wondering, this is also how a certain large commercial
>> database system interprets composite bounds.  You could imagine in
>> theory a system where a bound from (1, 2) to (3, 4) allows only those
>> (x, y) where 1<=x<3 and 2<=y<4 but I know of no existing system that
>> does anything like that.  If you want that sort of thing, you can get
>> it anyway using two levels of partitioning, one on each column.
>
> Well, if we just treated each column independently, you could get
> the row-comparison behavior by partitioning on a ROW() expression.
> So that argument doesn't impress me.  I suppose compatibility with
> other partitioning implementations is worth something, but I'm not
> sure it's worth this much complication and risk of bugs.

I guess you won't be terribly surprised to hear that I think
compatibility with other implementations is quite desirable.  I also
think that Amit and others who have been involved in the discussions
have the same opinion, although of course they can speak for
themselves.  In terms of partitioning on a ROW() expression, I believe
that you would lose the ability to control which opclass and collation
is used for each column, which I think is something that people care
about.  Also, I bet it would be slower.  Also, I bet the syntax would
be less intuitive.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE

2017-05-30 Thread Andres Freund
On 2017-05-30 07:27:12 -0400, Robert Haas wrote:
> The other is that I figured 64k was small enough that nobody would
> care about the memory utilization.  I'm not sure we can assume the
> same thing if we make this bigger.  It's probably fine to use a 6.4M
> tuple queue for each worker if work_mem is set to something big, but
> maybe not if work_mem is set to the default of 4MB.

Probably not.  It might also end up being detrimental performance-wise,
because we start touching more memory.  I guess it'd make sense to set
it in the planner, based on the size of a) work_mem b) number of
expected tuples.

I do wonder whether the larger size fixes some scheduling issue
(i.e. while some backend is scheduled out, the other side of the queue
can continue), or whether it's largely triggered by fixable contention
inside the queue.  I'd guess it's a bit of both.  It should be
measurable in some cases, by comparing the amount of time blocking on
reading the queue (or continuing because the queue is empty), writing
to the queue (should always result in blocking) and time spent waiting
for the spinlock.

- Andres




Re: [JDBC] [HACKERS] Channel binding support for SCRAM-SHA-256

2017-05-30 Thread Michael Paquier
On Tue, May 30, 2017 at 8:14 AM, Stephen Frost  wrote:
> Work has been done in that area already, as well as for LibNSS support.
>
> I've not had a chance to closely look over the proposed approach here,
> but generally speaking, we need to be looking for a standard way to
> generate and transmit the channel binding information across the
> wire.

The RFC of SCRAM specifies that the client sends the type of channel
binding in its first message data, and then provides the contents of
the TLS message it generated in the challenge response.

> The specific APIs are, certainly, going to be different between
> different TLS implementations and I don't believe we need to be
> particularly concerned with that (or if they change in the future), as
> long as we abstract those APIs.  Perhaps there's some question as to
> just how to abstract them, but I'm at least a bit less concerned with
> that as I expect we'll be able to adjust those abstraction layers later
> (presuming they aren't exposed, which I wouldn't expect them to be).

The current patch fetches the TLS finish message data just after the
handshake in be_tls_open_server() using the dedicated OpenSSL API. We
could definitely go with a more generic routine on our side that
sticks with the concepts of be_tls_get_compression():
- For the backend, this routine returns an allocated string with the
contents of the expected TLS message, and its size:
char *be_tls_get_tls_finish(Port *, int *)
- For the frontend, a routine to get the generated TLS finish message:
char *pgtls_get_tls_finish(PGconn *, int *)

> I skimmed over RFC5929, which looks to be the correct RFC when it comes
> to channel binding with TLS.  Hopefully the various TLS implementations
> work in a similar manner, following that RFC (or whichever is relevant).

That's the one I use as a reference.

> If that wasn't the case then, for example, it wouldn't be possible to do
> channel binding with a LibNSS client and an OpenSSL server or with other
> combinations and I find that rather hard to believe.

As far as I can see, Windows has some APIs to get the TLS finish message:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa382420(v=vs.85).aspx
On macos though it is another story, I am not seeing anything:
https://developer.apple.com/reference/security/secure_transport#symbols

Depending on the SSL implementation the server is compiled with, it
will be up to the backend to decide if it should advertise to the
client the -PLUS mechanism or not, so we can stay modular here.
-- 
Michael




Re: [HACKERS] [PATCH] relocation truncated to fit: citus build failure on s390x

2017-05-30 Thread Andres Freund
On 2017-05-29 15:45:11 -0400, Tom Lane wrote:
> Christoph Berg  writes:
> > Re: To Andres Freund 2016-04-28 <20160428080824.ga22...@msg.df7cb.de>
> >>> I'm not clear why citus triggers this, when other extensions don't?
> 
> >> Maybe it's simply because citus.so is bigger than all the other
> >> extension .so files:
> 
> I wonder what the overhead is of using -fPIC when -fpic would be
> sufficient.  Whatever it is, the proposed patch imposes it on every
> shlib or extension, to accommodate one single extension that isn't
> even one we ship.

> Maybe this is small enough to not be something we need to worry about,
> but I'm wondering if we should ask citus and other large .so's to set
> some additional make flag that would cue usage of -fPIC over -fpic.

I think we can fix this easily enough in Citus, postgis, and whatever.
But it's not a particularly good user/developer experience, and
presumably is going to become more and more common. On x86 there
shouldn't be a difference in efficiency between the two, but some others
will see some.  Given that most distributions switched to building the
main executables with -fPIE anyway, to allow for ASLR, it seems unlikely
that the intra extension overhead is going to be very meaningful in
comparison.

Greetings,

Andres Freund




Re: [HACKERS] WIP Patch: Precalculate stable functions, infrastructure v1

2017-05-30 Thread Aleksander Alekseev
Hi Marina,

I still don't see anything particularly wrong with your patch. It
applies, passes all tests, is well test-covered and even documented.
Also I've run `make installcheck` under Valgrind and didn't find any
memory-related errors.

Is there anything that you would like to change before we call it more
or less final?

Also I would advise adding your branch to our internal buildfarm just to
make sure everything is OK on exotic platforms like Windows ;)

On Mon, May 22, 2017 at 06:32:17PM +0300, Marina Polyakova wrote:
> > Hi,
> 
> Hello!
> 
> > I've not followed this thread, but just scanned this quickly because it
> > affects execExpr* stuff.
> 
> Thank you very much for your comments! Thanks to them I have made v4 of the
> patches (as in the previous one, only planning and execution part is
> changed).
> 
> > Looks like having something like struct CachedExprState would be better,
> > than these separate allocations?  That also allows to aleviate some size
> > concerns when adding new fields (see below)
> 
> > I'd rather not have this on function scope - a) the stack pressure in
> > ExecInterpExpr is quite noticeable in profiles already b) this is going
> > to trigger warnings because of unused vars, because the compiler doesn't
> > understand that EEOP_CACHEDEXPR_IF_CACHED always follows
> > EEOP_CACHEDEXPR_SUBEXPR_END.
> > 
> > How about instead storing oldcontext in the expression itself?
> 
> Thanks, in new version I did all of it in this way.
> 
> > I'm also not sure how acceptable it is to just assume it's ok to leave
> > stuff in per_query_memory, in some cases that could prove to be
> > problematic.
> 
> I agree with you and in new version context is changed only for copying
> datum of result value (if it's a pointer, its data should be allocated in
> per_query_memory, or we will lost it for next tuples).
> 
> > Is this actually a meaningful path?  Shouldn't always have done const
> > evaluation before adding CachedExpr's?
> 
> eval_const_expressions_mutator is used several times, and one of them in
> functions for selectivity evaluation (set_baserel_size_estimates ->
> clauselist_selectivity -> clause_selectivity -> restriction_selectivity ->
> ... -> get_restriction_variable -> estimate_expression_value ->
> eval_const_expressions_mutator). In set_baserel_size_estimates function
> right after selectivity evaluation there's costs evaluation and cached
> expressions should be replaced before costs. I'm not sure that it is a good
> idea to insert cached expressions replacement in set_baserel_size_estimates,
> because in comments to it it's said "The rel's targetlist and restrictinfo
> list must have been constructed already, and rel->tuples must be set." and
> its file costsize.c is entitled as "Routines to compute (and set) relation
> sizes and path costs". So I have inserted cached expressions replacement
> just before it (but I'm not sure that I have seen all places where it should
> be inserted). What do you think about all of this?
> 
> -- 
> Marina Polyakova
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company

> From 02262b9f3a3215d3884b6ac188bafa6517ac543d Mon Sep 17 00:00:00 2001
> From: Marina Polyakova 
> Date: Mon, 15 May 2017 14:24:36 +0300
> Subject: [PATCH v4 1/3] Precalculate stable functions, infrastructure
> 
> Now in Postgresql only immutable functions are precalculated; stable functions
> are calculated for every row so in fact they don't differ from volatile
> functions.
> 
> This patch includes:
> - creation of CachedExpr node
> - usual node functions for it
> - mutator to replace nonvolatile functions' and operators' expressions with
> appropriate cached expressions.
> ---
>  src/backend/nodes/copyfuncs.c|  31 +
>  src/backend/nodes/equalfuncs.c   |  31 +
>  src/backend/nodes/nodeFuncs.c| 151 
>  src/backend/nodes/outfuncs.c |  56 
>  src/backend/nodes/readfuncs.c|  48 +++
>  src/backend/optimizer/plan/planner.c | 259 +++
>  src/include/nodes/nodeFuncs.h|   1 +
>  src/include/nodes/nodes.h|   1 +
>  src/include/nodes/primnodes.h|  38 +
>  9 files changed, 616 insertions(+)
> 
> diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
> index 6ad3844..f9f69a1 100644
> --- a/src/backend/nodes/copyfuncs.c
> +++ b/src/backend/nodes/copyfuncs.c
> @@ -1527,6 +1527,34 @@ _copyNullIfExpr(const NullIfExpr *from)
>   return newnode;
>  }
>  
> +static CachedExpr *
> +_copyCachedExpr(const CachedExpr *from)
> +{
> + CachedExpr *newnode = makeNode(CachedExpr);
> +
> + COPY_SCALAR_FIELD(subexprtype);
> + switch(from->subexprtype)
> + {
> + case CACHED_FUNCEXPR:
> + COPY_NODE_FIELD(subexpr.funcexpr);
> + break;
> + case CACHED_OPEXPR:
> + 

Re: [HACKERS] ALTER PUBLICATION documentation

2017-05-30 Thread Peter Eisentraut
On 5/29/17 22:16, Noah Misch wrote:
> On Wed, May 24, 2017 at 07:24:08PM -0400, Peter Eisentraut wrote:
>> On 5/22/17 17:50, Jeff Janes wrote:
>>> "The first variant of this command listed in the synopsis can change all
>>> of the publication properties specified in CREATE PUBLICATION
>>> ."
>>>
>>> That referenced first variant no longer exists.  I don't if that should
>>> just be removed, or reworked to instead describe "ALTER PUBLICATION name
>>> SET".
>>
>> Yeah that got whacked around a bit too much.  I'll see about fixing it up.
> 
> [Action required within three days.  This is a generic notification.]

This has been fixed.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Use of non-restart-safe storage by temp_tablespaces

2017-05-30 Thread Joe Conway
On 05/30/2017 10:54 AM, Tom Lane wrote:
> Bruce Momjian  writes:
>> On Mon, May 29, 2017 at 03:55:08PM -0400, Tom Lane wrote:
>>> I'm too lazy to search the archives right now, but there was some case
>>> years ago where somebody destroyed their database via an ill-thought-out
>>> combination of automatic-initdb-if-$PGDATA-isn't-there and slow mounting.
> 
>> Here is the Joe Conway report from 2004:
>>  https://postgr.es/m/41bfab7c.5040...@joeconway.com
> 
> Yeah, that's the thread I was remembering.  Thanks for looking it up.

Wow, reading that was a blast from the past!

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development



signature.asc
Description: OpenPGP digital signature


Re: [HACKERS] pgbench tap tests & minor fixes

2017-05-30 Thread Fabien COELHO


Hello Nikolay,


Yeah, this is much clearer to me. Now I understand this statistics code.


Great.


I still have three more questions. A new one:


   my_command->line = expr_scanner_get_substring(sstate,
 start_offset,
- end_offset);
+ end_offset + 1);


I do not quite understand what you are fixing here,


I fix a truncation which appears in an error message later on.


I did not find any mention of it in the patch introduction letter.


Indeed. Just a minor bug fix to avoid an error message being truncated. If 
you remove it, then you can get:


 missing argument in command "sleep"
 \slee

Note the missing "p"...

And more, expr_scanner_get_substring is called twice in very similar 
code, and it is fixed only once. Can you tell me more about this fix?


I fixed the one which was generating truncated messages. I did not notice
other truncated messages while testing, so I assume that other calls are 
correct, but I did not investigate further, so I may be wrong. Maybe in 
other instances the truncation removes a "\n" which is intended?



Old one:

(nxacts <= 0 || st->cnt < nxacts)) /* with -t, do not overshoot */
if (progress_timestamp && progress <= 0)


I am still sure that the right way is to do '== 0' and Assert for the case 
when it is negative.


 - nxacts is a counter, it could wrap around at least theoretically.

 - progress is checked for >=0, so ==0 is fine.

Note that the same trick is used in numerous places in pgbench code, and I 
did not wrote most of them:


  st->nvariables <= 0
  duration <= 0
  total->cnt <= 0
  (nxacts <= 0 && duration <= 0)


If you are sure it is good to do '<= 0', let's allow the committer to make
the final decision.


I'm sure that it is a minor thing, and that the trick is already used in 
the code. I changed progress because there it is clearly checked, but I 
kept nxacts because of the theoretical wrap-around.



And another unclosed issue:

I still do not like command_checks_all function name (I would prefer
command_like_more) but I can leave it for somebody more experienced (i.e.
committer) to make the final decision, if you do not agree with me here...


I've proposed the name because it checks for everything (multiple stdout, 
multiple stderr, status), hence "all". The "like" just refers to stdout 
regex, so is quite limited, and "checks all" seems to be a good summary of 
what is done, while "like more" is pretty unclear to me, because it is 
relative to "like", so I have to check what "like" does and then assume 
that it does more...



/* Why am I so bothered with the function name? We are adding this function to
a library that is used by all TAP-test writers. So here things should be 100%
clear for all.


Yep. "checks_all" is clearer to me than "like_more", which is relative to 
another function.



If this function were only in pgbench test code, I would not
care about the name at all. But here I do. I consider it important to give
the best names to functions in shared libraries. */

Hope these are the last ones. Let's close the first issue, fix or leave open
the others, and finish with this patch :-)


Here is a v6.

 - it uses "progress == 0"

 - it does not change "nxacts <= 0" because of possible wrapping

 - it fixes two typos in a perl comment about the checks_all function

 - I kept "checks all" because this talks more to me than "like more"
   if a native English speaker or someone else has an opinion, it is
   welcome.

Also, if someone could run a test on windows, it would be great.

--
Fabien.diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index ae36247..b28558c 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -229,7 +229,7 @@ typedef struct SimpleStats
 typedef struct StatsData
 {
 	time_t		start_time;		/* interval start time, for aggregates */
-	int64		cnt;			/* number of transactions */
+	int64		cnt;			/* number of transactions, including skipped */
 	int64		skipped;		/* number of transactions skipped under --rate
  * and --latency-limit */
 	SimpleStats latency;
@@ -329,7 +329,7 @@ typedef struct
 	bool		prepared[MAX_SCRIPTS];	/* whether client prepared the script */
 
 	/* per client collected stats */
-	int64		cnt;			/* transaction count */
+	int64		cnt;			/* client transaction count, for -t */
 	int			ecnt;			/* error count */
 } CState;
 
@@ -2045,7 +2045,8 @@ doCustom(TState *thread, CState *st, StatsData *agg)
 	if (INSTR_TIME_IS_ZERO(now))
 		INSTR_TIME_SET_CURRENT(now);
 	now_us = INSTR_TIME_GET_MICROSEC(now);
-	while (thread->throttle_trigger < now_us - latency_limit)
+	while (thread->throttle_trigger < now_us - latency_limit &&
+		   (nxacts <= 0 || st->cnt < nxacts)) /* with -t, do not overshoot */
 	{
 		processXactStats(thread, st, , true, agg);
 		/* next rendez-vous */
@@ -2053,6 +2054,12 @@ 

Re: [JDBC] [HACKERS] Channel binding support for SCRAM-SHA-256

2017-05-30 Thread Stephen Frost
Tom, all,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Robert Haas  writes:
> > On Sat, May 27, 2017 at 5:59 PM, Álvaro Hernández Tortosa
> >  wrote:
> >> - tls-unique, as you mentioned, uses two undocumented APIs. This raises a
> >> small flag about the stability and future of those APIs.
> 
> > It seems to me that the question is not just whether those APIs will
> > be available in future versions of OpenSSL, but whether they will be
> > available in every current and future version of every SSL
> > implementation that we may wish to use in core or that any client may
> > wish to use.  We've talked before about being able to use the Windows
> > native SSL implementation rather than OpenSSL and it seems that there
> > would be significant advantages in having that capability.
> 
> Another thing of the same sort that should be on our radar is making
> use of Apple's TLS code on macOS.  The handwriting on the wall is
> unmistakable that they intend to stop shipping OpenSSL before long,
> and I do not think we really want to be in a position of having to
> bundle OpenSSL into our distribution on macOS.
> 
> I'm not volunteering to do that, mind you.  But +1 for not tying new
> features to any single TLS implementation.

Work has been done in that area already, as well as for LibNSS support.

I've not had a chance to closely look over the proposed approach here,
but generally speaking, we need to be looking for a standard way to
generate and transmit the channel binding information across the
wire.  The specific APIs are, certainly, going to be different between
different TLS implementations and I don't believe we need to be
particularly concerned with that (or if they change in the future), as
long as we abstract those APIs.  Perhaps there's some question as to
just how to abstract them, but I'm at least a bit less concerned with
that as I expect we'll be able to adjust those abstraction layers later
(presuming they aren't exposed, which I wouldn't expect them to be).

I skimmed over RFC5929, which looks to be the correct RFC when it comes
to channel binding with TLS.  Hopefully the various TLS implementations
work in a similar manner, following that RFC (or whichever is relevant).

If that wasn't the case then, for example, it wouldn't be possible to do
channel binding with a LibNSS client and an OpenSSL server or with other
combinations and I find that rather hard to believe.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] "cannot specify finite value after UNBOUNDED" ... uh, why?

2017-05-30 Thread Tom Lane
Robert Haas  writes:
> On Mon, May 29, 2017 at 3:04 AM, Tom Lane  wrote:
>> Would someone please defend the restrictions imposed by the
>> "seen_unbounded" checks in transformPartitionBound
>> (parse_utilcmd.c:3365..3396 in current HEAD)?

> Because this is supposed to work more or less like row-comparison --
> the earlier columns are strictly more significant than the later ones.
> That is, allowing (1, 2) through (3, 4) allows (2, whatever) but (1,
> y) only if y >= 2 and (3, y) only if y < 4.

I see.  That makes the logic awfully complex though.  I was looking
at get_qual_for_range() yesterday --- it's mind-bendingly complicated
and I have next to no faith that it's 100% right.

> In case you're wondering, this is also how a certain large commercial
> database system interprets composite bounds.  You could imagine in
> theory a system where a bound from (1, 2) to (3, 4) allows only those
> (x, y) where 1<=x<3 and 2<=y<4 but I know of no existing system that
> does anything like that.  If you want that sort of thing, you can get
> it anyway using two levels of partitioning, one on each column.

Well, if we just treated each column independently, you could get
the row-comparison behavior by partitioning on a ROW() expression.
So that argument doesn't impress me.  I suppose compatibility with
other partitioning implementations is worth something, but I'm not
sure it's worth this much complication and risk of bugs.

regards, tom lane




[HACKERS] Segmentation fault when creating a BRIN, 10beta1

2017-05-30 Thread Alexander Sosna
Hi,

I can reproduce a segmentation fault when creating a BRIN concurrently
with set pages_per_range and autosummarize.

# Reproduce
CREATE TABLE brin_test AS
SELECT
series AS id,
MD5(series::TEXT) AS VALUE,
'2015-10-31 13:37:00.313370+01'::TIMESTAMP + (series::TEXT || ' Minute')::INTERVAL AS TIME
FROM
GENERATE_SERIES(0,1) AS series;

CREATE INDEX CONCURRENTLY brin_test_time_abrin_8 ON brin_test USING brin
(TIME) WITH (pages_per_range=8, autosummarize=true);

2017-05-30 16:08:59.451 CEST [20689] LOG:  server process (PID 20881)
was terminated by signal 11: Segmentation fault
2017-05-30 16:08:59.451 CEST [20689] DETAIL:  Failed process was
running: CREATE INDEX CONCURRENTLY brin_test_time_abrin_8 ON brin_test
USING brin (TIME) WITH (pages_per_range=8, autosummarize=true);
2017-05-30 16:08:59.451 CEST [20689] LOG:  terminating any other active
server processes
2017-05-30 16:08:59.451 CEST [20806] WARNING:  terminating connection
because of crash of another server process
2017-05-30 16:08:59.451 CEST [20806] DETAIL:  The postmaster has
commanded this server process to roll back the current transaction and
exit, because another server process exited abnormally and possibly
corrupted shared memory.
2017-05-30 16:08:59.451 CEST [20806] HINT:  In a moment you should be
able to reconnect to the database and repeat your command.
2017-05-30 16:08:59.452 CEST [21204] postgres@postgres FATAL:  the
database system is in recovery mode

The debian packages where used, version "PostgreSQL 10beta1 on
x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18) 6.3.0 20170516,
64-bit".
I also tested the latest git commit
d5cb3bab564e0927ffac7c8729eacf181a12dd40 with the same result.
A more detailed log (debug5) can be found attached.


Kind regards,
Alexander Sosna
2017-05-30 16:19:27.374 CEST [22160] postgres@postgres DEBUG:  StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0
2017-05-30 16:19:27.374 CEST [22160] postgres@postgres STATEMENT:  CREATE INDEX CONCURRENTLY brin_test_time_abrin_8 ON brin_test USING brin (TIME) WITH (pages_per_range=8, autosummarize=true);
2017-05-30 16:19:27.375 CEST [22160] postgres@postgres DEBUG:  CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGR, xid/subid/cid: 591/1/2
2017-05-30 16:19:27.375 CEST [22160] postgres@postgres STATEMENT:  CREATE INDEX CONCURRENTLY brin_test_time_abrin_8 ON brin_test USING brin (TIME) WITH (pages_per_range=8, autosummarize=true);
2017-05-30 16:19:27.382 CEST [22160] postgres@postgres DEBUG:  StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0
2017-05-30 16:19:27.382 CEST [22160] postgres@postgres STATEMENT:  CREATE INDEX CONCURRENTLY brin_test_time_abrin_8 ON brin_test USING brin (TIME) WITH (pages_per_range=8, autosummarize=true);
2017-05-30 16:19:27.382 CEST [22160] postgres@postgres DEBUG:  building index "brin_test_time_abrin_8" on table "brin_test"
2017-05-30 16:19:27.382 CEST [22160] postgres@postgres STATEMENT:  CREATE INDEX CONCURRENTLY brin_test_time_abrin_8 ON brin_test USING brin (TIME) WITH (pages_per_range=8, autosummarize=true);
2017-05-30 16:19:27.384 CEST [22160] postgres@postgres DEBUG:  CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGR, xid/subid/cid: 0/1/1 (used)
2017-05-30 16:19:27.384 CEST [22160] postgres@postgres STATEMENT:  CREATE INDEX CONCURRENTLY brin_test_time_abrin_8 ON brin_test USING brin (TIME) WITH (pages_per_range=8, autosummarize=true);
2017-05-30 16:19:27.384 CEST [22160] postgres@postgres DEBUG:  StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0
2017-05-30 16:19:27.384 CEST [22160] postgres@postgres STATEMENT:  CREATE INDEX CONCURRENTLY brin_test_time_abrin_8 ON brin_test USING brin (TIME) WITH (pages_per_range=8, autosummarize=true);
2017-05-30 16:19:27.386 CEST [22059] DEBUG:  reaping dead processes
2017-05-30 16:19:27.386 CEST [22059] DEBUG:  server process (PID 22160) was terminated by signal 11: Segmentation fault
2017-05-30 16:19:27.386 CEST [22059] DETAIL:  Failed process was running: CREATE INDEX CONCURRENTLY brin_test_time_abrin_8 ON brin_test USING brin (TIME) WITH (pages_per_range=8, autosummarize=true);
2017-05-30 16:19:27.386 CEST [22059] LOG:  server process (PID 22160) was terminated by signal 11: Segmentation fault
2017-05-30 16:19:27.386 CEST [22059] DETAIL:  Failed process was running: CREATE INDEX CONCURRENTLY brin_test_time_abrin_8 ON brin_test USING brin (TIME) WITH (pages_per_range=8, autosummarize=true);
2017-05-30 16:19:27.386 CEST [22059] LOG:  terminating any other active server processes
2017-05-30 16:19:27.386 CEST [22059] DEBUG:  sending SIGQUIT to process 22066
2017-05-30 16:19:27.386 CEST [22059] DEBUG:  sending SIGQUIT to process 22062
2017-05-30 16:19:27.386 CEST [22059] DEBUG:  sending SIGQUIT to process 22061
2017-05-30 16:19:27.386 CEST [22059] DEBUG:  sending SIGQUIT to process 22063

Re: [HACKERS] Use of non-restart-safe storage by temp_tablespaces

2017-05-30 Thread Tom Lane
Bruce Momjian  writes:
> On Mon, May 29, 2017 at 03:55:08PM -0400, Tom Lane wrote:
>> I'm too lazy to search the archives right now, but there was some case
>> years ago where somebody destroyed their database via an ill-thought-out
>> combination of automatic-initdb-if-$PGDATA-isn't-there and slow mounting.

> Here is the Joe Conway report from 2004:
>   https://postgr.es/m/41bfab7c.5040...@joeconway.com

Yeah, that's the thread I was remembering.  Thanks for looking it up.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [JDBC] [HACKERS] Channel binding support for SCRAM-SHA-256

2017-05-30 Thread Tom Lane
Robert Haas  writes:
> On Sat, May 27, 2017 at 5:59 PM, Álvaro Hernández Tortosa
>  wrote:
>> - tls-unique, as you mentioned, uses two undocumented APIs. This raises a
>> small flag about the stability and future of those APIs.

> It seems to me that the question is not just whether those APIs will
> be available in future versions of OpenSSL, but whether they will be
> available in every current and future version of every SSL
> implementation that we may wish to use in core or that any client may
> wish to use.  We've talked before about being able to use the Windows
> native SSL implementation rather than OpenSSL and it seems that there
> would be significant advantages in having that capability.

Another thing of the same sort that should be on our radar is making
use of Apple's TLS code on macOS.  The handwriting on the wall is
unmistakable that they intend to stop shipping OpenSSL before long,
and I do not think we really want to be in a position of having to
bundle OpenSSL into our distribution on macOS.

I'm not volunteering to do that, mind you.  But +1 for not tying new
features to any single TLS implementation.

regards, tom lane




Re: [HACKERS] "cannot specify finite value after UNBOUNDED" ... uh, why?

2017-05-30 Thread Robert Haas
On Mon, May 29, 2017 at 3:04 AM, Tom Lane  wrote:
> Would someone please defend the restrictions imposed by the
> "seen_unbounded" checks in transformPartitionBound
> (parse_utilcmd.c:3365..3396 in current HEAD)?  They sure look to me like
> nothing but sloppy thinking, and/or protection of downstream sloppy
> thinking.  Why should the boundedness of one partitioning column's
> range matter to any other partitioning column's range?

Because this is supposed to work more or less like row-comparison --
the earlier columns are strictly more significant than the later ones.
That is, allowing (1, 2) through (3, 4) allows (2, whatever) but (1,
y) only if y >= 2 and (3, y) only if y < 4.

In case you're wondering, this is also how a certain large commercial
database system interprets composite bounds.  You could imagine in
theory a system where a bound from (1, 2) to (3, 4) allows only those
(x, y) where 1<=x<3 and 2<=y<4 but I know of no existing system that
does anything like that.  If you want that sort of thing, you can get
it anyway using two levels of partitioning, one on each column.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Use of non-restart-safe storage by temp_tablespaces

2017-05-30 Thread Bruce Momjian
On Mon, May 29, 2017 at 03:55:08PM -0400, Tom Lane wrote:
> I'm too lazy to search the archives right now, but there was some case
> years ago where somebody destroyed their database via an ill-thought-out
> combination of automatic-initdb-if-$PGDATA-isn't-there and slow mounting.
> We'd have to be very sure that any auto-directory-creation behavior didn't
> have a potential for that.  Perhaps doing it only for temp tablespaces
> alleviates some of the risk, but it still seems pretty scary.

Here is the Joe Conway report from 2004:

https://postgr.es/m/41bfab7c.5040...@joeconway.com

and later references to the report from Joe Conway (2005):

https://postgr.es/m/425ABB17.305%40joeconway.com

and you (2007):

https://postgr.es/m/18897.1197775682%40sss.pgh.pa.us

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] Channel binding support for SCRAM-SHA-256

2017-05-30 Thread Robert Haas
On Sat, May 27, 2017 at 5:59 PM, Álvaro Hernández Tortosa
 wrote:
> - tls-unique, as you mentioned, uses two undocumented APIs. This raises a
> small flag about the stability and future of those APIs.

It seems to me that the question is not just whether those APIs will
be available in future versions of OpenSSL, but whether they will be
available in every current and future version of every SSL
implementation that we may wish to use in core or that any client may
wish to use.  We've talked before about being able to use the Windows
native SSL implementation rather than OpenSSL and it seems that there
would be significant advantages in having that capability.  Meanwhile,
a client that reimplements the PostgreSQL wire protocol is free to use
any SSL implementation it likes.  If channel binding can't work unless
both sides are speaking OpenSSL, then I'd say we've blown it.

Also, Heikki pointed out in his PGCon talk that there's currently no
way for libpq to insist on the use of SCRAM rather than plain password
authentication or md5.  So, if someone trojans the server, they only
need to hack it to ask the client for plaintext authentication rather
than SCRAM and the client will cheerfully hand over the password with
no questions asked.  Presumably, we need to add a connection option to
insist (a) on SCRAM or (b) SCRAM + channel binding, or else this isn't
going to actually provide us with any increased security.  Even
without that, channel binding will still let the server verify that
it's really connected to the client, but that's not really doing much
for us in terms of security because the client (who has the password)
can always let someone else impersonate it perfectly by just handing
over the password.  Channel binding can't prevent that.  It *could*
prevent someone from impersonating the *server* but not if the server
can disable the check whenever it likes by ceasing to advertise
channel binding as an available option -- with no notification to the
client that anything has changed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] relocation truncated to fit: citus build failure on s390x

2017-05-30 Thread Tom Lane
Robert Haas  writes:
> On Mon, May 29, 2017 at 3:45 PM, Tom Lane  wrote:
>> I wonder what the overhead is of using -fPIC when -fpic would be
>> sufficient.

> Do we have an idea how to measure the increased overhead?  Just from
> reading the description, I'm guessing that the increased cost would
> happen when the extension calls back into core, but maybe that doesn't
> happen often enough to worry about?

My gut feeling is that it'd be a pretty distributed cost, because every
internal cross-reference in the .so (for instance, loading the address of
a string literal) would involve a bit more overhead to support a wider
offset field.  An easy thing to look at would be how much the code expands
by.  That might or might not be a good proxy for the runtime slowdown
percentage, but it seems like it ought to serve as a zero-order
approximation.

regards, tom lane




Re: [HACKERS] Fix GetOldestXmin comment

2017-05-30 Thread Amit Kapila
On Tue, May 30, 2017 at 6:32 AM, Masahiko Sawada  wrote:
> Hi,
>
> While reading the source code, I realized that the comment for GetOldestXmin mentions:
>
>   * if rel = NULL and there are no transactions running in the current
>   * database, GetOldestXmin() returns latestCompletedXid.
>
> However, in that case, if I understand correctly, GetOldestXmin()
> actually returns latestCompletedXid + 1, as follows:
>

Isn't there another gotcha in the above part of the comment: shouldn't it
say rel != NULL?  AFAICS, when rel is NULL, it considers all databases,
not only the current database.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] [PATCH] relocation truncated to fit: citus build failure on s390x

2017-05-30 Thread Robert Haas
On Mon, May 29, 2017 at 3:45 PM, Tom Lane  wrote:
> I wonder what the overhead is of using -fPIC when -fpic would be
> sufficient.  Whatever it is, the proposed patch imposes it on every
> shlib or extension, to accommodate one single extension that isn't
> even one we ship.
>
> Maybe this is small enough to not be something we need to worry about,
> but I'm wondering if we should ask citus and other large .so's to set
> some additional make flag that would cue usage of -fPIC over -fpic.

Do we have an idea how to measure the increased overhead?  Just from
reading the description, I'm guessing that the increased cost would
happen when the extension calls back into core, but maybe that doesn't
happen often enough to worry about?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Walsender timeouts and large transactions

2017-05-30 Thread Petr Jelinek
On 30/05/17 11:02, Kyotaro HORIGUCHI wrote:
> At Thu, 25 May 2017 17:52:50 +0200, Petr Jelinek 
>  wrote in 
> 
>> Hi,
>>
>> We have had issue with walsender timeout when used with logical decoding
>> and the transaction is taking long time to be decoded (because it
>> contains many changes)
>>
>> I was looking today at the walsender code and realized that it's because
>> if the network and downstream are fast enough, we'll always take fast
>> path in WalSndWriteData which does not do reply or keepalive processing
>> and is only reached once the transaction has finished by other code. So
>> paradoxically we die of timeout because everything was fast enough to
>> never fall back to slow code path.
>>
>> I propose we only use fast path if the last processed reply is not older
>> than half of walsender timeout, if it is then we'll force the slow code
>> path to process the replies again. This is similar logic that we use to
>> determine whether to send a keepalive message. I also added a CHECK_INTERRUPTS
>> call to the fast code path because otherwise walsender might ignore them for
>> too long on large transactions.
>>
>> Thoughts?
> 
> + TimestampTz now = GetCurrentTimestamp();
> 
> I think it is not recommended to read the current time too
> frequently, especially within a loop that hates slowness. (I
> suppose that a loop that can fill up a send queue falls into that

Yeah that was my main worry for the patch as well, although given that
the loop does tuple processing it might not be very noticeable.

> category.)  If you don't mind a certain amount of additional
> complexity for eliminating the possible slowdown by the check,
> timeout would be usable. The attached patch does almost the same
> thing as your patch, but without the busy-time check.
> 
> What do you think about this?
> 

I think we could do it that way.

> # I saw that SIGQUIT doesn't work for an active publisher, which I
> # think I mentioned in another thread.

Ah, I missed that email I guess, but that's the missing CHECK_INTERRUPTS()
in the fast path, which btw your updated patch is missing as well.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [COMMITTERS] Re: pgsql: Code review focused on new node types added by partitioning supp

2017-05-30 Thread Tom Lane
Robert Haas  writes:
>> I'm not really for doing it that way, but I'm willing to apply the fix
>> if there's consensus for your position.  Anybody else have an opinion?

> +1 from me, too.  I don't see that there's enough advantage in
> avoiding a catversion bump to justify leaving this footgun behind.

The consensus seems pretty clear.  I'll make it so.

regards, tom lane




Re: [HACKERS] Proposal : For Auto-Prewarm.

2017-05-30 Thread Amit Kapila
On Tue, May 30, 2017 at 12:36 PM, Konstantin Knizhnik
 wrote:
> On 27.10.2016 14:39, Mithun Cy wrote:
>
>
> I wonder if you have considered parallel prewarming of a table?
> Right now, whether with pg_prewarm or with pg_autoprewarm, preloading a
> table's data is performed by one backend.
> That certainly makes sense if there is just one HDD and we want to minimize
> the impact of pg_prewarm on normal DBMS activity.
> But sometimes we need to load data into memory as soon as possible. And
> modern systems have a larger number of CPU cores, and RAID devices make
> it possible to efficiently load data in parallel.
>
> I have asked this question in the context of my CFS (compressed file system)
> for Postgres. The customer's complaint was that there are 64 cores on his
> system, but when he is building an index, decompression of heap data is
> performed by only one core. This is why I thought about prewarm... (parallel
> index construction is a separate story...)
>
> pg_prewarm makes it possible to specify a range of blocks, so, in principle,
> it is possible to manually preload a table in parallel, by spawning
> pg_prewarm with different subranges in several backends. But it is
> definitely not a user-friendly approach.
> And as far as I understand, pg_autoprewarm has all the necessary
> infrastructure to do a parallel load. We just need to spawn more than one
> background worker and specify a separate block range for each worker.
>
> Do you think that such functionality (parallel autoprewarm) can be useful
> and be easily added?
>

I think parallel load functionality can be useful for a few cases, like
when the system has multiple I/O channels.  I think doing it in
parallel might need some additional infrastructure to manage the
workers, depending on how we decide to parallelize: whether we allow
each worker to pick one block and load it, or specify a range of
blocks for each worker.  Each way has its own pros and cons.  It
seems like even if we want to add such an option to *prewarm
functionality, it should be added as a separate patch, as it has its
own set of problems that need to be solved.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] GSoC 2017: Foreign Key Arrays

2017-05-30 Thread Alexander Korotkov
Hi, Mark!

On Tue, May 30, 2017 at 2:18 AM, Mark Rofail  wrote:

> rhaas=# select oid, * from pg_opfamily where opfmethod = 2742;
>>  oid  | opfmethod |opfname | opfnamespace | opfowner
>> --+---++--+--
>>  2745 |  2742 | array_ops  |   11 |   10
>>  3659 |  2742 | tsvector_ops   |   11 |   10
>>  4036 |  2742 | jsonb_ops  |   11 |   10
>>  4037 |  2742 | jsonb_path_ops |   11 |   10
>> (4 rows)
>
> I am particularly interested in array_ops but I have failed to locate the
> code behind it. Where is it reflected in the source code?
>

Let's look at what a particular opclass consists of.  Besides records in
pg_opfamily, it also contains records in pg_opclass, pg_amproc, and pg_amop.

=# select * from pg_opclass where opcfamily = 2745;
 opcmethod |  opcname  | opcnamespace | opcowner | opcfamily | opcintype | opcdefault | opckeytype
-----------+-----------+--------------+----------+-----------+-----------+------------+------------
      2742 | array_ops |           11 |       10 |      2745 |      2277 | t          |       2283
(1 row)

=# select * from pg_amproc where amprocfamily = 2745;
 amprocfamily | amproclefttype | amprocrighttype | amprocnum |           amproc
--------------+----------------+-----------------+-----------+----------------------------
         2745 |           2277 |            2277 |         2 | pg_catalog.ginarrayextract
         2745 |           2277 |            2277 |         3 | ginqueryarrayextract
         2745 |           2277 |            2277 |         4 | ginarrayconsistent
         2745 |           2277 |            2277 |         6 | ginarraytriconsistent
(4 rows)

=# select * from pg_amop where amopfamily = 2745;
 amopfamily | amoplefttype | amoprighttype | amopstrategy | amoppurpose | amopopr | amopmethod | amopsortfamily
------------+--------------+---------------+--------------+-------------+---------+------------+----------------
       2745 |         2277 |          2277 |            1 | s           |    2750 |       2742 |              0
       2745 |         2277 |          2277 |            2 | s           |    2751 |       2742 |              0
       2745 |         2277 |          2277 |            3 | s           |    2752 |       2742 |              0
       2745 |         2277 |          2277 |            4 | s           |    1070 |       2742 |              0
(4 rows)

These system catalog records are defined in special headers in the source
code:
src/include/catalog/pg_amop.h
src/include/catalog/pg_amproc.h
src/include/catalog/pg_opclass.h
src/include/catalog/pg_opfamily.h
These records are written to system catalog during bootstrap process (see
src/backend/catalog/README).

As you can see, the pg_amproc records refer to some procedures.  Those
procedures are actually the majority of the source code behind the opclass,
and they are defined in src/backend/access/gin/ginarrayproc.c.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] Alter subscription..SET - NOTICE message is coming for table which is already removed

2017-05-30 Thread Masahiko Sawada
On Tue, May 30, 2017 at 10:12 AM, Masahiko Sawada  wrote:
> On Thu, May 25, 2017 at 8:15 PM, tushar  wrote:
>> On 05/25/2017 04:40 PM, Masahiko Sawada wrote:
>>>
>>> I think you did ALTER SUBSCRIPTION while table sync for 100 tables is
>>> running, right?
>>
>> Yes, I didn't wait too long while executing the commands.
>>
>
> I think it's better to add this to open items so that it doesn't get
> missed. Barring any objections I'll add it.
>

Added.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-30 Thread Ashutosh Bapat
On Tue, May 30, 2017 at 1:08 PM, Jeevan Ladhe
 wrote:
> Hi,
>
> I have rebased the patch on the latest commit.
> PFA.
>

Thanks for rebasing the patch. Here are some review comments.
+/*
+ * In case of default partition, just note the index, we do not
+ * add this to non_null_values list.
+ */
We may want to rephrase it like
"Note the index of the partition bound spec for the default partition. There's
no datum to add to the list of non-null datums for this partition."

/* Assign mapping index for default partition. */
"mapping index" should be "mapped index". May be we want to use "the" before
default partition everywhere, there's only one specific default partition.

Assert(default_index >= 0 &&
   mapping[default_index] == -1);
Needs some explanation for asserting mapping[default_index] == -1. Since
default partition accepts any non-specified value, it should not get a mapped
index while assigning those for non-null datums.

+ * Currently range partition do not have default partition
May be rephrased as "As of now, we do not support default range partition."

+ * ArrayExpr, which would return an negated expression for default
a negated instead of an negated.

+cur_index = -1;
 /*
- * A null partition key is only acceptable if null-accepting list
- * partition exists.
+ * A null partition key is acceptable if null-accepting list partition
+ * or a default partition exists. Check if there exists a null
+ * accepting partition, else this will be handled later by default
+ * partition if it exists.
  */
-cur_index = -1;
Why do we need to move assignment to cur_index before the comment.
The comment should probably change to "Handle NULL partition key here
if there's a
null-accepting list partition. Else it will routed to a default partition if
one exists."

+-- attaching default partition overlaps if a default partition already exists
+ERROR:  partition "part_def2" would overlap partition "part_def1"
Saying a default partition overlaps is misleading here. A default partition is
not expected to overlap with anything. It's expected to "adjust" with the rest
of the partitions. It can "conflict" with another default partition. So the
right error message here is "a default partition "part_def1" already exists."

+CREATE TABLE part_def1 PARTITION OF list_parted DEFAULT;
+CREATE TABLE part_def2 (LIKE part_1 INCLUDING CONSTRAINTS);
+ALTER TABLE list_parted ATTACH PARTITION part_def2 DEFAULT;
May be you want to name part_def1 as def_part and part_def2 as fail_def_part to
be consistent with other names in the file. May be you want to test to
consecutive CREATE TABLE ... DEFAULT.

+ALTER TABLE list_parted2 ATTACH PARTITION part_3 FOR VALUES IN (11);
+ERROR:  new default partition constraint is violated by some row
+DETAIL:  Violating row contains (11, z).
The error message seems to be misleading. The default partition is not new. May
be we should say, "default partition contains rows that conflict with the
partition bounds of "part_3"". I think we should use a better word instead of
"conflict", but I am not able to find one right now.

+-- check that leaf partitons of default partition are scanned when
s/partitons/partitions/

-ALTER TABLE part_5 ADD CONSTRAINT check_a CHECK (a IN (5)), ALTER a
SET NOT NULL;
-ALTER TABLE list_parted2 ATTACH PARTITION part_5 FOR VALUES IN (5);
+ALTER TABLE part_5 ADD CONSTRAINT check_a CHECK (a IN (5, 55)), ALTER
a SET NOT NULL;
+ALTER TABLE list_parted2 ATTACH PARTITION part_5 FOR VALUES IN (5, 55);
Why do we want to change partition bounds of this one? The test is for children
of part_5 right?

+drop table part_default;
I think this is premature drop. Down the file there's a SELECT from
list_parted, which won't list the rows inserted to the default partition and we
will miss to check whether the tuples were routed to the right partition or
not.

+update list_part1 set a = 'c' where a = 'a';
+ERROR:  new row for relation "list_part1" violates partition constraint
+DETAIL:  Failing row contains (c, 1).
Why do we need this test here? It's not dealing with the default partition and
partition row movement is not in there. So the updated row may not move to the
default partition, even if it's there.

This isn't a complete review. I will continue to review this patch further.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] Proposal : For Auto-Prewarm.

2017-05-30 Thread Mithun Cy
On Tue, May 30, 2017 at 12:36 PM, Konstantin Knizhnik
 wrote:
> On 27.10.2016 14:39, Mithun Cy wrote:
> And as far as I understand, pg_autoprewarm has all the necessary
> infrastructure to do a parallel load. We just need to spawn more than one
> background worker and specify a separate block range for each worker.
>
> Do you think that such functionality (parallel autoprewarm) can be useful
> and be easily added?

I have not put any attention into making the autoprewarm parallel. But
as per the current state of the code, making the subworkers load the
blocks in parallel should be possible, and I think it could be easily
added. Probably we need a configurable parameter to restrict the max
number of parallel prewarm sub-workers. Since I have not thought much
about this, I might not be aware of all of the difficulties around it.

-- 
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Proposal : For Auto-Prewarm.

2017-05-30 Thread Mithun Cy
On Tue, May 30, 2017 at 10:16 AM, Mithun Cy  wrote:
> Thanks Robert,

Sorry, there was one more mistake (a typo) in dump_now(): instead of
using pfree I used free. Corrected the same in the new patch v10.

-- 
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


autoprewarm_10.patch
Description: Binary data



Re: [HACKERS] [COMMITTERS] Re: pgsql: Code review focused on new node types added by partitioning supp

2017-05-30 Thread Robert Haas
On Tue, May 30, 2017 at 5:26 AM, Magnus Hagander  wrote:
>> > I'm not really for doing it that way, but I'm willing to apply the fix
>> > if there's consensus for your position.  Anybody else have an opinion?
>>
>> I tend to agree with Noah on this one.
>
> +1

+1 from me, too.  I don't see that there's enough advantage in
avoiding a catversion bump to justify leaving this footgun behind.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Re: transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)

2017-05-30 Thread Robert Haas
On Mon, May 29, 2017 at 9:34 PM, Noah Misch  wrote:
> This PostgreSQL 10 open item is past due for your status update.  Kindly send
> a status update within 24 hours, and include a date for your subsequent status
> update.  Refer to the policy on open item ownership:
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

I spoke with Kevin about this at PGCon and asked him to have a look at
it.  He agreed to do so, but did not specify a time frame, which seems
important.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pg_resetwal is broken if run from v10 against older version of PG data directory

2017-05-30 Thread Robert Haas
On Tue, May 30, 2017 at 4:24 AM, Simon Riggs  wrote:
> Just check the name of the directory so that pg_resetwal will refuse
> to run against pg_xlog directory

That's a strictly weaker integrity check than what Tom already
committed.  That only distinguishes pre-10 from post-10, whereas Tom
linked it to the specific major version, which is better.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE

2017-05-30 Thread Robert Haas
On Tue, May 30, 2017 at 6:50 AM, Ashutosh Bapat
 wrote:
> Increasing that number would require increased DSM which may not be
> available. Also, I don't see any analysis as to why 6553600 was chosen.
> Is it optimal? Does that work for all kinds of workloads?

Picky, picky.  The point is that Rafia has discovered that a large
increase can sometimes significantly improve performance.  I don't
think she's necessarily proposing that (or anything else) as a final
value that we should definitely use, just getting the conversation
started.

I did a little bit of brief experimentation on this same topic a long
time ago and didn't see an improvement from boosting the queue size
beyond 64k but Rafia is testing Gather rather than Gather Merge and,
as I say, my test was very brief.  I think it would be a good idea to
try to get a complete picture here.  Does this help on any query that
returns many tuples through the Gather?  Only the ones that use Gather
Merge?  Some queries but not others with no obvious pattern?  Only
this query?

Blindly adding a GUC because we found one query that would be faster
with a different value is not the right solution.   If we don't even
know why a larger value is needed here and (maybe) not elsewhere, then
how will any user possibly know how to tune the GUC?  And do we really
want the user to have to keep adjusting a GUC before each query to get
maximum performance?  I think we need to understand the whole picture
here, and then decide what to do.  Ideally this would auto-tune, but
we can't write code for that without a more complete picture of the
behavior.

BTW, there are a couple of reasons I originally picked 64k here.  One
is that making it smaller was very noticeably terrible in my testing,
while making it bigger didn't help much.  The other is that I figured
64k was small enough that nobody would care about the memory
utilization.  I'm not sure we can assume the same thing if we make
this bigger.  It's probably fine to use a 6.4M tuple queue for each
worker if work_mem is set to something big, but maybe not if work_mem
is set to the default of 4MB.
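To make that memory tradeoff concrete, here is a back-of-the-envelope sketch in standalone C (not PostgreSQL source; queue_dsm_bytes is an illustrative helper, though the 65536 default does match the PARALLEL_TUPLE_QUEUE_SIZE constant and each worker does get its own queue in the DSM segment):

```c
/* Default queue size: PARALLEL_TUPLE_QUEUE_SIZE in execParallel.c. */
#define DEFAULT_QUEUE_SIZE   65536L
/* Experimental value tested in this thread (100x the default). */
#define PATCHED_QUEUE_SIZE 6553600L

/*
 * Each worker gets its own tuple queue in the DSM segment, so the
 * tuple-queue portion of the segment grows linearly with the number
 * of workers.
 */
long
queue_dsm_bytes(int nworkers, long queue_size)
{
    return nworkers * queue_size;
}

/*
 * With the 4 workers launched in the TPC-H q12 plans posted here:
 *   queue_dsm_bytes(4, DEFAULT_QUEUE_SIZE)  ->   262144 bytes (256 kB)
 *   queue_dsm_bytes(4, PATCHED_QUEUE_SIZE)  -> 26214400 bytes (25 MB)
 */
```

So at 4 workers the patched value asks for 25 MB of queue space per Gather versus 256 kB today, well above the default work_mem of 4MB, which is exactly the concern about no longer being able to assume the memory is ignorable.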

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Adding support for Default partition in partitioning

2017-05-30 Thread Jeevan Ladhe
Hi,

I have fixed the issue related to default partition constraints not
getting updated after detaching a partition.

PFA.

Regards,
Jeevan Ladhe

On Tue, May 30, 2017 at 1:08 PM, Jeevan Ladhe  wrote:

> Hi,
>
> I have rebased the patch on the latest commit.
> PFA.
>
> There exists one issue reported by Rajkumar[1] off-line as following, where
> describing the default partition after deleting null partition, does not
> show
> updated constraints. I am working on fixing this issue.
>
> create table t1 (c1 int) partition by list (c1);
> create table t11 partition of t1 for values in (1,2);
> create table t12 partition of t1 default;
> create table t13 partition of t1 for values in (10,11);
> create table t14 partition of t1 for values in (null);
>
> postgres=# \d+ t12
> Table "public.t12"
>  Column |  Type   | Collation | Nullable | Default | Storage | Stats
> target | Description
> +-+---+--+-+
> -+--+-
>  c1 | integer |   |  | | plain   |
>  |
> Partition of: t1 DEFAULT
> Partition constraint: ((c1 IS NOT NULL) AND (c1 <> ALL (ARRAY[1, 2, 10,
> 11])))
>
> postgres=# alter table t1 detach partition t14;
> ALTER TABLE
> postgres=# \d+ t12
> Table "public.t12"
>  Column |  Type   | Collation | Nullable | Default | Storage | Stats
> target | Description
> +-+---+--+-+
> -+--+-
>  c1 | integer |   |  | | plain   |
>  |
> Partition of: t1 DEFAULT
> Partition constraint: ((c1 IS NOT NULL) AND (c1 <> ALL (ARRAY[1, 2, 10,
> 11])))
>
> postgres=# insert into t1 values(null);
> INSERT 0 1
>
> Note that the parent correctly allows the nulls to be inserted.
>
> [1] rajkumar.raghuwan...@enterprisedb.com
>
> Regards,
> Jeevan Ladhe
>
> On Tue, May 30, 2017 at 10:59 AM, Beena Emerson 
> wrote:
>
>> On Mon, May 29, 2017 at 9:33 PM, Jeevan Ladhe
>>  wrote:
>> > Hi,
>> >
>> > I have rebased the patch on latest commit with few cosmetic changes.
>> >
>> > The patch fix_listdatums_get_qual_for_list_v3.patch [1]  needs to be
>> applied
>> > before applying this patch.
>> >
>> > [1] http://www.mail-archive.com/pgsql-hackers@postgresql.org/msg
>> 315490.html
>> >
>>
>>
>> This needs a rebase again.
>>
>> --
>>
>> Beena Emerson
>>
>> EnterpriseDB: http://www.enterprisedb.com
>> The Enterprise PostgreSQL Company
>>
>
>


default_partition_v17.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Use of non-restart-safe storage by temp_tablespaces

2017-05-30 Thread Christoph Berg
Re: Tom Lane 2017-05-29 <28291.1496087...@sss.pgh.pa.us>
> Andres Freund  writes:
> > On May 29, 2017 12:15:37 PM PDT, Alvaro Herrera  
> > wrote:
> >> I think it'd be smart to support the use case directly, because there's
> >> interest in it being actually supported (unlike the status quo).
> >> Something like restoring the tablespace to the empty state on boot, if
> >> it's known to need it.
> 
> > Has the danger of making recovery harder after a restart where somebody 
> > forgot to mount some subdirectory ...
> 
> Or even worse, the mount happens after PG starts (and creates directories
> on the root volume, not knowing they should go onto the mount instead).
> 
> I'm too lazy to search the archives right now, but there was some case
> years ago where somebody destroyed their database via an ill-thought-out
> combination of automatic-initdb-if-$PGDATA-isn't-there and slow mounting.
> We'd have to be very sure that any auto-directory-creation behavior didn't
> have a potential for that.  Perhaps doing it only for temp tablespaces
> alleviates some of the risk, but it still seems pretty scary.

stats_temp_directory has the same problem. In that case, the risk of
breaking something by calling mkdir() instead of aborting startup
seems pretty low to me.
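The mkdir()-instead-of-aborting behavior being weighed here is essentially this pattern (a standalone sketch, not PostgreSQL code; ensure_directory and its error handling are illustrative):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

/*
 * Create a directory if it does not already exist.  Returns 0 when the
 * directory exists afterwards (freshly created or already present),
 * -1 on any other error.  A server doing this at startup would likely
 * report a fatal error instead of returning -1, or restrict the
 * behavior to disposable data such as temp tablespaces.
 */
int
ensure_directory(const char *path)
{
    if (mkdir(path, S_IRWXU) == 0)
        return 0;               /* freshly created */
    if (errno == EEXIST)
        return 0;               /* already there; nothing to do */
    fprintf(stderr, "could not create \"%s\": %s\n", path, strerror(errno));
    return -1;
}
```

The danger Tom describes is precisely that this succeeds on the root volume before the real filesystem is mounted, so any auto-creation would have to be limited to data that can be safely thrown away.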

Christoph


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE

2017-05-30 Thread Ashutosh Bapat
On Tue, May 30, 2017 at 5:28 AM, Rafia Sabih
 wrote:
> Hello everybody,
>
> Here is a thing I observed in my recent experimentation: on changing
> the value of PARALLEL_TUPLE_QUEUE_SIZE to 6553600, the performance of
> a TPC-H query improved by more than 50%.

How many tuples are being gathered? This could happen if the workers
are waiting for the leader to make space in the queue after it's
filled. By increasing the queue size we might be reducing the waiting
time for the workers. In that case, it may be better to check why the
leader is not pulling rows faster. How does the performance vary with
different values of PARALLEL_TUPLE_QUEUE_SIZE?

>
> Specifically, with this change, q12 completes in 14 seconds which was
> taking 45 seconds on head. There wasn't any change in the plan
> structure, just the time at gather-merge reduced which gave this
> improvement.
>
> This clearly says that the current value of PARALLEL_TUPLE_QUEUE_SIZE
> is not the best one for all queries; rather, some modification to it
> is very likely to improve performance significantly. One way to do
> this is to expose this parameter as another GUC, just like
> min_parallel_table_scan_size, etc.

GUC may help.

>
> The attached .txt file gives the plans at head and with this patch;
> additionally, a patch setting PARALLEL_TUPLE_QUEUE_SIZE to 6553600 is
> attached too.

Increasing that number would require increased DSM, which may not be
available. Also, I don't see any analysis as to why 6553600 was chosen.
Is it optimal? Does it work for all kinds of workloads?

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench more operators & functions

2017-05-30 Thread Fabien COELHO



> The patch looks ok,

Ok.

> but the main issue is missing regress tests.

Yes, I agree.

> I know so it is another patch. But it should be placed differently:
>
> 1. first patch - initial infrastructure for pgbench regress tests
> 2. this patch + related regress tests


Yep. I have no control over the order: committers are too few and
overbooked, and pgbench is not necessarily a priority, so I can only wait
for the non-regression test improvements to get committed some day before
updating/adding tests for the other patches in the queue (query result in
variable, utf8 variable names...).


--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE

2017-05-30 Thread Rafia Sabih
Hello everybody,

Here is a thing I observed in my recent experimentation: on changing
the value of PARALLEL_TUPLE_QUEUE_SIZE to 6553600, the performance of
a TPC-H query improved by more than 50%.

Specifically, with this change, q12 completes in 14 seconds which was
taking 45 seconds on head. There wasn't any change in the plan
structure, just the time at gather-merge reduced which gave this
improvement.

This clearly says that the current value of PARALLEL_TUPLE_QUEUE_SIZE
is not the best one for all queries; rather, some modification to it
is very likely to improve performance significantly. One way to do
this is to expose this parameter as another GUC, just like
min_parallel_table_scan_size, etc.
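If this did become a GUC, the change would be roughly an entry like the following in guc.c's ConfigureNamesInt[]. This is purely a sketch: the name, group, limits, and unit are illustrative, not a reviewed proposal.

```c
/* Hypothetical addition to ConfigureNamesInt[] in guc.c */
{
    {"parallel_tuple_queue_size", PGC_USERSET, RESOURCES_ASYNCHRONOUS,
        gettext_noop("Sets the size of each parallel worker's tuple queue."),
        NULL,
        GUC_UNIT_KB
    },
    &parallel_tuple_queue_size,
    64, 16, MAX_KILOBYTES,      /* default 64kB, matching today's constant */
    NULL, NULL, NULL
},
```

The executor would then read the variable where it currently uses the PARALLEL_TUPLE_QUEUE_SIZE constant.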

The attached .txt file gives the plans at head and with this patch;
additionally, a patch setting PARALLEL_TUPLE_QUEUE_SIZE to 6553600 is
attached too.

Thoughts?
-- 
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/
Head:
 \i /data/rafia.sabih/dss/queries/12.sql
                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1001.19..391123.34 rows=1 width=27) (actual time=45511.755..45511.755 rows=1 loops=1)
   ->  GroupAggregate  (cost=1001.19..2731856.24 rows=7 width=27) (actual time=45511.753..45511.753 rows=1 loops=1)
         Group Key: lineitem.l_shipmode
         ->  Gather Merge  (cost=1001.19..2721491.60 rows=592261 width=27) (actual time=7.806..44794.334 rows=311095 loops=1)
               Workers Planned: 4
               Workers Launched: 4
               ->  Nested Loop  (cost=1.13..2649947.55 rows=148065 width=27) (actual time=0.342..9071.892 rows=62257 loops=5)
                     ->  Parallel Index Scan using idx_l_shipmode on lineitem  (cost=0.57..2543794.01 rows=148065 width=19) (actual time=0.284..7936.015 rows=62257 loops=5)
                           Index Cond: (l_shipmode = ANY ('{"REG AIR",RAIL}'::bpchar[]))
                           Filter: ((l_commitdate < l_receiptdate) AND (l_shipdate < l_commitdate) AND (l_receiptdate >= '1995-01-01'::date) AND (l_receiptdate < '1996-01-01 00:00:00'::timestamp without time zone))
                           Rows Removed by Filter: 3368075
                     ->  Index Scan using orders_pkey on orders  (cost=0.56..0.71 rows=1 width=20) (actual time=0.016..0.016 rows=1 loops=311285)
                           Index Cond: (o_orderkey = lineitem.l_orderkey)
 Planning time: 1.143 ms
 Execution time: 45522.278 ms
(15 rows)



PATCH:
TPC-H queries:
\i /data/rafia.sabih/dss/queries/12.sql
                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1001.19..391123.34 rows=1 width=27) (actual time=14427.289..14427.290 rows=1 loops=1)
   ->  GroupAggregate  (cost=1001.19..2731856.24 rows=7 width=27) (actual time=14427.287..14427.287 rows=1 loops=1)
         Group Key: lineitem.l_shipmode
         ->  Gather Merge  (cost=1001.19..2721491.60 rows=592261 width=27) (actual time=8.656..13469.925 rows=311095 loops=1)
               Workers Planned: 4
               Workers Launched: 4
               ->  Nested Loop  (cost=1.13..2649947.55 rows=148065 width=27) (actual time=0.418..14050.843 rows=67321 loops=5)
                     ->  Parallel Index Scan using idx_l_shipmode on lineitem  (cost=0.57..2543794.01 rows=148065 width=19) (actual time=0.354..12291.338 rows=67321 loops=5)
                           Index Cond: (l_shipmode = ANY ('{"REG AIR",RAIL}'::bpchar[]))
                           Filter: ((l_commitdate < l_receiptdate) AND (l_shipdate < l_commitdate) AND (l_receiptdate >= '1995-01-01'::date) AND (l_receiptdate < '1996-01-01 00:00:00'::timestamp without time zone))
                           Rows Removed by Filter: 3639282
                     ->  Index Scan using orders_pkey on orders  (cost=0.56..0.71 rows=1 width=20) (actual time=0.023..0.024 rows=1 loops=336606)
                           Index Cond: (o_orderkey = lineitem.l_orderkey)
 Planning time: 1.201 ms
 Execution time: 14569.550 ms
(15 rows)


change_parallel_tuple_q_size.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] Re: pgsql: Code review focused on new node types added by partitioning supp

2017-05-30 Thread Magnus Hagander
On Tue, May 30, 2017 at 4:41 AM, Stephen Frost  wrote:

> * Tom Lane (t...@sss.pgh.pa.us) wrote:
> > Noah Misch  writes:
> > > On Mon, May 29, 2017 at 03:20:41AM +, Tom Lane wrote:
> > >> Annotate the fact that somebody added location fields to
> PartitionBoundSpec
> > >> and PartitionRangeDatum but forgot to handle them in
> > >> outfuncs.c/readfuncs.c.  This is fairly harmless for production
> purposes
> > >> (since readfuncs.c would just substitute -1 anyway) but it's still
> bogus.
> > >> It's not worth forcing a post-beta1 initdb just to fix this, but if we
> > >> have another reason to force initdb before 10.0, we should go back and
> > >> clean this up.
> >
> > > +1 for immediately forcing initdb for this, getting it out of the
> way.  We're
> > > already unlikely to reach 10.0 without bumping catversion, but if we
> otherwise
> > > did, releasing 10.0 with a 10beta1 catversion would have negative
> value.
> >
> > I'm not really for doing it that way, but I'm willing to apply the fix
> > if there's consensus for your position.  Anybody else have an opinion?
>
> I tend to agree with Noah on this one.
>
>
>
+1


-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 

