Re: [HACKERS] pgsql: Add putenv support for msvcrt from Visual Studio 2013

2016-11-29 Thread Michael Paquier
On Tue, Nov 29, 2016 at 08:45:13PM +0100, Christian Ullrich wrote:
> * Michael Paquier wrote:
>
> > On Wed, Nov 16, 2016 at 12:45 PM, Christian Ullrich
> >  wrote:
> >> I also did a debug build with 1+2+4 that came in at 84 μs/iteration.
> >
> > Moved to next CF. Christian, perhaps this patch should have an extra
> > round of reviews?
>
> It is significantly different from the last version, so yes, of course.

Patches 0001 (Cosmetic fixes), 0002 (Remove unnecessary CloseHandle)
and 0003 (support for debug CRTs) look in good shape to me. 0004 (Fix
load race) is 0002 from the previous set, and this change makes sense
in itself.

With 0005 I am seeing a compilation failure: you need to move the
declarations into the _MSC_VER block at the beginning of the routine. It
would also be nice to mention in a code comment what Noah pointed out
upthread: if a CRT loads while pgwin32_putenv() is
executing, the newly-loaded CRT will always have the latest value.

Based on that, 0006 will need a rebase; nothing huge though.

Removing the caching shows, per the measurements upthread, a 1us
regression compared to the full set, but that smells like noise. So
let's keep things simple then!
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-29 Thread Fabien COELHO


Hello Tom,

But really, \fi is something that could only be loved by a certain 
academic breed of hackers.


Ah ah :-) I'll take that as a compliment :-)

I'd go for \endif, probably. That still doesn't relate well with 
\unless, so +1 for \if, \ifnot, \else, and \endif.


Ok, that is a clear opinion.


I'm not really convinced that we need an \elseif at this point.
It could be added later if people find it compelling, but I'm
having a hard time believing that it's essential.


My experience with cpp is that #elif is quite useful in some cases, 
typically with alternate implementations depending on available 
dependencies, to avoid this:


 #ifdef HAVE_XXX
   ...
 #else
 #ifdef HAVE_YYY
   ...
 #else
 #ifdef HAVE ZZZ
   ...
 #else
 #error "oops!"
 #endif // HAVE_ZZZ
 #endif // HAVE_YYY
 #endif // HAVE_XXX
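For comparison, a hedged sketch of the flattened #elif form (here I define only HAVE_YYY by hand, standing in for whatever configure would detect):

```c
/* Minimal sketch: pretend configure detected only YYY. */
#define HAVE_YYY 1

static const char *
chosen_impl(void)
{
#if defined(HAVE_XXX)
    return "XXX";
#elif defined(HAVE_YYY)
    return "YYY";
#elif defined(HAVE_ZZZ)
    return "ZZZ";
#else
#error "oops!"
#endif
}
```

The whole chain stays at one nesting level, and the #error fallback still fires if no implementation is available.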

I think that some realistic use-case examples for psql would be great...

If the else-if is a common pattern then it should at least be in the 
design, even if not implemented right away. Also, ISTM that having else-if 
is not very compatible with multiplying \if* variants, because it leads to 
pretty ugly \el*if* forms which are quite hard to parse and understand, so 
there is an impact on the design.


--
Fabien.




Re: [HACKERS] Random number generation, take two

2016-11-29 Thread Michael Paquier
On Tue, Nov 29, 2016 at 10:02 PM, Heikki Linnakangas  wrote:
> Ok, here's my second attempt at refactoring random number generation.
> Previous attempt crashed and burned, see
> https://www.postgresql.org/message-id/e1bw3g3-0003st...@gemulon.postgresql.org.
> This addresses the issues pointed out in that thread.

Yeah. (Nit: I want this badly)

> Autoconf
> 
>
> * Configure chooses the implementation to use: OpenSSL, Windows native, or
> /dev/urandom, in that order. You can also force it, by specifying
> USE_OPENSSL_RANDOM=1, USE_WIN32_RANDOM=1 or USE_DEV_URANDOM=1 on the
> configure command line. That's useful for testing.

It's that or a configure option able to take an optional argument, but
that would conflict with --with-openssl, so your way looks better.

> * Remove the support for /dev/random (only support /dev/urandom). I don't
> think we have any platforms where /dev/urandom is not present, but
> /dev/random is. If we do, the buildfarm will tell us, but until then let's
> keep it simple.

Yes, I don't recall a modern platform having /dev/random but not
/dev/urandom, either from researching the matter a couple of weeks back
or looking now. Even the newest HP-UX boxes have it!

> Fallback implementation (with --disable-strong-random)
> ---
>
> In postmaster, the algorithm is similar to the existing PostmasterRandom()
> function. It's based on the built-in PRNG, seeded by postmaster and first
> backend start time. It's been rewritten to fit the rest of the code better,
> however.
>
> In backends, there is a new function, pg_backend_random(). It also uses the
> built-in PRNG, but the seed is shared between all backends, in shared
> memory. (Otherwise all backends would inherit the same seed from
> postmaster.)

Sounds fine to me.

> Phew, this has been way more complicated than it seemed at first. Thoughts?

One of the goals of this patch is to have a strong random function
available for the frontend as well, which is fine. But any build using
--disable-strong-random is not going to have a random function to rely
on. Disabling SCRAM for such builds is a possibility, because we assume
that anybody building with --disable-strong-random is aware of the
security risks, but that's not really appealing in terms of
portability. Another possibility would be an extra routine like
pg_frontend_random(), wrapping pg_strong_backend() and using a
combination of getpid + gettimeofday to generate a seed for a plain
random() call. That's what we were fighting against previously, though,
so my mind is telling me that just returning an error when the SCRAM
client code paths are used in a build with --disable-strong-random is
the way to go, because we want SCRAM to be fundamentally safe to use.
What do you think?
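For illustration only, the getpid + gettimeofday seeding idea could look like the sketch below. pg_frontend_random() is the hypothetical name from this mail, not existing code, and such a seed is deliberately weak — which is exactly the concern raised above:

```c
#include <stdbool.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

/*
 * Hypothetical weak fallback: seed a plain PRNG from pid and time of day.
 * Predictable by design -- only conceivable under --disable-strong-random
 * semantics, never for anything security-sensitive like SCRAM.
 */
static bool
pg_frontend_random(char *dst, int len)
{
    struct timeval tv;
    int         i;

    gettimeofday(&tv, NULL);
    srandom((unsigned int) getpid() ^
            (unsigned int) tv.tv_sec ^
            (unsigned int) tv.tv_usec);

    for (i = 0; i < len; i++)
        dst[i] = (char) (random() & 0xFF);
    return true;
}
```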

> pgcrypto
> 
>
> pgcrypto doesn't have the same requirements for "strongness" as cancel keys
> and MD5 salts have. pgcrypto uses random numbers for generating salts, too,
> which I think has similar requirements. But it also uses random numbers for
> generating encryption keys, which I believe ought to be harder to predict.
> If you compile with --disable-strong-random, do we want the encryption key
> generation routines to fail, or to return known-weak keys?
>
> This patch removes the Fortuna algorithm, that was used to generate fairly
> strong random numbers, if OpenSSL was not present. One option would be to
> keep that code as a fallback. I wanted to get rid of that, since it's only
> used on a few old platforms, but OTOH it's been around for a long time with
> little issues.
>
> As this patch stands, it removes Fortuna, and returns known-weak keys, but
> there's a good argument to be made for throwing an error instead.

IMO, raising an error would make users aware that they are playing
with fire... Now pademelon's owner may well have a different
opinion on the matter :p

This patch needs to initialize the variable "res" in
pad_eme_pkcs1_v15(); as it stands, it creates compilation warnings.

Documentation for --disable-strong-random needs to be added.

Cancel keys are not broken.

+   int32   MyCancelKey;
Wouldn't that be better as unsigned?

+bool
+pg_backend_random(char *dst, int len)
+{
+   int i;
+   char   *end = dst + len;
+
+   /* should not be called in postmaster */
+   Assert (IsUnderPostmaster || !IsPostmasterEnvironment);
+
+   LWLockAcquire(BackendRandomLock, LW_EXCLUSIVE);
Shouldn't an exclusive lock be taken only when the initialization
phase is called? When reading the value a shared lock would be fine.

Attached is a patch for MSVC, to apply on top of yours, that enables the
build for strong and weak random functions. Feel free to hack it as
need be; this base implementation works with the current code.
-- 
Michael


strong_random_msvc.patch
Description: invalid/octet-stream



Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-29 Thread Tom Lane
David Fetter  writes:
> On Tue, Nov 29, 2016 at 01:10:06PM +0100, Fabien COELHO wrote:
>> If there is a negative condition syntax, I would slightly prefer \ifnot to
>> \if_not or worse \unless. I would disaprove strongly of \unless because it
>> looses the clear symmetry with a closing \fi.

> I take it \sselnu is right out.

[ splorf... ]  But really, \fi is something that could only be loved
by a certain academic breed of hackers.  I'd go for \endif, probably.
That still doesn't relate well with \unless, so +1 for \if, \ifnot,
\else, and \endif.

I'm not really convinced that we need an \elseif at this point.
It could be added later if people find it compelling, but I'm
having a hard time believing that it's essential.

regards, tom lane




Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-29 Thread David Fetter
On Tue, Nov 29, 2016 at 01:10:06PM +0100, Fabien COELHO wrote:
> If there is a negative condition syntax, I would slightly prefer \ifnot to
> \if_not or worse \unless. I would disaprove strongly of \unless because it
> looses the clear symmetry with a closing \fi.

I take it \sselnu is right out.

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP

2016-11-29 Thread Michael Paquier
On Mon, Nov 28, 2016 at 2:01 PM, Michael Paquier
 wrote:
> On Thu, Nov 24, 2016 at 11:15 PM, Andreas Karlsson  wrote:
>> Never mind, I had not fetched the latest master. Once I had done so these
>> tests were both broken in my rebased branch and in the master branch without
>> any modifications. I guess I will have to bisect this once I get home.
>>
>> Info for myself or anyone else who feels like bisecting this: the last known
>> good commit is 67dc4ccbb2e1c27da823eced66d9217a5652cbb0.
>
> Actually, it is a bit stupid of me not to move on with this patch, as
> it is backend-only, and the commit causing the regression failures is
> 9a1d0af, which is a frontend-only change. So I have done a review of
> this patch after reverting 9a1d0af, and things are working quite
> well.
>
> Looking at the latest patch at code level, there is some refactoring
> to introduce initialize_context(). But it is actually not necessary
> (perhaps this is the remnant of a past version?) as be_tls_init() is
> its only caller. This refactoring makes the diffs hard to read, and it
> makes future back-patching more complicated, so I would suggest
> simplifying the patch as much as possible. You could for example use a
> goto block for the error code path to free the context with
> SSL_CTX_free(), and set up ssl_loaded_verify_locations once the CA is
> loaded. The same applies to initialize_ecdh().
>
> +   if (secure_initialize() != 0)
> +   ereport(FATAL,
> +   (errmsg("could not load ssl context")));
> +   LoadedSSL = true;
> In case of a failure, a LOG message would have been already generated,
> so this duplicates the information. Worse, if log_min_messages is set
> to a level higher than LOG, users *lose* information on what has
> happened. I think that secure_initialize() should have an argument to
> define elevel and that this routine should be in charge of generating
> an error message. Having it return a status code is necessary, but you
> could cast secure_initialize() with (void) to show that we don't care
> about the status code in this case. There is no need to care about
> freeing the new context when the FATAL code path is used as process
> would just shut down.
>
> config.sgml needs to be updated to reflect that the SSL parameters are
> updated at server reload (mentioned already upthread, just
> re-mentioning it to group all the comments in one place).

As this patch could really be simplified this way, I am marking it as
returned with feedback.
-- 
Michael




Re: [HACKERS] Parallel bitmap heap scan

2016-11-29 Thread Dilip Kumar
On Fri, Nov 25, 2016 at 6:55 PM, Dilip Kumar  wrote:
> I have changed the design to base it directly on DSA instead of using
> DHT.
> In the new version we no longer use DHT. Instead, I have made some
> changes in simplehash [1] so that it can accept an external allocator.
> In tidbitmap.c, I register allocator functions with simplehash, and
> those functions allocate memory directly from DSA.
>
> simplehash always uses one single chunk of memory (during expansion it
> copies from the old memory to the new memory), which keeps the
> remaining processing very simple for us.
>
> In tbm_begin_iterate, we no longer need to scan the complete hash
> table; instead we can just process the DSA memory and convert it into
> the page array and chunk array.
>
> I have tested the performance in my local machine and I observed that
> it's slightly better than older
> DHT based version (complete performance number will be published soon).
>
> Dependency on other patches:
> 1. dsa_area (dsa-v7.patch)
> https://www.postgresql.org/message-id/CAEepm%3D024p-MeAsDmG%3DR3%2BtR4EGhuGJs_%2BrjFKF0eRoSTmMJnA%40mail.gmail.com
>
> 2. Creating a DSA area to provide work space for parallel execution
> (dsa-area-for-executor-v3.patch)
> https://www.postgresql.org/message-id/CAEepm%3D0HmRefi1%2BxDJ99Gj5APHr8Qr05KZtAxrMj8b%2Bay3o6sA%40mail.gmail.com
>
> patch details
> 1. hash_support_alloc_free_v1.patch [1].
> 2. parallel-bitmap-heap-scan-v3.patch

I just realised that I sent my latest patch only to Andres, instead
of replying to all.
Forwarding the same mail to Hackers.

Performance reading with new patch..
TPCH-scale factor 10. work_mem 20MB, Power 4 socket machine

Query     Head    Patch   Improvement
Q4        4811     3290   1.5x
Q6       13136     6198   2.1x
Q14       8119     5057   1.6x
Q15      25652    20143   1.2x

EXPLAIN ANALYZE results are attached to the mail.

* I have also applied Andres' patch from the link below when taking
these numbers (both for head and for the patch).
https://www.postgresql.org/message-id/20161123083351.5vramz52nmdokhzz%40alap3.anarazel.de

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


hash_support_alloc_free_v1.patch
Description: Binary data


parallel-bitmap-heap-scan-v3.patch
Description: Binary data


analyze.tar.gz
Description: GNU Zip compressed data



Re: [HACKERS] patch: function xmltable

2016-11-29 Thread Pavel Stehule
2016-11-30 2:40 GMT+01:00 Craig Ringer :

> On 30 November 2016 at 05:36, Pavel Stehule wrote:
>
> > The problem is in unreserved keyword "PASSING" probably.
>
> Yeah, I think that's what I hit when trying to change it.
>
> Can't you just parenthesize the expression to use operators like ||
> etc? If so, not a big deal.
>
>
???





Re: [HACKERS] pg_recvlogical --endpos

2016-11-29 Thread Okano, Naoki

On Wednesday, November 30, 2016 10:34 AM Craig Ringer wrote:
>On 30 November 2016 at 09:18, Okano, Naoki wrote:
>>
>> On November 29, 2016 at 5:03 PM Craig Ringer wrote:
>>> Would it be better rephrased as "--endpos can only be used with --start" ?
>> OK. I think this phrase is better than the previous phrase.
>>
>>>> The patch should allow --endpos to work with --create-slot.
>>>
>>> How? It doesn't make sense with --create-slot .
>> This patch is updated to incorporate Alvaro's changes shown below, isn't it?
>> https://www.postgresql.org/message-id/20160503180658.GA59498%40alvherre.pgsql
>> I thought that the consent in community was taken with this specification...
>> I could not find any mention that it did not make sense with --create-slot.

> What would --endpos with --create-slot do?

Sorry, I was misunderstanding; you are right.
I have noticed that --endpos doesn't make sense unless it is used with --start.
I checked that --endpos works with --create-slot plus --start.
So, there is no problem with this patch.

Regards,
Okano Naoki
Fujitsu



Re: [HACKERS] Remove the comment on the countereffectiveness of large shared_buffers on Windows

2016-11-29 Thread Magnus Hagander
On Tuesday, November 22, 2016, Robert Haas  wrote:

> On Fri, Nov 18, 2016 at 3:23 PM, Peter Eisentraut
> wrote:
> > On 11/17/16 12:30 AM, Tsunakawa, Takayuki wrote:
> >> No, I'm not recommending a higher value, just removing the doubtful
> sentences about the 512MB upper limit.  The advantage is that eliminating this
> sentence will give users a chance to try the best setting.
> >
> > I think this is a good point.  The information is clearly
> > wrong/outdated.  We have no new better information, but we should remove
> > misleading outdated advice and let users find new advice.  Basically,
> > this just puts Windows on par with other platforms with regard to
> > shared_buffers tuning, doesn't it?
> >
> > I'm inclined to commit the original patch if there are no objections.
>
> +1.
>
>
+1. In light of the further data that's in, my earlier objection is
withdrawn :)



-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] GiST support for UUIDs

2016-11-29 Thread Chris Bandy
On Tue, Nov 29, 2016 at 1:13 PM, Tom Lane  wrote:
> Pushed with that change and some other mostly-cosmetic tweaking.

Thank you for addressing all those issues, Tom! I tested some
exclusion constraints that are interesting to me, and everything seems
to be working well.

-- Chris




Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-29 Thread Mithun Cy
On Wed, Nov 30, 2016 at 1:44 AM, Robert Haas  wrote:
 > On Tue, Nov 29, 2016 at 2:19 PM, Kuntal Ghosh wrote:
 >> I was doing some testing with the patch and I found some inconsistency
 >> in the error message.

 > Hmm, maybe the query buffer is getting cleared someplace in there.  We
 > might need to save/restore it.

Thanks, sending a query resets the errorMessage. Will fix the same.

*PQsendQuery() -> PQsendQueryStart() -> resetPQExpBuffer(&conn->errorMessage);*

Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] patch: function xmltable

2016-11-29 Thread Craig Ringer
On 30 November 2016 at 05:36, Pavel Stehule  wrote:

> The problem is in unreserved keyword "PASSING" probably.

Yeah, I think that's what I hit when trying to change it.

Can't you just parenthesize the expression to use operators like ||
etc? If so, not a big deal.




-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_recvlogical --endpos

2016-11-29 Thread Craig Ringer
On 30 November 2016 at 09:18, Okano, Naoki  wrote:
>
> On November 29, 2016 at 5:03 PM Craig Ringer wrote:
>> Would it be better rephrased as "--endpos can only be used with --start" ?
> OK. I think this phrase is better than the previous phrase.
>
>>> The patch should allow --endpos to work with --create-slot.
>>
>> How? It doesn't make sense with --create-slot .
> This patch is updated to incorporate Alvaro's changes shown below, isn't it?
> https://www.postgresql.org/message-id/20160503180658.GA59498%40alvherre.pgsql
> I thought that the consent in community was taken with this specification...
> I could not find any mention that it did not make sense with --create-slot.

What would --endpos with --create-slot do?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [PATCH] Add diff directives to gitattributes

2016-11-29 Thread Michael Paquier
On Wed, Nov 30, 2016 at 5:16 AM, Tom Lane  wrote:
> ilm...@ilmari.org (Dagfinn Ilmari Mannsåker) writes:
>> Attached is a patch which adds diff= directives to .gitattributes for C,
>> Perl and (X|SG)ML files.  This makes word diffs and the function
>> indicator in the diff chunk header more useful.
>
> Could you provide some examples of what that actually does to the output?

Per the git documentation [1], this is aimed at making the hunk
headers in a diff output more readable. For example, take this dummy
patch on a perl file:
diff --git a/src/bin/pg_rewind/t/001_basic.pl b/src/bin/pg_rewind/t/001_basic.pl
index 1764b17..729bc6d 100644
--- a/src/bin/pg_rewind/t/001_basic.pl
+++ b/src/bin/pg_rewind/t/001_basic.pl
@@ -5,6 +5,13 @@ use Test::More tests => 8;

 use RewindTest;

+sub run_test2
+{
+   my $test_mode = shift;
+
+   master_sql();
+}
+
 sub run_test
 {
my $test_mode = shift;

The point to focus on is the hunk header. This is what you get on HEAD:
@@ -5,6 +5,13 @@ use Test::More tests => 8;
But this is what you get with the patch:
@@ -5,6 +5,13 @@
So this makes the detection of a subroutine being changed by a diff
more intelligent.

With or without this patch, when I patch an existing subroutine I am
still getting this in both cases:
@@ -30,6 +37,10 @@ sub run_test

And as far as I can read from the docs, it is perfectly possible to
set this up at a global level with a dedicated section like [diff
"perl"]. In short, this is a decision about whether we keep relying on
GNU diff -p behavior or on what git decides is good. Honestly, I am -1
for enforcing that on everybody doing serious hacking.
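For reference, the sort of .gitattributes entries being discussed would presumably look like the fragment below. The exact file-to-driver mapping is my guess, not taken from the posted patch; cpp, perl and html are git's built-in diff drivers:

```
*.c     diff=cpp
*.h     diff=cpp
*.pl    diff=perl
*.pm    diff=perl
*.xml   diff=html
*.sgml  diff=html
```

Each driver supplies the regex git uses to pick the "function name" line shown in hunk headers for that file type.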

[1]: https://www.kernel.org/pub/software/scm/git/docs/gitattributes.html
-- 
Michael




Re: [HACKERS] pg_recvlogical --endpos

2016-11-29 Thread Okano, Naoki

On November 29, 2016 at 5:03 PM Craig Ringer wrote:
> Would it be better rephrased as "--endpos can only be used with --start" ?
OK. I think this phrase is better than the previous phrase. 

>> The patch should allow --endpos to work with --create-slot.
>
> How? It doesn't make sense with --create-slot .
This patch was updated to incorporate Alvaro's changes shown below, wasn't it?
https://www.postgresql.org/message-id/20160503180658.GA59498%40alvherre.pgsql
I thought that consensus had been reached in the community on this specification...
I could not find any mention that it did not make sense with --create-slot.
But if one exists, would you let me know?

Regards,
Okano Naoki
Fujitsu



Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-29 Thread Andrew Dunstan



On 11/29/2016 05:25 PM, David Steele wrote:
> On 11/29/16 5:08 PM, Andrew Dunstan wrote:
>> On 11/29/2016 03:07 PM, Fabien COELHO wrote:
>>> I cannot remember a language with elseif* variants, and I find them
>>> quite ugly, so from an aesthetical point of view I would prefer to
>>> avoid that... On the other hand having an "else if" capability makes
>>> sense (eg do something slightly different for various versions of pg),
>>> so that would suggest to stick to a simpler "if" without variants, if
>>> possible.
>>
>> FTR I *strongly* disagree with this. (And if you can't remember a
>> language that comes with them then you need to get out more. The Bourne
>> shell, where it's spelled "elif", and Ada are two obvious examples.)
>
> Not to mention PL/pgSQL and Perl.

Indeed.

cheers

andrew





Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-29 Thread David Steele
On 11/29/16 5:08 PM, Andrew Dunstan wrote:
> 
> On 11/29/2016 03:07 PM, Fabien COELHO wrote:
>>
>>
>> I cannot remember a language with elseif* variants, and I find them
>> quite ugly, so from an aethetical point of view I would prefer to
>> avoid that... On the other hand having an "else if" capability makes
>> sense (eg do something slightly different for various versions of pg),
>> so that would suggest to stick to a simpler "if" without variants, if
>> possible.
> 
> FTR I *strongly* disagree with this. (And if you can't remember a
> language that comes with them then you need to get out more. The Bourne
> shell, where it's spelled "elif", and Ada are two obvious examples.)

Not to mention PL/pgSQL and Perl.

-- 
-David
da...@pgmasters.net




Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-29 Thread Andrew Dunstan



On 11/29/2016 03:07 PM, Fabien COELHO wrote:
> I cannot remember a language with elseif* variants, and I find them
> quite ugly, so from an aesthetical point of view I would prefer to
> avoid that... On the other hand having an "else if" capability makes
> sense (eg do something slightly different for various versions of pg),
> so that would suggest to stick to a simpler "if" without variants, if
> possible.

FTR I *strongly* disagree with this. (And if you can't remember a
language that comes with them then you need to get out more. The Bourne
shell, where it's spelled "elif", and Ada are two obvious examples.)


cheers

andrew




Re: [HACKERS] amcheck (B-Tree integrity checking tool)

2016-11-29 Thread Peter Geoghegan
On Wed, Nov 23, 2016 at 2:47 PM, Thomas Munro
 wrote:
> Actually I meant that at T2 the btree is exactly same as it was at T1,
> but the order of the values has changed according to the comparator
> (for example, because a collation definition changed when you updated
> your operating system), so that the btree is now corrupted.  Based on
> Goetz Graefe's paper I was wondering if amcheck would be unable to
> detect this due to the cousin problem.

I love that paper (it's more like a book, really). I'm glad that at
least one other person in the community has read some of it, because I
think it has a lot of clues as to how we can begin to address some of
our problems around bloat without novel redesign. Most of the
techniques are quite complementary.

I wrote a prototype of suffix truncation for text in two days last
week. The regression tests pass, and the technique works well for
certain cases, particularly with many column indexes without
correlation (the index-only-scan case). Let's see if that becomes a
real patch.

> Thank you for your patient explanation!

Your questions were excellent, and so deserved my careful consideration.

> Please see my humble attempt
> to test this scenario and learn something about btrees in the process,
> attached.  amcheck does detect the violation of the high key
> invariant.

This is a nice pattern for testing out if the keyspace is sane, and
how things work. I might incorporate it into my own testing in the
future.

-- 
Peter Geoghegan




Re: [HACKERS] Exclude pg_largeobject form pg_dump

2016-11-29 Thread Guillaume Lelarge
2016-11-29 17:12 GMT+01:00 Stephen Frost :

> Guillaume, Amul,
>
> * Amul Sul (sula...@gmail.com) wrote:
> > The following review has been posted through the commitfest application:
> > make installcheck-world:  tested, passed
> > Implements feature:   tested, passed
> > Spec compliant:   not tested
> > Documentation:tested, passed
> >
> > Patch v6 looks good to me, passing to committer.
> >
> > Thanks !
> >
> > The new status of this patch is: Ready for Committer
>
> I've pushed this patch.  In the future if you are working with the
> pg_dump TAP tests and add a new 'run', be sure to update the 'tests' to
> indicate which tests should 'like' or 'unlike' that run.  If you have
> any questions, feel free to ask.
>
>
Thank you. And thanks Amul for the review.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-29 Thread Corey Huinker
>
>
> Hmmm. Would you have an example use case that could not be done simply
> with the previous ifs? cpp did that and ended up with a single if in the
> end.


I think this is what you're asking for...

$ cat not_defined.sql
select :'foo';

$ psql postgres --no-psqlrc -f not_defined.sql --set foo=bar
 ?column?
----------
 bar
(1 row)

$ psql postgres --no-psqlrc -f not_defined.sql
psql:not_defined.sql:3: ERROR:  syntax error at or near ":"
LINE 1: select :'foo';
   ^


Now, if we instead added a way for psql to test whether or not a psql var
was defined and set that boolean as ANOTHER variable, then we could avoid
\ifdef and \ifndef.
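As an aside, one long-known trick for detecting whether a variable is defined (a sketch only, not part of any patch here) exploits the fact that interpolating an undefined variable leaves the literal text in place:

```
-- If :foo was never --set, the interpolation below leaves the literal
-- text ':foo' in place, which we can then compare against.
\set probe :foo
SELECT :'probe' = ':foo' AS foo_undefined \gset
-- foo_undefined is now 't' or 'f'; a conditional could branch on it.
```

That gives exactly the "set that boolean as ANOTHER variable" behavior without any new backslash command.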


>
> For consistency, the possible sources of inspiration for a syntax with an
> explicit end marker are:
>
>  - PL/pgSQL: if / then / elsif / else / endif
>  - cpp: if / elif / else / endif
>  - sh: if / then / elif / else / fi
>
> Now "then" is useless in a line oriented syntax, for which the closest
> example above is cpp, which does not have it. I think that we should stick
> to one of these.
>
> I like the shell syntax (without then), but given the line orientation
> maybe it would make sense to use the cpp version, close to what you are
> proposing.
>

I think we should use PL/pgSQL as our inspiration, though there's no need
for the "then" because psql commands end at the end of the line... which
makes it identical to the cpp version.
But if we can get this thing done, I really don't care which we use.


> I cannot remember a language with elseif* variants, and I find them quite
> ugly, so from an aethetical point of view I would prefer to avoid that...
> On the other hand having an "else if" capability makes sense (eg do
> something slightly different for various versions of pg), so that would
> suggest to stick to a simpler "if" without variants, if possible.
>

We need to keep things easy to parse. Earlier someone said no psql command
should ever have more than 2 parameters, and generally only one. Increasing
the number of commands allows us to avoid multi-parameter commands. So it's
a trade-off: we have more, simpler commands, or fewer, more complicated
ones.

\if [not] [defined] [<string>]
\elsif [not] [defined] [<string>]

is problematic if the string is ever "not" or "defined". If someone can show me
a way around that, I'm game.


>
> Then seems like we need an if-state-stack to handle nesting. [...]
>>
>
> Yes, a stack or recursion is needed for implementing nesting.
>
> States: None, If-Then, If-Skip, Else-Then, Else-Skip.
>>
>
> With an "else if" construct you probably need some more states: You have
> to know whether you already executed a block in which case a subsequent
> condition is ignored, so there is a state "skip all to end" needed.
>

Right, we'd have to check every level of the stack for a skip-state, not a
big deal.
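A self-contained C sketch of that idea (names and states are illustrative only, not psql's eventual implementation): a command executes only when no enclosing level is in a skip state, which is exactly the "check every level of the stack" rule above.

```c
#include <stdbool.h>
#include <assert.h>

typedef enum IfState
{
    IFSTATE_TRUE,               /* current \if branch is active */
    IFSTATE_FALSE,              /* branch inactive; \else may activate */
    IFSTATE_IGNORED             /* a branch already ran; skip to \endif */
} IfState;

#define IF_STACK_MAX 32

typedef struct IfStack
{
    IfState states[IF_STACK_MAX];
    int     depth;
} IfStack;

static void
if_push(IfStack *st, bool cond)     /* \if */
{
    st->states[st->depth++] = cond ? IFSTATE_TRUE : IFSTATE_FALSE;
}

static void
if_pop(IfStack *st)                 /* \endif */
{
    st->depth--;
}

static void
if_else(IfStack *st)                /* \else: invert, unless a branch ran */
{
    IfState *top = &st->states[st->depth - 1];

    *top = (*top == IFSTATE_FALSE) ? IFSTATE_TRUE : IFSTATE_IGNORED;
}

/* Commands execute only if every level of the stack is active. */
static bool
if_active(const IfStack *st)
{
    for (int i = 0; i < st->depth; i++)
        if (st->states[i] != IFSTATE_TRUE)
            return false;
    return true;
}
```

An \elif would need only one more transition on the top of the stack (to IFSTATE_TRUE if nothing has run yet and the condition holds, otherwise to IFSTATE_IGNORED), which is the "skip all to end" state mentioned upthread.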


Re: [HACKERS] Contains and is contained by operators of inet datatypes

2016-11-29 Thread Tom Lane
Robert Haas  writes:
> On Mon, Nov 21, 2016 at 7:57 AM, Andreas Karlsson  wrote:
>> I like the patch because it means less operators to remember for me as a
>> PostgreSQL user. And at least for me inet is a rarely used type compared to
>> hstore, json and range types which all use @> and <@.

> I agree that it would be nice to make the choice of operator names
> more consistent.  I don't know if doing so will please more or fewer
> people than it annoys.

Given the lack of support for this idea from the rest of the community,
I think it's time to reject this patch and move on.  The inconsistency
is unfortunate but it does not seem worth the costs of changing.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Add diff directives to gitattributes

2016-11-29 Thread Tom Lane
ilm...@ilmari.org (Dagfinn Ilmari Mannsåker) writes:
> Attached is a patch which adds diff= directives to .gitattributes for C,
> Perl and (X|SG)ML files.  This makes word diffs and the function
> indicator in the diff chunk header more useful.

Could you provide some examples of what that actually does to the output?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-29 Thread Robert Haas
On Tue, Nov 29, 2016 at 3:14 PM, Robert Haas  wrote:
> On Tue, Nov 29, 2016 at 2:19 PM, Kuntal Ghosh
>  wrote:
>> On Thu, Nov 24, 2016 at 5:46 PM, Mithun Cy  
>> wrote:
>>> I have taken this suggestion now renamed target_server_type to
>>> target_session_attrs with possible 2 values "read-write", "any".
>>> Maybe we could expand to "readonly" and "prefer-readonly" in the next patch
>>> proposal. Attaching the patch for the same.
>> I was doing some testing with the patch and I found some inconsistency
>> in the error message.
>> I've a read-only server running on port 5433 and no server on 5436 and 5438.
>>
>> command: bin/psql
>> 'postgresql://localhost:5436,localhost:5433,localhost:5438/postgres?target_session_attrs=read-write'
>>
>> I get the following error message.
>>
>> psql: could not make a writable connection to server "localhost:5433"
>> could not connect to server: Connection refused
>> Is the server running on host "localhost" (::1) and accepting
>> TCP/IP connections on port 5438?
>> could not connect to server: Connection refused
>> Is the server running on host "localhost" (127.0.0.1) and accepting
>> TCP/IP connections on port 5438?
>>
>> It didn't show any error message for port 5436. But, if I modify the
>> connection string as following:
>>
>> command: bin/psql
>> 'postgresql://localhost:5433,localhost:5436,localhost:5438/postgres?target_session_attrs=read-write'
>>
>> I get the following error message:
>>
>> psql: could not make a writable connection to server "localhost:5433"
>> could not connect to server: Connection refused
>> Is the server running on host "localhost" (::1) and accepting
>> TCP/IP connections on port 5436?
>> could not connect to server: Connection refused
>> Is the server running on host "localhost" (127.0.0.1) and accepting
>> TCP/IP connections on port 5436?
>> could not connect to server: Connection refused
>> Is the server running on host "localhost" (::1) and accepting
>> TCP/IP connections on port 5438?
>> could not connect to server: Connection refused
>> Is the server running on host "localhost" (127.0.0.1) and accepting
>> TCP/IP connections on port 5438?
>
> Hmm, maybe the query buffer is getting cleared someplace in there.  We
> might need to save/restore it.

Not the query buffer.  conn->errorMessage.
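A toy model of that save/accumulate idea (hypothetical names; libpq's real conn->errorMessage is a PQExpBuffer, not a fixed char array): append each host's failure text rather than letting the next connection attempt overwrite it, so the message for the first unreachable host survives.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

#define ERRBUF_SIZE 1024

/* Append one host's failure message instead of overwriting the buffer,
 * mimicking save/restore of conn->errorMessage across hosts. */
static void
append_host_error(char *errbuf, const char *host, int port,
                  const char *reason)
{
    size_t  used = strlen(errbuf);

    snprintf(errbuf + used, ERRBUF_SIZE - used,
             "could not connect to server \"%s:%d\": %s\n",
             host, port, reason);
}
```

With this, trying localhost:5436 then localhost:5433 would report both failures, matching what the second connection string above already shows.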

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-29 Thread Robert Haas
On Tue, Nov 29, 2016 at 2:19 PM, Kuntal Ghosh
 wrote:
> On Thu, Nov 24, 2016 at 5:46 PM, Mithun Cy  wrote:
>> I have taken this suggestion now renamed target_server_type to
>> target_session_attrs with possible 2 values "read-write", "any".
>> Maybe we could expand to "readonly" and "prefer-readonly" in the next patch
>> proposal. Attaching the patch for the same.
> I was doing some testing with the patch and I found some inconsistency
> in the error message.
> I've a read-only server running on port 5433 and no server on 5436 and 5438.
>
> command: bin/psql
> 'postgresql://localhost:5436,localhost:5433,localhost:5438/postgres?target_session_attrs=read-write'
>
> I get the following error message.
>
> psql: could not make a writable connection to server "localhost:5433"
> could not connect to server: Connection refused
> Is the server running on host "localhost" (::1) and accepting
> TCP/IP connections on port 5438?
> could not connect to server: Connection refused
> Is the server running on host "localhost" (127.0.0.1) and accepting
> TCP/IP connections on port 5438?
>
> It didn't show any error message for port 5436. But, if I modify the
> connection string as following:
>
> command: bin/psql
> 'postgresql://localhost:5433,localhost:5436,localhost:5438/postgres?target_session_attrs=read-write'
>
> I get the following error message:
>
> psql: could not make a writable connection to server "localhost:5433"
> could not connect to server: Connection refused
> Is the server running on host "localhost" (::1) and accepting
> TCP/IP connections on port 5436?
> could not connect to server: Connection refused
> Is the server running on host "localhost" (127.0.0.1) and accepting
> TCP/IP connections on port 5436?
> could not connect to server: Connection refused
> Is the server running on host "localhost" (::1) and accepting
> TCP/IP connections on port 5438?
> could not connect to server: Connection refused
> Is the server running on host "localhost" (127.0.0.1) and accepting
> TCP/IP connections on port 5438?

Hmm, maybe the query buffer is getting cleared someplace in there.  We
might need to save/restore it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] [PATCH] Add diff directives to gitattributes

2016-11-29 Thread Dagfinn Ilmari Mannsåker
Hi hackers,

Attached is a patch which adds diff= directives to .gitattributes for C,
Perl and (X|SG)ML files.  This makes word diffs and the function
indicator in the diff chunk header more useful.

From 57d7d4ec5b94783bf68b2959128e33c28547a6b8 Mon Sep 17 00:00:00 2001
From: Dagfinn Ilmari Mannsåker
Date: Tue, 29 Nov 2016 19:47:36 +
Subject: [PATCH] Add diff directives to .gitattributes

This makes the function indicator in the diff chunk header and word
diffs more useful for the specified languages.
---
 .gitattributes | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/.gitattributes b/.gitattributes
index 4dfc131..3805f90 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -1,11 +1,11 @@
 *		whitespace=space-before-tab,trailing-space
-*.[chly]	whitespace=space-before-tab,trailing-space,indent-with-non-tab,tabwidth=4
+*.[chly]	whitespace=space-before-tab,trailing-space,indent-with-non-tab,tabwidth=4 diff=cpp
 *.dsl		whitespace=space-before-tab,trailing-space,tab-in-indent
 *.patch		-whitespace
-*.pl		whitespace=space-before-tab,trailing-space,tabwidth=4
+*.pl		whitespace=space-before-tab,trailing-space,tabwidth=4 diff=perl
 *.po		whitespace=space-before-tab,trailing-space,tab-in-indent,-blank-at-eof
-*.sgml		whitespace=space-before-tab,trailing-space,tab-in-indent,-blank-at-eol
-*.x[ms]l	whitespace=space-before-tab,trailing-space,tab-in-indent
+*.sgml		whitespace=space-before-tab,trailing-space,tab-in-indent,-blank-at-eol diff=html
+*.x[ms]l	whitespace=space-before-tab,trailing-space,tab-in-indent diff=html
 
 # Avoid confusing ASCII underlines with leftover merge conflict markers
 README		conflict-marker-size=32
-- 
2.7.4


-- 
"A disappointingly low fraction of the human race is,
 at any given time, on fire." - Stig Sandbeck Mathisen

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] move collation import to backend

2016-11-29 Thread Tom Lane
Andres Freund  writes:
> On 2016-11-29 12:16:37 -0500, Peter Eisentraut wrote:
>> Required to avoid compiler warning about missing prototype.

> It seems not to be project style to have prototypes in the middle of the
> file...

I agree.  Please put that in builtins.h, if you can't find any better
header for it.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-29 Thread Fabien COELHO


Hello Corey,


I agree that the boolean tests available should be *very* simple, and all
of the weight of complex calculation should be put in SQL, like we do with
\gset


Yes.


I propose those be:

\if STRING :  true if STRING evaluates to true via ParseVariableBool, empty
means false


Yep.
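For illustration, a simplified stand-in for that evaluation (not psql's actual ParseVariableBool, which also accepts more spellings such as "yes"/"no" and unambiguous prefixes): the empty string counts as false, per the proposal.

```c
#include <stdbool.h>
#include <string.h>
#include <assert.h>

/* Simplified boolean-string evaluation for \if STRING: empty and
 * unrecognized strings count as false. */
static bool
eval_bool_string(const char *s)
{
    if (s == NULL || *s == '\0')
        return false;
    return strcmp(s, "true") == 0 || strcmp(s, "on") == 0 ||
           strcmp(s, "yes") == 0 || strcmp(s, "1") == 0;
}
```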


\ifnot STRING: true if STRING evaluates to false via ParseVariableBool,
empty means false


I'd like other opinions about having a unique \if and a "not" operator, vs 
multiplying \if* and possibly \elif* keywords.



\ifdef psql_var: true if psql_var is defined
\ifndef psql_var: true if psql_var is not defined, helpful for when --set
var=val was omitted and should be defaulted


Hmmm. Would you have an example use case that could not be done simply 
with the previous ifs? cpp did that and ended up with a single if in the 
end.



A full complement of \elseif \elseifnot \elseifdef \elseifndef, each
matching the corresponding \if* above. I'm ok with these being optional in
the first revision.



\else   - technically we could leave this out as well
\endif


For consistency, the possible sources of inspiration for a syntax with an 
explicit end marker are:


 - PL/pgSQL: if / then / elsif / else / endif
 - cpp: if / elif / else / endif
 - sh: if / then / elif / else / fi

Now "then" is useless in a line oriented syntax, for which the closest 
example above is cpp, which does not have it. I think that we should stick 
to one of these.


I like the shell syntax (without then), but given the line orientation 
maybe it would make sense to use the cpp version, close to what you are 
proposing.


I cannot remember a language with elseif* variants, and I find them quite 
ugly, so from an aesthetic point of view I would prefer to avoid that... 
On the other hand having an "else if" capability makes sense (eg do 
something slightly different for various versions of pg), so that would 
suggest to stick to a simpler "if" without variants, if possible.



Then seems like we need an if-state-stack to handle nesting. [...]


Yes, a stack or recursion is needed for implementing nesting.


States: None, If-Then, If-Skip, Else-Then, Else-Skip.


With an "else if" construct you probably need some more states: You have 
to know whether you already executed a block in which case a subsequent 
condition is ignored, so there is a state "skip all to end" needed.



Does that seem work-able?


Sure. I think the priority is to try to agree about a syntax, the 
implementation is a detail.


--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] move collation import to backend

2016-11-29 Thread Andres Freund
On 2016-11-29 12:16:37 -0500, Peter Eisentraut wrote:
> On 11/12/16 10:38 AM, Andres Freund wrote:
> >>/*
> >> * Also forbid matching an any-encoding entry.  This test of course is 
> >> not
> >> * backed up by the unique index, but it's not a problem since we don't
> >> * support adding any-encoding entries after initdb.
> >> */
> > 
> > Note that this isn't true anymore...
> 
> I think this is still correct, because the collation import does not
> produce any any-encoding entries (collencoding = -1).

Well, the comment "don't support adding any-encoding entries after
initdb." is now wrong.

> >> +
> >> +Datum pg_import_system_collations(PG_FUNCTION_ARGS);
> >> +
> >> +Datum
> >> +pg_import_system_collations(PG_FUNCTION_ARGS)
> >> +{
> > 
> > Uh?
> 
> Required to avoid compiler warning about missing prototype.

It seems not to be project style to have prototypes in the middle of the
file...

- Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql: Add putenv support for msvcrt from Visual Studio 2013

2016-11-29 Thread Christian Ullrich

* Michael Paquier wrote:

> On Wed, Nov 16, 2016 at 12:45 PM, Christian Ullrich
>  wrote:
>> I also did a debug build with 1+2+4 that came in at 84 μs/iteration.
>
> Moved to next CF. Christian, perhaps this patch should have an extra
> round of reviews?

It is significantly different from the last version, so yes, of course.

--
Christian



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [BUGS] [HACKERS] object_classes array is broken, again

2016-11-29 Thread Alvaro Herrera
Alvaro Herrera wrote:
> Jaimin Pan wrote:
> > Hi all,
> > 
> > How about this patch. I believe it will never miss anything, and
> > omissions will be detected at compile time.
> 
> Hmm, yeah this looks like something that's worth considering going
> forward.  I can't think of any reason not to do this.  Perhaps we can
> write getObjectClass using this, too.

I just noticed a lot of the DropFooById() functions consist of just
"open catalog, lookup tuple by OID, delete tuple, close catalog".  Which
I think we could rewrite in a generic manner using the table proposed by
this patch, and save some boilerplate code.  Now there's a portion of
the functions that have some code in addition to that, but we can still
call the generic one and then do the rest of the stuff in the
class-specific function.  Looks like a pretty easy code removal project.
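A toy illustration of that refactoring (hypothetical names; the real version would go through table_open and a syscache/systable lookup): one generic delete-by-OID routine driven by a per-class table, with any class-specific work layered on top by the caller.

```c
#include <stdbool.h>
#include <assert.h>

typedef unsigned int Oid;

/* Toy stand-in for a catalog: an array of rows keyed by OID. */
typedef struct CatalogRow
{
    Oid  oid;
    bool deleted;
} CatalogRow;

/* Per-object-class entry, as the proposed table would provide. */
typedef struct ObjectClassInfo
{
    CatalogRow *rows;           /* which "catalog" to search */
    int         nrows;
} ObjectClassInfo;

/* Generic "open catalog, look up tuple by OID, delete it, close" step. */
static bool
DropObjectById(const ObjectClassInfo *class_info, Oid oid)
{
    for (int i = 0; i < class_info->nrows; i++)
    {
        CatalogRow *row = &class_info->rows[i];

        if (row->oid == oid && !row->deleted)
        {
            row->deleted = true;
            return true;
        }
    }
    return false;               /* cache lookup failed */
}
```

Each DropFooById() would then reduce to a call to the generic routine plus whatever extra cleanup that class needs.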

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-29 Thread Kuntal Ghosh
On Thu, Nov 24, 2016 at 5:46 PM, Mithun Cy  wrote:
> I have taken this suggestion now renamed target_server_type to
> target_session_attrs with possible 2 values "read-write", "any".
> Maybe we could expand to "readonly" and "prefer-readonly" in the next patch
> proposal. Attaching the patch for the same.
I was doing some testing with the patch and I found some inconsistency
in the error message.
I've a read-only server running on port 5433 and no server on 5436 and 5438.

command: bin/psql
'postgresql://localhost:5436,localhost:5433,localhost:5438/postgres?target_session_attrs=read-write'

I get the following error message.

psql: could not make a writable connection to server "localhost:5433"
could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5438?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5438?

It didn't show any error message for port 5436. But, if I modify the
connection string as following:

command: bin/psql
'postgresql://localhost:5433,localhost:5436,localhost:5438/postgres?target_session_attrs=read-write'

I get the following error message:

psql: could not make a writable connection to server "localhost:5433"
could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5436?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5436?
could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5438?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5438?

-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GiST support for UUIDs

2016-11-29 Thread Tom Lane
I wrote:
> I'm kind of inclined to change uuid_parts_distance to just convert
> a given pg_uuid_t to "double" and then apply penalty_num(), as is
> done in gbt_macad_penalty.

Pushed with that change and some other mostly-cosmetic tweaking.

One not too cosmetic fix was that gbt_uuid_union was declared with the
wrong return type.  That's probably mostly harmless given that core GiST
pays little attention to the declared signatures of the support functions,
but it's not a good thing.  This would've been caught if anyone had
thought to run the amvalidate functions on the updated extension.
I think I will go and put a call to that into the regression tests of
all the contrib modules that define new opclasses.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] XactLockTableWait doesn't set wait_event correctly

2016-11-29 Thread Robert Haas
On Tue, Nov 29, 2016 at 6:26 AM, Simon Riggs  wrote:
> New (9.4) XactLockTableWait() sets the reason for the wait, so that if
> we emit a log message then it will produce a log message to say it's
> not waiting on a transaction, it's waiting on a lock.
>
> Nice. log_lock_waits works great.
>
> New (9.6) wait_event code is not correctly set. In ProcSleep() we set
> the wait reason according to the actual lock tag, effectively ignoring
> the information specifically provided by XactLockTableWait().
>
> Ugh.
>
> pg_stat_activity shows a wait_event of 'transactionid' rather than 'tuple'
> pg_locks shows a transactionid lock rather than a wait for tuple lock
>
> That looks like a bug since we get different answers from
> log_lock_wait and wait_event, which is annoying and confusing.

What you find annoying or confusing is up to you, but it's not a bug,
and it's not particularly inconsistent, either.  log_lock_waits
reports a transaction lock, pg_stat_activity reports a transactionid
lock, and pg_locks reports a transactionid lock.  Now, it is true *in
addition* to reporting that the lock in question is a transaction
lock, log_lock_waits also reports some context information:

2016-11-29 13:49:59.702 EST [33658] LOG:  process 33658 still waiting
for ShareLock on transaction 2997486 after 1000.748 ms
2016-11-29 13:49:59.702 EST [33658] DETAIL:  Process holding the lock:
33653. Wait queue: 33658.
2016-11-29 13:49:59.702 EST [33658] CONTEXT:  while updating tuple
(0,67) in relation "pgbench_accounts"
2016-11-29 13:49:59.702 EST [33658] STATEMENT:  update
pgbench_accounts set filler = 'b' where aid = 1;

...but right there on the first line, it clearly says "waiting for
ShareLock on transaction 2997486".

I'm not totally unsympathetic to the argument that perhaps
XactLockTableWait() ought to pass down the context information through
LockAcquire() so that it reaches ProcSleep() and we can set the wait
event to something like Tuple/Update instead of Lock/Transaction.  But
I don't think it's a slam-dunk, either.  If we did that, it would be
consistent with the additional detail provided by log_lock_waits
rather than the primary message, and it would be inconsistent with
pg_locks.  Also, I think that the reason why we originally used an
error-context callback here rather than passing down some additional
detail into LockAcquire() is that the breaking the abstraction layer
seemed messy and unappealing.  IIRC, Tom, in particular, was concerned
about that last point.

But feel free to propose a patch...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] XactLockTableWait doesn't set wait_event correctly

2016-11-29 Thread Amit Kapila
On Tue, Nov 29, 2016 at 4:56 PM, Simon Riggs  wrote:
> New (9.4) XactLockTableWait() sets the reason for the wait, so that if
> we emit a log message then it will produce a log message to say it's
> not waiting on a transaction, it's waiting on a lock.
>
> Nice. log_lock_waits works great.
>
> New (9.6) wait_event code is not correctly set. In ProcSleep() we set
> the wait reason according to the actual lock tag, effectively ignoring
> the information specifically provided by XactLockTableWait().
>

The information provided by XactLockTableWait() helps to display the
more information via context, but the message still suggests
transaction (refer LOG line in below message):

LOG:  process 6460 still waiting for ShareLock on transaction 42960
after 41822.775 ms
DETAIL:  Process holding the lock: 5704. Wait queue: 6460.
CONTEXT:  while updating tuple (0,1) in relation "t1"
STATEMENT:  update t1 set c1=3 where c1=1;

So I am not sure if displaying the tuple in pg_stat_activity is better
than displaying the transactionid, and how would we distinguish that from
a process actually waiting on a tuple lock?  The process waiting on a
tuple lock displays a log message as below:

LOG:  process 648 still waiting for ExclusiveLock on tuple (0,1) of
relation 137344 of database 12245 after 1045.220 ms
DETAIL:  Process holding the lock: 6460. Wait queue: 648.
STATEMENT:  update t1 set c1=4 where c1=1;

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Improving RLS planning

2016-11-29 Thread Stephen Frost
Robert,

* Robert Haas (robertmh...@gmail.com) wrote:
> On Mon, Nov 28, 2016 at 6:55 PM, Stephen Frost  wrote:
> >> diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README
> >> [...]
> >> + Additional rules will be needed to support safe handling of join quals
> >> + when there is a mix of security levels among join quals; for example, it
> >> + will be necessary to prevent leaky higher-security-level quals from being
> >> + evaluated at a lower join level than other quals of lower security level.
> >> + Currently there is no need to consider that since security-prioritized
> >> + quals can only be single-table restriction quals coming from RLS policies
> >> + or security-barrier views, and thus enforcement only needs to happen at
> >> + the table scan level.  With such extra rules, it should be possible to 
> >> let
> >> + security-barrier views be flattened into the parent query, allowing more
> >> + flexibility of planning while still preserving required ordering of qual
> >> + evaluation.  But that will come later.
> >
> > Are you thinking that we will be able to remove the cut-out in
> > is_simple_subquery() which currently punts whenever an RTE is marked as
> > security_barrier?  That would certainly be excellent as, currently, even
> > if everything involved is leakproof, we may end up with a poor choice of
> > plan because the join in the security barrier view must be performed
> > first.  Consider a case where we have two relatively large tables being
> > joined together in a security barrier view, but a join from outside
> > which is against a small table.  A better plan would generally be to
> > join with the smaller table first and then join the resulting small set
> > against the remaining large table.
> 
> We have to be careful that we don't introduce new security holes while
> we're improving the plans.  I guess this would be OK if the table, its
> target list, and its quals all happened to be leakproof, but otherwise
> not.  Or am I confused?

I agree that we need to be careful that we don't introduce security
holes.

Also, I do think it would be nice if we could arrange to have the same
plan in the security-barrier case as is in the no-security-barrier case
when there are no leaky functions involved.

That said, I believe this becomes a similar order-of-operations question
to make sure that values are never exposed to leaky functions until
after all necessary filtering has been performed.  In particular, when
considering joins, if all of the join operators are leakproof then we
could possibly reorder the joins however we choose, as long as anything
leaky is performed after all joins required for security correctness are
performed and all security barrier quals are applied.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Improving RLS planning

2016-11-29 Thread Tom Lane
Robert Haas  writes:
> On Mon, Nov 28, 2016 at 6:55 PM, Stephen Frost  wrote:
>> Are you thinking that we will be able to remove the cut-out in
>> is_simple_subquery() which currently punts whenever an RTE is marked as
>> security_barrier?  That would certainly be excellent as, currently, even
>> if everything involved is leakproof, we may end up with a poor choice of
>> plan because the join in the security barrier view must be performed
>> first.  Consider a case where we have two relatively large tables being
>> joined together in a security barrier view, but a join from outside
>> which is against a small table.  A better plan would generally be to
>> join with the smaller table first and then join the resulting small set
>> against the remaining large table.

> We have to be careful that we don't introduce new security holes while
> we're improving the plans.  I guess this would be OK if the table, its
> target list, and its quals all happened to be leakproof, but otherwise
> not.  Or am I confused?

The plan I have in mind --- it's not implemented in this patch --- is to
fix things so that the "lower security_level quals must be evaluated
first" rule applies to join quals.  It should then be possible to allow
flattening of security-barrier views without security holes.

One part of that would be to teach distribute_qual_to_rels about it
so that less-secure quals can't fall to a lower join level than
more-secure quals.  I think that's relatively straightforward, though
I've not tried to do it yet.

A bigger issue is that we don't have security_level attached to individual
quals at the time when view flattening gets done.  We'd need some other
way of maintaining the distinction between security quals and regular
quals between there and where RestrictInfos get built.  I don't have a good
idea about how to do that yet, but it doesn't seem insoluble.
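The ordering rule can be illustrated with a self-contained sketch (illustrative only; the real planner would work on RestrictInfos and also respect join levels): sort quals by ascending security_level, breaking ties by cost, so a potentially leaky higher-level qual never runs before a more-restricted one.

```c
#include <stdlib.h>
#include <assert.h>

/* Simplified model of "lower security_level quals must be evaluated
 * first": order by security_level, then by cost within a level. */
typedef struct Qual
{
    int    security_level;     /* 0 = most restricted, applied first */
    double cost;
} Qual;

static int
qual_cmp(const void *a, const void *b)
{
    const Qual *qa = a;
    const Qual *qb = b;

    if (qa->security_level != qb->security_level)
        return qa->security_level - qb->security_level;
    if (qa->cost != qb->cost)
        return (qa->cost < qb->cost) ? -1 : 1;
    return 0;
}

static void
order_quals(Qual *quals, int n)
{
    qsort(quals, n, sizeof(Qual), qual_cmp);
}
```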

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Improving RLS planning

2016-11-29 Thread Robert Haas
On Mon, Nov 28, 2016 at 6:55 PM, Stephen Frost  wrote:
>> diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README
>> [...]
>> + Additional rules will be needed to support safe handling of join quals
>> + when there is a mix of security levels among join quals; for example, it
>> + will be necessary to prevent leaky higher-security-level quals from being
>> + evaluated at a lower join level than other quals of lower security level.
>> + Currently there is no need to consider that since security-prioritized
>> + quals can only be single-table restriction quals coming from RLS policies
>> + or security-barrier views, and thus enforcement only needs to happen at
>> + the table scan level.  With such extra rules, it should be possible to let
>> + security-barrier views be flattened into the parent query, allowing more
>> + flexibility of planning while still preserving required ordering of qual
>> + evaluation.  But that will come later.
>
> Are you thinking that we will be able to remove the cut-out in
> is_simple_subquery() which currently punts whenever an RTE is marked as
> security_barrier?  That would certainly be excellent as, currently, even
> if everything involved is leakproof, we may end up with a poor choice of
> plan because the join in the security barrier view must be performed
> first.  Consider a case where we have two relatively large tables being
> joined together in a security barrier view, but a join from outside
> which is against a small table.  A better plan would generally be to
> join with the smaller table first and then join the resulting small set
> against the remaining large table.

We have to be careful that we don't introduce new security holes while
we're improving the plans.  I guess this would be OK if the table, its
target list, and its quals all happened to be leakproof, but otherwise
not.  Or am I confused?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GiST support for UUIDs

2016-11-29 Thread Tom Lane
Chris Bandy  writes:
> [ btree_gist_uuid_8.patch ]

Um ... is there a reason why the penalty logic in gbt_uuid_penalty()
is completely unlike that for any other btree_gist module?

As best I can tell from the (admittedly documentation-free) code
elsewhere, the penalty ought to be proportional to the fraction
by which the original range is expanded; that's not what this
code is doing.  It also seems to be missing the machinations related
to scaling per-column results in a multi-column index.

I'm kind of inclined to change uuid_parts_distance to just convert
a given pg_uuid_t to "double" and then apply penalty_num(), as is
done in gbt_macad_penalty.
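For illustration, the "proportional to the fraction by which the original range is expanded" rule, applied to ranges already reduced to doubles (e.g. the leading bytes of a uuid converted to double); btree_gist's actual penalty_num macro differs in detail.

```c
#include <assert.h>

/* Penalty = fraction by which inserting [nlo, nhi] grows the original
 * range [olo, ohi].  A contained value costs nothing; the more the
 * range must stretch, the higher the penalty. */
static float
range_expansion_penalty(double olo, double ohi, double nlo, double nhi)
{
    double  lo = (nlo < olo) ? nlo : olo;
    double  hi = (nhi > ohi) ? nhi : ohi;
    double  newwidth = hi - lo;

    if (newwidth <= 0.0)
        return 0.0f;
    return (float) ((newwidth - (ohi - olo)) / newwidth);
}
```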

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Time to up bgwriter_lru_maxpages?

2016-11-29 Thread Jeff Janes
On Mon, Nov 28, 2016 at 1:20 PM, Jim Nasby  wrote:

> On 11/28/16 11:53 AM, Joshua D. Drake wrote:
>
>> On 11/28/2016 11:40 AM, Jim Nasby wrote:
>>
>>> With current limits, the most bgwriter can do (with 8k pages) is 1000
>>> pages * 100 times/sec = 780MB/s. It's not hard to exceed that with
>>> modern hardware. Should we increase the limit on bgwriter_lru_maxpages?
>>>
>>
>> Considering a single SSD can do 70% of that limit, I would say yes.
>>
>
> Next question becomes... should there even be an upper limit?


Where the contortions needed to prevent calculation overflow become
annoying?

I'm not a big fan of nannyism in general, but the limits on this parameter
seem particularly pointless.  You can't write out more buffers than exist
in the dirty state, nor more than implied by bgwriter_lru_multiplier.  So
what is really the worst that can happen if you make it too high?
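For what it's worth, the ceiling quoted upthread is easy to check: with bgwriter_delay at its 10 ms minimum the bgwriter runs at most 100 rounds per second, so:

```c
#include <assert.h>

/* Maximum bgwriter write rate in bytes/sec:
 * bgwriter_lru_maxpages per round * rounds/sec * page size. */
static long
bgwriter_max_bytes_per_sec(int maxpages, int rounds_per_sec, long page_size)
{
    return (long) maxpages * rounds_per_sec * page_size;
}
```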


Cheers,

Jeff


Re: [HACKERS] Proposal: scan key push down to heap [WIP]

2016-11-29 Thread Robert Haas
On Mon, Nov 28, 2016 at 11:17 PM, Dilip Kumar  wrote:
> Actually we want to call slot_getattr instead of heap_getattr, because of
> the problem mentioned by Andres upthread, which we also saw in test results.

Ah, right.

> Should we make a copy of HeapKeyTest, say ExecKeyTest, and keep it
> under the executor?

Sure.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Proposal: scan key push down to heap [WIP]

2016-11-29 Thread Robert Haas
On Mon, Nov 28, 2016 at 11:25 PM, Andres Freund  wrote:
> On 2016-11-28 09:55:00 -0500, Robert Haas wrote:
>> I think we should go with this approach.  I don't think it's a good
>> idea to have the heapam layer know about executor slots.
>
> I agree that that's not pretty.
>
>> Even though
>> it's a little sad to pass up an opportunity for a larger performance
>> improvement, this improvement is still quite good.
>
> It's imo not primarily about a larger performance improvement, but about
> avoiding possible regressions. Doubling deforming of wide tuples will have
> a noticeable impact on some workloads. I don't think we can disregard
> that.

I wasn't proposing to disregard that.  I'm saying hoist the tests up
into nodeSeqScan.c where they can use the slot without breaking the
abstraction.  It's not quite as fast but it's a lot cleaner.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Patch: Implement failover on libpq connect level.

2016-11-29 Thread Robert Haas
On Thu, Nov 24, 2016 at 7:16 AM, Mithun Cy  wrote:
> On Wed, Nov 23, 2016 at 10:19 PM, Catalin Iacob 
> wrote:
>On Tue, Nov 22, 2016 at 8:38 AM, Tsunakawa, Takayuki
>  wrote:
>  >> If you want to connect to a server where the transaction is read-only,
> then shouldn't the connection parameter be something like
>>>"target_session_attrs=readonly"?  That represents exactly what the code
> does.
>
>  >FWIW I find this to be a reasonable compromise. To keep the analogy
>  >with the current patch it would be more something like
> "target_session_attrs=read_write|any".
>
> I have taken this suggestion and renamed target_server_type to
> target_session_attrs, with two possible values: "read-write" and "any".

I didn't hear any objections to this approach and, after some thought,
I think it's good.  So I've committed this after rewriting the
documentation, some cosmetic cleanup, and changing the logic so that
if one IP for a host turns out to be read-only, we skip all remaining
IPs for that host, not just the one that we tested already.  This
seems likely to be what users will want.

I recognize that this isn't going to please everybody 100%, but
there's still room for followup patches to improve things further.
One thing I really like about the target_session_attrs renaming is
that it seems to leave the door open to a variety of things we might
want to do later.  The "read-write" value we now support is closely
related to the query we use for testing ("show
transaction_read_only").  If we later want to filter based on some
other server property, we can add additional values with associated
SQL commands for each.  Of course that could get a little crazy if
overdone, but hopefully we won't overdo it.
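As committed, target_session_attrs combines with libpq's multi-host support. A hypothetical conninfo string (host names invented for illustration) might look like this; libpq tries each host in order, runs "show transaction_read_only" on each connection, and skips hosts that report read-only:

```
host=pg1.example.com,pg2.example.com dbname=app target_session_attrs=read-write
```

With target_session_attrs=any (the default behavior), the first host that accepts the connection is used regardless of whether it is read-only.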

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] move collation import to backend

2016-11-29 Thread Peter Eisentraut
On 11/12/16 10:38 AM, Andres Freund wrote:
> E.g. what if previously present collations are now unavailable?

You get an error message when you try to use the collation.  I think
that is a different class of problems.

>>  
>>  /*
>>   * Also forbid matching an any-encoding entry.  This test of course is 
>> not
>>   * backed up by the unique index, but it's not a problem since we don't
>>   * support adding any-encoding entries after initdb.
>>   */
> 
> Note that this isn't true anymore...

I think this is still correct, because the collation import does not
produce any any-encoding entries (collencoding = -1).

>> +
>> +Datum pg_import_system_collations(PG_FUNCTION_ARGS);
>> +
>> +Datum
>> +pg_import_system_collations(PG_FUNCTION_ARGS)
>> +{
> 
> Uh?

Required to avoid compiler warning about missing prototype.

> This function needs to have !superuser permissions revoked, which it
> afaics currently hasn't.

Done.

New patch attached (includes OID change because of conflict).

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From bb6710c55df3a5f7023ddcda749e05e05e49bc59 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Tue, 29 Nov 2016 12:00:00 -0500
Subject: [PATCH v2] Add function to import operating system collations

Move this logic out of initdb into a user-callable function.  This
simplifies the code and makes it possible to update the standard
collations later on if additional operating system collations appear.
---
 src/backend/catalog/pg_collation.c|  18 +++-
 src/backend/commands/collationcmds.c  | 151 ++-
 src/bin/initdb/initdb.c   | 164 +-
 src/include/catalog/pg_collation_fn.h |   3 +-
 src/include/catalog/pg_proc.h |   3 +
 5 files changed, 172 insertions(+), 167 deletions(-)

diff --git a/src/backend/catalog/pg_collation.c b/src/backend/catalog/pg_collation.c
index f37cf37..cda64c4 100644
--- a/src/backend/catalog/pg_collation.c
+++ b/src/backend/catalog/pg_collation.c
@@ -41,7 +41,8 @@ Oid
 CollationCreate(const char *collname, Oid collnamespace,
 Oid collowner,
 int32 collencoding,
-const char *collcollate, const char *collctype)
+const char *collcollate, const char *collctype,
+bool if_not_exists)
 {
 	Relation	rel;
 	TupleDesc	tupDesc;
@@ -72,10 +73,21 @@ CollationCreate(const char *collname, Oid collnamespace,
 			  PointerGetDatum(collname),
 			  Int32GetDatum(collencoding),
 			  ObjectIdGetDatum(collnamespace)))
-		ereport(ERROR,
+	{
+		if (if_not_exists)
+		{
+			ereport(NOTICE,
 (errcode(ERRCODE_DUPLICATE_OBJECT),
- errmsg("collation \"%s\" for encoding \"%s\" already exists",
+ errmsg("collation \"%s\" for encoding \"%s\" already exists, skipping",
 		collname, pg_encoding_to_char(collencoding))));
+			return InvalidOid;
+		}
+		else
+			ereport(ERROR,
+	(errcode(ERRCODE_DUPLICATE_OBJECT),
+	 errmsg("collation \"%s\" for encoding \"%s\" already exists",
+			collname, pg_encoding_to_char(collencoding))));
+	}
 
 	/*
 	 * Also forbid matching an any-encoding entry.  This test of course is not
diff --git a/src/backend/commands/collationcmds.c b/src/backend/commands/collationcmds.c
index 9bba748..f4b7b65 100644
--- a/src/backend/commands/collationcmds.c
+++ b/src/backend/commands/collationcmds.c
@@ -136,7 +136,11 @@ DefineCollation(ParseState *pstate, List *names, List *parameters)
 			 GetUserId(),
 			 GetDatabaseEncoding(),
 			 collcollate,
-			 collctype);
+			 collctype,
+			 false);
+
+	if (!newoid)
+		return InvalidObjectAddress;
 
 	ObjectAddressSet(address, CollationRelationId, newoid);
 
@@ -177,3 +181,148 @@ IsThereCollationInNamespace(const char *collname, Oid nspOid)
  errmsg("collation \"%s\" already exists in schema \"%s\"",
 		collname, get_namespace_name(nspOid))));
 }
+
+
+/*
+ * "Normalize" a locale name, stripping off encoding tags such as
+ * ".utf8" (e.g., "en_US.utf8" -> "en_US", but "br_FR.iso885915@euro"
+ * -> "br_FR@euro").  Return true if a new, different name was
+ * generated.
+ */
+static bool
+normalize_locale_name(char *new, const char *old)
+{
+	char	   *n = new;
+	const char *o = old;
+	bool		changed = false;
+
+	while (*o)
+	{
+		if (*o == '.')
+		{
+			/* skip over encoding tag such as ".utf8" or ".UTF-8" */
+			o++;
+			while ((*o >= 'A' && *o <= 'Z')
+				   || (*o >= 'a' && *o <= 'z')
+				   || (*o >= '0' && *o <= '9')
+				   || (*o == '-'))
+				o++;
+			changed = true;
+		}
+		else
+			*n++ = *o++;
+	}
+	*n = '\0';
+
+	return changed;
+}
+
+
+Datum pg_import_system_collations(PG_FUNCTION_ARGS);
+
+Datum
+pg_import_system_collations(PG_FUNCTION_ARGS)
+{
+	bool		if_not_exists = PG_GETARG_BOOL(0);
+	Oid nspid = PG_GETARG_OID(1);
+
+	FILE	   *locale_a_handle;
+	char		localebuf[NAMEDATALEN]; /* we assume ASCII so this is fine */
+	int			

Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-29 Thread Corey Huinker
>
> My 0,02 €, which is just a personal opinion:
>
> I think that handling manually "!/not" would be worth the effort rather
> than having two commands, especially if the boolean expression syntax may
> be extended some day and the negative if would become obsolete.
>
> If there is a negative condition syntax, I would slightly prefer \ifnot to
> \if_not or worse \unless. I would disapprove strongly of \unless because it
> loses the clear symmetry with a closing \fi.
>
> --
> Fabien.


Pavel had previously written this patch
http://ftp.pgpi.org/pub/databases/postgresql/projects/pgFoundry/epsql/epsql/epsql-8.5-develop/psql-enhanced-macros.diff
which I leave here as a point of reference.

Obviously, there's a lot more in there than we'd need, and it's back quite
a few versions.

I agree that the boolean tests available should be *very* simple, and all
of the weight of complex calculation should be put in SQL, like we do with
\gset

I propose those be:

\if STRING :  true if STRING evaluates to true via ParseVariableBool, empty
means false
\ifnot STRING: true if STRING evaluates to false via ParseVariableBool,
empty means false
\ifdef psql_var: true if psql_var is defined
\ifndef psql_var: true if psql_var is not defined, helpful for when --set
var=val was omitted and should be defaulted

A full complement of \elseif \elseifnot \elseifdef \elseifndef, each
matching the corresponding \if* above. I'm ok with these being optional in
the first revision.

\else   - technically we could leave this out as well
\endif


Then it seems like we need an if-state stack to handle nesting. At any given
point, the mode could be:

1. None

psql currently isn't in an if-then structure; no statements are skipped,
and any conditional command other than \if* is an error.
For that matter, the script can always add a new level to the stack with an
\if* command.

2. If-Then

psql is currently executing statements in an \if* branch that evaluated to
true. valid conditional commands are: \if*, \else* \endif

3. If-Skip

psql is currently in a block that evaluated to false, and will still parse
commands for psql-correctness, but will skip them until it encounters an
\endif or \else*

4. Else-Then

Just like If-Then, but encountering an \else* command would be an error.

5. Else-Skip

Just like If-Skip, but encountering an \else* command would be an error.



The only data structure we'd need is the stack of enums listed above. Any
commands would check against the stack-state before executing, but would
otherwise be parsed and checked as they are now. The new \commands would
manipulate that stack.

Does that seem workable?
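The proposal above can be sketched as a small array-backed stack (names hypothetical, not psql's eventual implementation): \if* pushes a state, \else* flips the top of the stack, \endif pops, and a command is executed only if no enclosing level is in a skip state.

```c
#include <assert.h>

/* Hypothetical sketch of the proposed if-state stack. */
typedef enum
{
    IFSTATE_NONE,       /* not inside any \if structure */
    IFSTATE_IF_TRUE,    /* in an \if branch that evaluated true */
    IFSTATE_IF_SKIP,    /* in an \if branch being skipped */
    IFSTATE_ELSE_TRUE,  /* in an \else branch being executed */
    IFSTATE_ELSE_SKIP   /* in an \else branch being skipped */
} IfState;

#define MAX_IF_DEPTH 64

typedef struct
{
    IfState states[MAX_IF_DEPTH];
    int     depth;
} IfStack;

/* \if or \ifnot: push a new branch state; returns 0 on too-deep nesting */
int
if_push(IfStack *stack, int cond_true)
{
    if (stack->depth >= MAX_IF_DEPTH)
        return 0;
    stack->states[stack->depth++] = cond_true ? IFSTATE_IF_TRUE
                                              : IFSTATE_IF_SKIP;
    return 1;
}

/* \else: only legal directly inside an \if branch */
int
if_else(IfStack *stack)
{
    if (stack->depth == 0)
        return 0;               /* \else without \if */
    switch (stack->states[stack->depth - 1])
    {
        case IFSTATE_IF_TRUE:
            stack->states[stack->depth - 1] = IFSTATE_ELSE_SKIP;
            return 1;
        case IFSTATE_IF_SKIP:
            stack->states[stack->depth - 1] = IFSTATE_ELSE_TRUE;
            return 1;
        default:
            return 0;           /* a second \else is an error */
    }
}

/* \endif: pop; returns 0 for \endif without a matching \if */
int
if_pop(IfStack *stack)
{
    if (stack->depth == 0)
        return 0;
    stack->depth--;
    return 1;
}

/* Should the current command be executed, or skipped? */
int
if_active(const IfStack *stack)
{
    int i;

    for (i = 0; i < stack->depth; i++)
        if (stack->states[i] == IFSTATE_IF_SKIP ||
            stack->states[i] == IFSTATE_ELSE_SKIP)
            return 0;
    return 1;
}
```

Checking every enclosing level in if_active is what makes nested \if inside a skipped branch behave correctly: the inner branch may evaluate true, but it stays skipped.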


Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-11-29 Thread Alvaro Herrera
Daniel Verite wrote:

> If we consider what would happen with the latest patch on a platform
> with sizeof(int)=8 and a \copy invocation like this:
> 
> \copy (select big,big,big,big,big,big,big,big,.. FROM
> (select lpad('', 1024*1024*200) as big) s) TO /dev/null
> 
> if we put enough copies of "big" in the select-list to grow over 2GB,
> and then over 4GB.

Oh, right, I was forgetting that.

> On a platform with sizeof(int)=4 this should normally fail over 2GB with
> "Cannot enlarge string buffer containing $X bytes by $Y more bytes"
> 
> I don't have an ILP64 environment myself to test, but I suspect
> it would malfunction instead of cleanly erroring out on such
> platforms.

I suspect nobody has such platforms, as they are mostly defunct.  But I
see an easy way to fix it, which is to use sizeof(int32).

> Also, without this limit, we can "COPY FROM/TO file" really huge rows, 4GB
> and beyond, like I tried successfully during the tests mentioned upthread
> (again with len as int64 on x86_64).
> So such COPYs would succeed or fail depending on whether they deal with
> a file or a network connection.
> Do we want this difference in behavior?

Such a patch would be for master only.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Tackling JsonPath support

2016-11-29 Thread Christian Convey
On Tue, Nov 29, 2016 at 8:18 AM, Petr Jelinek  wrote:
...

> Just to add to this, the SQL/JSON proposals I've seen so far, and what
> Oracle, MSSQL and Teradata chose to implement already is basically
> subset of jsonpath (some proposals/implementations also include
> lax/strict prefix keyword on top of that). I think that should give us
> some hint on what the base functionality should look like.


I agree.  My guess is that PG users would benefit most from:

(1) Conformance to whatever ISO standard regarding JSON operators
eventually makes it out of the working group.
(2) Compatibility with other widely-used DBMS's.
(3) Compatibility with the JSONPath functionality used by web developers.
 (Although I don't currently have a grasp on which frameworks / libraries
this entails.)

I *think* that (1), (2), and (3) are in approximate agreement about the
syntax and semantics of the path-expression language: the language proposed
by Stefan Goessner, plus the strict vs. lax distinction.
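For concreteness, a hedged example of the kind of path expression these proposals share (the document shape is invented for illustration): a lax/strict mode keyword, `$` for the context item, and `@` for the current item inside a `? (...)` filter:

```
lax $.store.book[*] ? (@.price < 10).title
```

This selects the titles of the cheap books; lax mode tolerates missing fields and structural mismatches where strict mode would raise an error.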

I think I can satisfy (3) with a PG extension which provides a function
that approximately implements JSONPath.  My short-term plans are to submit
such a patch.

Hopefully that patch's function will be a helpful starting point for
satisfying (1) and (2) as well.  But that can be decided later.

Nico Williams has argued for using "jq".  I don't think jq satisfies any of
(1), (2), or (3), so I don't see a good case for incorporating it in my
short-term plans.  There *may* be a case for using jq internally to my
implementation; I'll try to look into that.


Re: [HACKERS] Tackling JsonPath support

2016-11-29 Thread Petr Jelinek
On 29/11/16 17:28, Nico Williams wrote:
> On Tue, Nov 29, 2016 at 05:18:17PM +0100, Petr Jelinek wrote:
>> Just to add to this, the SQL/JSON proposals I've seen so far, and what
>> Oracle, MSSQL and Teradata chose to implement already is basically
>> subset of jsonpath (some proposals/implementations also include
>> lax/strict prefix keyword on top of that). I think that should give us
>> some hint on what the base functionality should look like.
> 
> Yes, that'd be base functionality.  You can go above and beyond.
> 

But let's just do the base thing first before embarking on a much more
complicated endeavor, especially if this is supposed to be the first
patch for Christian. Also, one of the points of SQL is
compatibility, so that's what we should strive for first, especially
given that the syntax of jq is not compatible AFAICS.

> I agree with Pavel that jq could be used as a user-defined function, but
> proper integration would be better because it would avoid the need to
> format and parse JSON around calls to jq, and also because PG could
> compile jq programs when preparing SQL statements.  Besides, the libjq
> jv API is *very* nice.
> 

I think this would be good as extension first and then we can see what
to do with it next (ie I agree with Pavel).

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Tackling JsonPath support

2016-11-29 Thread Nico Williams
On Tue, Nov 29, 2016 at 05:18:17PM +0100, Petr Jelinek wrote:
> Just to add to this, the SQL/JSON proposals I've seen so far, and what
> Oracle, MSSQL and Teradata chose to implement already is basically
> subset of jsonpath (some proposals/implementations also include
> lax/strict prefix keyword on top of that). I think that should give us
> some hint on what the base functionality should look like.

Yes, that'd be base functionality.  You can go above and beyond.

I agree with Pavel that jq could be used as a user-defined function, but
proper integration would be better because it would avoid the need to
format and parse JSON around calls to jq, and also because PG could
compile jq programs when preparing SQL statements.  Besides, the libjq
jv API is *very* nice.

Nico
-- 




Re: [HACKERS] Exclude pg_largeobject form pg_dump

2016-11-29 Thread Stephen Frost
Guillaume, Amul,

* Amul Sul (sula...@gmail.com) wrote:
> The following review has been posted through the commitfest application:
> make installcheck-world:  tested, passed
> Implements feature:   tested, passed
> Spec compliant:   not tested
> Documentation:tested, passed
> 
> Patch v6 looks good to me, passing to committer.
> 
> Thanks !
> 
> The new status of this patch is: Ready for Committer

I've pushed this patch.  In the future if you are working with the
pg_dump TAP tests and add a new 'run', be sure to update the 'tests' to
indicate which tests should 'like' or 'unlike' that run.  If you have
any questions, feel free to ask.

Thanks!

Stephen




Re: [HACKERS] Tackling JsonPath support

2016-11-29 Thread Petr Jelinek
On 29/11/16 07:37, Pavel Stehule wrote:
> 
> 
> 2016-11-29 7:34 GMT+01:00 Christian Convey :
> 
> On Mon, Nov 28, 2016 at 9:28 PM, Pavel Stehule
> wrote:
> 
> We now support XPath function - JSONPath is similar to XPath -
> it is better for user, because have to learn only one language.
> 
> 
> I'm not sure I understand.
> 
> Are you suggesting that we use XPath, not JSONPath, as our language
> for json-path expressions?
> 
> 
> surely not.
> 
> follow ANSI/SQL :)
> 

Just to add to this, the SQL/JSON proposals I've seen so far, and what
Oracle, MSSQL and Teradata chose to implement already, are basically a
subset of jsonpath (some proposals/implementations also include a
lax/strict prefix keyword on top of that). I think that should give us
some hint on what the base functionality should look like.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Tackling JsonPath support

2016-11-29 Thread Christian Convey
On Mon, Nov 28, 2016 at 10:37 PM, Pavel Stehule 
wrote:

>
>
> 2016-11-29 7:34 GMT+01:00 Christian Convey :
>
>> On Mon, Nov 28, 2016 at 9:28 PM, Pavel Stehule 
>> wrote:
>>
>>> We now support XPath function - JSONPath is similar to XPath -
>>> it is better for user, because have to learn only one language.
>>>
>>
>> I'm not sure I understand.
>>
>> Are you suggesting that we use XPath, not JSONPath, as our language for
>> json-path expressions?
>>
>
> surely not.
>
> follow ANSI/SQL :)
>

I see.  Then I'm afraid I still don't understand what your main point was
when you wrote:

> We now support XPath function - JSONPath is similar to XPath -
> it is better for user, because have to learn only one language.
>

- C


Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-29 Thread Karl O. Pinc
Hi Gilles,

On Sun, 27 Nov 2016 21:54:46 +0100
Gilles Darold  wrote:

> I've attached the v15 of the patch that applies your changes/fixes ...

Per the following:

On Mon, 21 Nov 2016 13:17:17 -0500
Robert Haas  wrote:

> It would really be much better to submit anything that's not
> absolutely necessary for the new feature as a separate patch, rather
> than bundling things together.

The following patch should really be submitted (along with
your patch) as a separate and independent patch:

patch_pg_current_logfile-v14.diff.cleanup_rotate

It introduces no new functionality and only cleans up existing code.
The idea is to give the maintainers only one thing to consider
at a time.  It can be looked at along with your patch since it
touches the same code but, like the GUC symbol patch, it's
not a necessary part of your patch's functionality.

At present patch_pg_current_logfile-v14.diff.cleanup_rotate is bundled
in with the v15 patch.

Regards,

Karl 
Free Software:  "You don't pay back, you pay forward."
 -- Robert A. Heinlein




Re: [HACKERS] Thinko in set_rel_consider_parallel()

2016-11-29 Thread Tom Lane
Amit Langote  writes:
> The following looks like a thinko, which fixed in attached:
> -Oid proparallel = func_parallel(...
> +char proparallel = func_parallel(...

Pushed, thanks.

regards, tom lane




Re: [HACKERS] Dynamic shared memory areas

2016-11-29 Thread Robert Haas
More review:

+ * For large objects, we just stick all of the allocations in fullness class
+ * 0. Since we can just return the space directly to the free page manager,
+ * we don't really need them on a list at all, except that if someone wants
+ * to bulk release everything allocated using this BlockAreaContext, we
+ * have no other way of finding them.

This comment is out-of-date.

+   /*
+    * If this is the only span, and there is no active span, then maybe
+    * we should probably move this span to fullness class 1.  (Otherwise
+    * if you allocate exactly all the objects in the only span, it moves
+    * to class 3, then you free them all, it moves to 2, and then is
+    * given back, leaving no active span).
+    */

"maybe we should probably" seems to have one more doubt-expressing
word than it needs.

+   if (size_class == DSA_SCLASS_SPAN_LARGE)
+       /* Large object frees give back segments aggressively already. */
+       continue;

We generally use braces in this kind of case.

+    * Search the fullness class 1 only.  That is where we expect to find

extra "the"

+   /* Call for effect (we don't need the result). */
+   get_segment_by_index(area, index);
...
+   return area->segment_maps[index].mapped_address + offset;

It isn't guaranteed that area->segment_maps[index].mapped_address will
be non-NULL on return from get_segment_by_index, and then this
function will return a completely bogus pointer to the caller.  I
think you should probably elog() instead.

+   elog(ERROR, "dsa: can't attach to area handle %u", handle);

Avoid ":" in elog messages.   You don't really need to - and it isn't
project style to - tag these with "dsa:"; that's what \errverbose or
\set VERBOSITY verbose is for.  In this particular case, I might just
adopt the formulation from parallel.c:

ereport(ERROR,

(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("could not map dynamic shared memory segment")));

+   elog(FATAL,
+        "dsa couldn't find run of pages: fpm_largest out of sync");

Here I'd go with "dsa could not find %u free pages".

+   elog(ERROR, "dsa_pin: area already pinned");

"dsa_area already pinned"

+   elog(ERROR, "dsa_unpin: area not pinned");

"dsa_area not pinned"

+   if (segment == NULL)
+   elog(ERROR, "dsa: can't attach to segment");

As above.

+static dsa_segment_map *
+get_segment_by_index(dsa_area *area, dsa_segment_index index)
+{
+   if (unlikely(area->segment_maps[index].mapped_address == NULL))
+   {
+   dsm_handle  handle;
+   dsm_segment *segment;
+   dsa_segment_map *segment_map;
+
+   handle = area->control->segment_handles[index];

Don't you need to acquire the lock for this?

+   /* Check all currently mapped segments to find what's been freed. */
+   for (i = 0; i <= area->high_segment_index; ++i)
+   {
+   if (area->segment_maps[i].header != NULL &&
+   area->segment_maps[i].header->freed)
+   {
+   dsm_detach(area->segment_maps[i].segment);
+   area->segment_maps[i].segment = NULL;
+   area->segment_maps[i].header = NULL;
+   area->segment_maps[i].mapped_address = NULL;
+   }
+   }
+   area->freed_segment_counter = freed_segment_counter;

And this?

+/*
+ * Release a DSA area that was produced by dsa_create_in_place or
+ * dsa_attach_in_place.  It is preferable to use one of the 'dsa_on_XXX'
+ * callbacks so that this is managed automatically, because failure to release
+ * an area created in-place leaks its segments permanently.
+ */
+void
+dsa_release_in_place(void *place)
+{
+   decrement_reference_count((dsa_area_control *) place);
+}

Since this seems to be the only caller of decrement_reference_count,
you could just put the logic here.  The contract for this function is
also a bit unclear from the header comment.  I initially thought that
it was your intention that this should be called from every process
that has either created or attached the segment.  But that doesn't
seem like it will work, because decrement_reference_count calls
dsm_unpin_segment on every segment, and a segment can only be pinned
once, so everybody else would fail.  So maybe the idea is that ANY ONE
process has to call dsa_release_in_place.  But then that could lead to
failures in other backends inside get_segment_by_index(), because
segments they don't have mapped might already be gone.  OK, third try:
maybe the idea is that the LAST 

Re: [HACKERS] Compiler warnings

2016-11-29 Thread Stephen Frost
* Stephen Frost (sfr...@snowman.net) wrote:
> diff --git a/src/backend/utils/cache/plancache.c 
> b/src/backend/utils/cache/plancache.c
> new file mode 100644
> index 884cdab..b5d97c8
> *** a/src/backend/utils/cache/plancache.c
> --- b/src/backend/utils/cache/plancache.c
> *** GetCachedPlan(CachedPlanSource *plansour
> *** 1196,1204 
>*/
>   qlist = NIL;
>   }
> ! }
> ! 
> ! if (customplan)
>   {
>   /* Build a custom plan */
>   plan = BuildCachedPlan(plansource, qlist, boundParams);
> --- 1196,1203 
>*/
>   qlist = NIL;
>   }
> ! } 
> ! else
>   {
>   /* Build a custom plan */
>   plan = BuildCachedPlan(plansource, qlist, boundParams);

Meh, of course this isn't correct, since we could change 'customplan'
inside the first if() block to be true; the right answer is really to
just do:

CachedPlan *plan = NULL;

at the top and keep it simple.

ENEEDMORECOFFEE.

Thanks!

Stephen




[HACKERS] Compiler warnings

2016-11-29 Thread Stephen Frost
Greetings,

Not sure if anyone else has been seeing these, but I'm getting a bit
tired of them.  Neither is a live bug, but they also seem pretty simple
to fix.  The attached patch makes both of these warnings go away.  At
least for my common build, these are the only warnings that are thrown.

I'm building with:

gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609

.../src/backend/storage/lmgr/lwlock.c: In function ‘LWLockRelease’:
.../src/backend/storage/lmgr/lwlock.c:1802:5: warning: ‘mode’ may be used 
uninitialized in this function [-Wmaybe-uninitialized]
  if (mode == LW_EXCLUSIVE)
 ^
.../src/backend/utils/cache/plancache.c: In function ‘GetCachedPlan’:
.../src/backend/utils/cache/plancache.c:1232:9: warning: ‘plan’ may be used 
uninitialized in this function [-Wmaybe-uninitialized]
  return plan;
 ^

Thoughts?

Thanks!

Stephen
diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
new file mode 100644
index 9c6862f..909ff45
*** a/src/backend/storage/lmgr/lwlock.c
--- b/src/backend/storage/lmgr/lwlock.c
*** LWLockUpdateVar(LWLock *lock, uint64 *va
*** 1770,1776 
  void
  LWLockRelease(LWLock *lock)
  {
! 	LWLockMode	mode;
  	uint32		oldstate;
  	bool		check_waiters;
  	int			i;
--- 1770,1776 
  void
  LWLockRelease(LWLock *lock)
  {
! 	LWLockMode	mode = LW_EXCLUSIVE;
  	uint32		oldstate;
  	bool		check_waiters;
  	int			i;
diff --git a/src/backend/utils/cache/plancache.c b/src/backend/utils/cache/plancache.c
new file mode 100644
index 884cdab..b5d97c8
*** a/src/backend/utils/cache/plancache.c
--- b/src/backend/utils/cache/plancache.c
*** GetCachedPlan(CachedPlanSource *plansour
*** 1196,1204 
  			 */
  			qlist = NIL;
  		}
! 	}
! 
! 	if (customplan)
  	{
  		/* Build a custom plan */
  		plan = BuildCachedPlan(plansource, qlist, boundParams);
--- 1196,1203 
  			 */
  			qlist = NIL;
  		}
! 	} 
! 	else
  	{
  		/* Build a custom plan */
  		plan = BuildCachedPlan(plansource, qlist, boundParams);




Re: [HACKERS] Proposal for changes to recovery.conf API

2016-11-29 Thread Simon Riggs
On 14 November 2016 at 15:50, Robert Haas  wrote:
> On Sat, Nov 12, 2016 at 11:09 AM, Andres Freund  wrote:
>> I'm very tempted to rename this during the move to GUCs
> ...
>> Slightly less so, but still tempted to also rename these. They're not
>> actually necessarily pointing towards a primary, and namespace-wise
>> they're not grouped with recovery_*, which has become more important now
>> that recovery.conf isn't a separate namespace anymore.
>
> -1 for renaming these.  I don't think the current names are
> particularly bad, and I think trying to agree on what would be better
> could easily sink the whole patch.

OK, so we can move forward. Thanks.

I'm going to be doing final review and commit this week, at the
Developer meeting on Thurs and on Friday, with input in person and on
list.

-- 
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pg_dump / copy bugs with "big lines" ?

2016-11-29 Thread Daniel Verite
Alvaro Herrera wrote:

> But I realized that doing it this way is simple enough;
> and furthermore, in any platforms where int is 8 bytes (ILP64), this
> would automatically allow for 63-bit-sized stringinfos

On such platforms there's the next problem that we can't
send COPY rows through the wire protocol when they're larger
than 2GB.

Based on the tests with previous iterations of the patch that used
int64 for the StringInfo length, the COPY backend code does not
gracefully fail in that case.

What happened when trying (linux x86_64) with a 2GB-4GB row
is that the size seems to overflow and is sent as a 32-bit integer
with the MSB set, which is confusing for the client side (at least
libpq cannot deal with it).

If we consider what would happen with the latest patch on a platform
with sizeof(int)=8 and a \copy invocation like this:

\copy (select big,big,big,big,big,big,big,big,.. FROM
(select lpad('', 1024*1024*200) as big) s) TO /dev/null

if we put enough copies of "big" in the select-list to grow over 2GB,
and then over 4GB.

On a platform with sizeof(int)=4 this should normally fail over 2GB with
"Cannot enlarge string buffer containing $X bytes by $Y more bytes"

I don't have an ILP64 environment myself to test, but I suspect
it would malfunction instead of cleanly erroring out on such
platforms.

One advantage of hardcoding the StringInfo limit to 2GB independently
of sizeof(int) is to basically avoid the problem.

Also, without this limit, we can "COPY FROM/TO file" really huge rows, 4GB
and beyond, like I tried successfully during the tests mentioned upthread
(again with len as int64 on x86_64).
So such COPYs would succeed or fail depending on whether they deal with
a file or a network connection.
Do we want this difference in behavior?
(keeping in mind that they will both fail anyway if any individual field
in the row is larger than 1GB)


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite




Re: [HACKERS] Re: Use procsignal_sigusr1_handler and RecoveryConflictInterrupt() from walsender?

2016-11-29 Thread Craig Ringer
On 22 November 2016 at 17:49, Kyotaro HORIGUCHI
 wrote:

>> > Yeah, I definitely don't think it's as simple as just using
>> > procsignal_sigusr1_handler as-is. I expect we'd likely create a new
>> > global IsWalSender and ignore some RecoveryConflictInterrupt cases
>> > when in a walsender, at least PROCSIG_RECOVERY_CONFLICT_SNAPSHOT, and
>> > probably add a new case for catalog_xmin conflicts that's only acted
>> > on when IsWalSender.
>>
>> The global is unnecessary if walsenders have a different handler
>> from normal backends. If there are one or a few additional
>> reasons for the signal, sharing SendProcSignal and having a dedicated
>> handler might be better.
>
> If no behavior is shared between normal backends and walsenders, that
> would be a good reason not to share the handler function. What
> you are planning to do seems to be that case.

I've explored this some more, and it looks like using
procsignal_sigusr1_handler for handling recovery conflicts in the
walsender during logical decoding actually makes a lot of sense.
Almost all behaviour is shared, and so far I haven't needed any
special cases at all. I needed to add a new recovery signal for
conflict with catalog_xmin advance on upstream, but that was it.

Many of the cases make no sense for physical walsenders, so it
probably makes sense to bail out early if it's a physical walsender,
but for a walsender doing logical replication the only one that I
don't think makes sense is conflict with snapshot, which won't get
sent and is harmless if received.

(The comment on it is slightly wrong anyway; it claims it's only used
by normal user backends in transactions, but database conflicts are
fired even when not in an xact.)

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] tiny psql doc inconsistency

2016-11-29 Thread Stephen Frost
Fabien,

* Fabien COELHO (coe...@cri.ensmp.fr) wrote:
> While reading the documentation, I noticed a tiny inconsistency at
> the end of:
> 
>   https://www.postgresql.org/docs/devel/static/app-psql.html
> 
>  testdb=> SELECT first, second, first > 2 AS gt2 FROM my_table;
>   first | second | ge2
>  -------+--------+-----
>       1 | one    | f
>       2 | two    | f
>       3 | three  | t
>       4 | four   | t
>  (4 rows)
> 
> The third column name is not consistent with the query, both in 9.6
> and devel documentations.
> 
> Attached is a small patch to fix this.

Fix pushed.

Thanks!

Stephen




[HACKERS] Random number generation, take two

2016-11-29 Thread Heikki Linnakangas
Ok, here's my second attempt at refactoring random number generation. 
Previous attempt crashed and burned, see 
https://www.postgresql.org/message-id/e1bw3g3-0003st...@gemulon.postgresql.org. 
This addresses the issues pointed out in that thread.


The goals are:

* Have a pg_strong_random() function available in both frontend and 
backend. (Frontend support needed by upcoming SCRAM patches)


* Use a stronger generator for query cancel keys and MD5 salts than 
PostmasterRandom() that's used today.


* Still work on platforms that don't have a supported random source.

Autoconf
--------

* Configure chooses the implementation to use: OpenSSL, Windows native, 
or /dev/urandom, in that order. You can also force it, by specifying 
USE_OPENSSL_RANDOM=1, USE_WIN32_RANDOM=1 or USE_DEV_URANDOM=1 on the 
configure command line. That's useful for testing.


This is better than trying all the options at runtime, because it's 
better to fail early, at configure. Also, if e.g. OpenSSL's RAND_bytes() 
for some reason fails at runtime, you don't want to silently fall back 
to reading /dev/urandom. It really shouldn't fail in the first place, so 
we'd rather fail visibly so that the admin or developer can investigate.
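For illustration, here is a minimal sketch of just the /dev/urandom branch (the function name matches the proposal, but this is an assumed shape, not the patch itself). The point is that failure is returned to the caller so it can be reported loudly, with no silent fallback:

```c
#include <assert.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical sketch of the /dev/urandom implementation only; the real
 * function would be compiled from one of the configure-selected sources
 * (OpenSSL, Windows native, or this one). */
bool pg_strong_random(void *buf, size_t len)
{
    int     fd = open("/dev/urandom", O_RDONLY);
    size_t  done = 0;

    if (fd < 0)
        return false;           /* fail visibly, don't fall back */
    while (done < len)
    {
        ssize_t n = read(fd, (char *) buf + done, len - done);

        if (n <= 0)
        {
            close(fd);
            return false;
        }
        done += (size_t) n;
    }
    close(fd);
    return true;
}
```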


The auto-detection will not work when cross-compiling, because checking 
for the presence of /dev/urandom on the build host doesn't tell us 
whether it will be present in the target. autoconf will give an error, 
but you can override that with USE_DEV_URANDOM=1.


* On platforms that don't have OpenSSL nor /dev/urandom (Tom's HP-UX 
box, pademelon), configure will error out. But you can use 
--disable-strong-random to fall back to a less-secure built-in 
implementation. This is not done automatically, to avoid falling back to 
a less secure implementation by accident.


* When building with --disable-strong-random, the pg_strong_random() 
function is not available at all. Callers need to use #ifdef 
HAVE_STRONG_RANDOM blocks if they want to use a fallback. This is again 
to make sure that no extension, or built-in code either for that matter, 
will accidentally fall back to the less secure implementation.


* Remove the support for /dev/random (only support /dev/urandom). I 
don't think we have any platforms where /dev/urandom is not present, but 
/dev/random is. If we do, the buildfarm will tell us, but until then 
let's keep it simple.


Fallback implementation (with --disable-strong-random)
------------------------------------------------------

In postmaster, the algorithm is similar to the existing 
PostmasterRandom() function. It's based on the built-in PRNG, seeded by 
postmaster and first backend start time. It's been rewritten to fit the 
rest of the code better, however.


In backends, there is a new function, pg_backend_random(). It also uses 
the built-in PRNG, but the seed is shared between all backends, in 
shared memory. (Otherwise all backends would inherit the same seed from 
postmaster.)
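As a toy model of that seeding scheme (xorshift64 here is an arbitrary stand-in PRNG and all names are hypothetical, not the patch's code): one seed lives in shared memory, and each backend mixes in something unique to itself, so forked backends don't all produce the same stream.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: the seed is shared across backends, and each backend mixes
 * in its own PID, so backends that inherited the same state from the
 * postmaster still diverge. xorshift64 stands in for whatever PRNG the
 * real code uses. */
typedef struct { uint64_t seed; } BackendRandomShmem;

static uint64_t xorshift64(uint64_t *s)
{
    *s ^= *s << 13;
    *s ^= *s >> 7;
    *s ^= *s << 17;
    return *s;
}

uint64_t backend_random(BackendRandomShmem *shared, int my_pid)
{
    uint64_t state = shared->seed ^ ((uint64_t) my_pid * 0x9E3779B97F4A7C15u);
    uint64_t r = xorshift64(&state);

    shared->seed = state;       /* advance the shared seed for the next caller */
    return r;
}
```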


pgcrypto
--------

pgcrypto doesn't have the same requirements for "strongness" as cancel 
keys and MD5 salts have. pgcrypto uses random numbers for generating 
salts, too, which I think has similar requirements. But it also uses 
random numbers for generating encryption keys, which I believe ought to 
be harder to predict. If you compile with --disable-strong-random, do we 
want the encryption key generation routines to fail, or to return 
known-weak keys?


This patch removes the Fortuna algorithm, which was used to generate 
fairly strong random numbers when OpenSSL was not present. One option 
would be to keep that code as a fallback. I wanted to get rid of it, 
since it's only used on a few old platforms, but OTOH it's been around 
for a long time with few issues.


As this patch stands, it removes Fortuna, and returns known-weak keys, 
but there's a good argument to be made for throwing an error instead.



Phew, this has been way more complicated than it seemed at first. Thoughts?

- Heikki


0001-Replace-PostmasterRandom-with-a-stronger-source-seco.patch
Description: application/download



[HACKERS] tiny psql doc inconsistency

2016-11-29 Thread Fabien COELHO


Hello,

While reading the documentation, I noticed a tiny inconsistency at the 
end of:


https://www.postgresql.org/docs/devel/static/app-psql.html

 testdb=> SELECT first, second, first > 2 AS gt2 FROM my_table;
  first | second | ge2
 -------+--------+-----
      1 | one    | f
      2 | two    | f
      3 | three  | t
      4 | four   | t
 (4 rows)

The third column name is not consistent with the query, both in 9.6 and 
devel documentations.


Attached is a small patch to fix this.

--
Fabien.diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 2410bee..261652a 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -4172,7 +4172,7 @@ second | four
   with the \crosstabview command:
 
testdb=&gt; SELECT first, second, first &gt; 2 AS gt2 FROM my_table;
- first | second | ge2 
+ first | second | gt2 
 -------+--------+-----
      1 | one    | f
      2 | two    | f



Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-29 Thread Fabien COELHO


Hello Pavel,


Currently, psql statements are designed to do nothing on a syntax error. I am
not sure about being more strict in this case. I see strong advantages - but
it can be a little different from the current behavior.


Indeed, an error on a conditional construct should stop the script, which 
is slightly different from "go on whatever happens", which is okay in 
interactive mode.


I do not see that as an issue, just as features which are more interactive 
vs script oriented and behave accordingly: typing \if interactively 
does not make much sense, because you have already tested that something is 
there, so you know the answer and can act directly; there is no point in 
using a condition.



 \if ! :has_xxx_feature


I prefer commands instead of symbols - parsing and processing 
symbols would be more complex than what we have now. The psql parser is very 
simple - and any complex syntax requires a lot of code. \if_not


My 0,02 €, which is just a personal opinion:

I think that handling manually "!/not" would be worth the effort rather 
than having two commands, especially if the boolean expression syntax may 
be extended some day and the negative if would become obsolete.


If there is a negative condition syntax, I would slightly prefer \ifnot to 
\if_not or worse \unless. I would disapprove strongly of \unless because it 
loses the clear symmetry with a closing \fi.


--
Fabien.


Re: [HACKERS] make default TABLESPACE belong to target table.

2016-11-29 Thread Amos Bird

Amit Kapila writes:

> Another point to think in this regard is what if tomorrow somebody
> requests something similar for Create Materialized View? Isn't it
> better to introduce a GUC default_tablespace_parent or
> default_parent_tablespace?

That's exactly what I have in mind :)




[HACKERS] EvalPlanQual() doesn't follow LockTuple() pattern

2016-11-29 Thread Simon Riggs
src/backend/access/heap/README.tuplock says we do this...

LockTuple()
XactLockTableWait()
mark tuple as locked by me
UnlockTuple()

The only problem is that we don't, because EvalPlanQualFetch() does this:

XactLockTableWait()
LockTuple()

If README.tuplock's reasons for the stated lock order are correct, then
it implies that EvalPlanQual updates could be starved indefinitely,
which is probably bad.

It might also be a bug of a more serious nature, though no bug has been seen.
This was found while debugging why wait_event is not set correctly.

In any case, I can't really see any reason for this coding in
EvalPlanQual and it isn't explained in comments. Simply removing the
wait allows the access pattern to follow the documented lock order,
and allows regression tests and isolation tests to pass without
problem. Perhaps I have made an error there.

Thoughts?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


epq_locktuple.v1.patch
Description: Binary data



[HACKERS] XactLockTableWait doesn't set wait_event correctly

2016-11-29 Thread Simon Riggs
The new (9.4) XactLockTableWait() sets the reason for the wait, so that if
we emit a log message it says it's not waiting on a transaction, it's
waiting on a lock.

Nice. log_lock_waits works great.

The new (9.6) wait_event code is not set correctly. In ProcSleep() we set
the wait reason according to the actual lock tag, effectively ignoring
the information specifically provided by XactLockTableWait().

Ugh.

pg_stat_activity shows a wait_event of 'transactionid' rather than 'tuple'
pg_locks shows a transactionid lock rather than a wait for tuple lock

That looks like a bug, since we get different answers from
log_lock_waits and wait_event, which is annoying and confusing.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Parallel execution and prepared statements

2016-11-29 Thread Amit Kapila
On Tue, Nov 29, 2016 at 4:40 PM, Albe Laurenz  wrote:
> Amit Kapila wrote:
>>> But with Tobias' complete patch "make installcheck-world" succeeds.
>>
>> Okay.  I have looked into the proposed patch and found that it will
>> resolve the regression test problem (Create Table .. AS Execute).  I
>> think it is slightly prone to breakage if tomorrow we implement usage
>> of EXECUTE with statements other than Create Table (like Copy).  Have you
>> registered this patch in CF to avoid losing track?
>>
>> We have two patches (one proposed in this thread and one proposed by
>> me earlier [1]) and both solve the regression test failure.  Unless
>> there is some better approach, I think we can go with one of these.
>
> Tobias did that here: https://commitfest.postgresql.org/12/890/
>
> He added your patch as well.
>

Okay, Thanks.

Robert, do you have any better ideas for this problem?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Declarative partitioning - another take

2016-11-29 Thread Amit Langote
On 2016/11/17 20:27, Amit Langote wrote:
> On 2016/11/16 4:21, Robert Haas wrote:
>> Have you done any performance testing on the tuple routing code?
>> Suppose we insert a million (or 10 million) tuples into an
>> unpartitioned table, a table with 10 partitions, a table with 100
>> partitions, a table with 1000 partitions, and a table that is
>> partitioned into 10 partitions each of which has 10 subpartitions.
>> Ideally, the partitioned cases would run almost as fast as the
>> unpartitioned case, but probably there will be some overhead.
>> However, it would be useful to know how much.  Also, it would be
>> useful to set up the same cases with inheritance using a PL/pgsql ON
>> INSERT trigger for tuple routing and compare.  Hopefully the tuple
>> routing code is far faster than a trigger, but we should make sure
>> that's the case and look for optimizations if not.  Also, it would be
>> useful to know how much slower the tuple-mapping-required case is than
>> the no-tuple-mapping-required case.
> 
> OK, I will share the performance results soon.

Sorry about the delay; here are some numbers with the following
partitioning schema:

# plain table
create table plain (a date, b int, c int);

# partitioned table
create table ptab (a date, b int, c int) partition by range (a, b);

Partitions (not the full commands):

ptab_1 for values from ('2016-11-29',1) to ('2016-11-29', 1000);
ptab_2 for values from ('2016-11-29', 1000) to ('2016-11-29', 2000);
...
ptab_5 for values from ('2016-11-29', 4000) to ('2016-11-29', 5000);

ptab_6 for values from ('2016-11-30',1) to ('2016-11-30', 1000);
...
...
ptab_N for values from ('20XX-XX-XX', 4000) to ('20XX-XX-XX', 5000);

# inheritance partitioned table
create table itab (a date, b int, c int);
create table itab_1 (
check part_check check (a = '2016-11-29' and b >= 1 and b < 1000)
) inherits (itab);
...
create table itab_5 (
check part_check check (a = '2016-11-29' and b >= 4000 and b < 5000)
) inherits (itab);
create table itab_0006 (
check part_check check (a = '2016-11-30' and b >= 1 and b < 1000)
) inherits (itab);
...
...
create table itab_N (
check part_check check (a = '2016-11-29' and b >= 4000 and b < 5000)
) inherits (itab);


The BR trigger (on itab) procedure as follows:

CREATE OR REPLACE FUNCTION itab_ins_trig()
RETURNS TRIGGER AS $$
DECLARE
  partno text;
BEGIN
  SELECT to_char((NEW.a - '2016-11-29'::date) * 5 + NEW.b / 1000 + 1,
 'fm0') INTO partno;
  EXECUTE 'INSERT INTO itab_' || partno || ' SELECT $1.*' USING NEW;
  RETURN NULL;
END; $$ LANGUAGE plpgsql;

Note that the tuple-routing procedure above assumes a fixed-stride range
partitioning scheme (shown as tg-direct-map below).  In other cases, the
simplest approach involves defining a if-else ladder, which I tried too
(shown as tg-if-else below), but reporting times only for up to 200
partitions at most (I'm sure there might be ways to be smarter there
somehow, but I didn't; the point here may only be to compare the new
tuple-routing code's overhead vs. trigger overhead in the traditional method).

# All times in seconds (on my modestly-powerful development VM)
#
# nrows = 10,000,000 generated using:
#
# INSERT INTO $tab
# SELECT '$last'::date - ((s.id % $maxsecs + 1)::bigint || 's')::interval,
#   (random() * 5000)::int % 4999 + 1,
#case s.id % 10
#  when 0 then 'a'
#  when 1 then 'b'
#  when 2 then 'c'
#  ...
#  when 9 then 'j'
#   end
# FROM generate_series(1, $nrows) s(id)
# ORDER BY random();
#
# The first item in the select list is basically a date that won't fall
# outside the defined partitions.

Time for a plain table = 98.1 sec

#part   parted   tg-direct-map   tg-if-else
=====   ======   =============   ==========
10       114.3          1483.3        742.4
50       112.5          1476.6       2016.8
100      117.1          1498.4       5386.1
500      125.3          1475.5           --
1000     129.9          1474.4           --
5000     137.5          1491.4           --
10000    154.7          1480.9           --


Then for a 2-level partitioned table with each of the above partitions
partitioned by list (c), with 10 sub-partitions each as follows:

ptab_N_a for values in ('a');
ptab_N_b for values in ('b');
...
ptab_N_k for values in ('j');

I didn't include the times for inheritance table with a routing trigger in
this case, as it seems that the results would look something like the above:

Time for a plain table = 98.1 sec

#part   (sub-)parted
=====   ============
10   127.0
50   152.3
100  156.6
500  191.8
1000 187.3


Regarding tuple-mapping-required vs no-tuple-mapping-required, all cases
currently require tuple-mapping, because the decision is based on the
result of comparing the parent and partition TupleDescs using
equalTupleDescs(), which fails quickly because the TupleDesc.tdtypeid
values are not the same.  Anyway, I simply 

Re: [HACKERS] make default TABLESPACE belong to target table.

2016-11-29 Thread Amit Kapila
On Mon, Nov 28, 2016 at 6:19 PM, Amit Kapila  wrote:
> On Sat, Nov 26, 2016 at 9:46 PM, Tom Lane  wrote:
>>
>> If we just did points 1 and 2 then a bool GUC would suffice.  I'm
>> not sure how to handle all three cases cleanly.  We could define
>> default_index_tablespace as empty to get point 1 or a tablespace
>> name to get point 3, but that leaves us having to use some magic
>> string for point 2, which would be messy --- what if it conflicts
>> with someone's choice of a tablespace name?
>>
>
> Yeah, I think coming with a clean way to handle all three might be
> messy.  How about if just handle 2 and 3?
>

Or maybe just 1 and 2 with a bool GUC.  Another point to think in this
regard is what if tomorrow somebody requests something similar for
Create Materialized View?  Isn't it better to introduce a GUC
default_tablespace_parent or default_parent_tablespace?



-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Parallel execution and prepared statements

2016-11-29 Thread Albe Laurenz
Amit Kapila wrote:
>> But with Tobias' complete patch "make installcheck-world" succeeds.
> 
> Okay.  I have looked into the proposed patch and found that it will
> resolve the regression test problem (Create Table .. AS Execute).  I
> think it is slightly prone to breakage if tomorrow we implement usage
> of EXECUTE with statements other than Create Table (like Copy).  Have you
> registered this patch in CF to avoid losing track?
> 
> We have two patches (one proposed in this thread and one proposed by
> me earlier [1]) and both solve the regression test failure.  Unless
> there is some better approach, I think we can go with one of these.

Tobias did that here: https://commitfest.postgresql.org/12/890/

He added your patch as well.

Yours,
Laurenz Albe



Re: [HACKERS] pg_hba_file_settings view patch

2016-11-29 Thread Ashutosh Bapat
Here's a backtrace and some debugging information:
Program terminated with signal 11, Segmentation fault.
#0  0x007f96cd in shm_mq_sendv (mqh=0x121e998,
iov=0x7ffc9b7b79f0, iovcnt=2, nowait=1 '\001') at shm_mq.c:357
357Assert(mq->mq_sender == MyProc);
(gdb) where
#0  0x007f96cd in shm_mq_sendv (mqh=0x121e998,
iov=0x7ffc9b7b79f0, iovcnt=2, nowait=1 '\001') at shm_mq.c:357
#1  0x006d8387 in mq_putmessage (msgtype=88 'X', s=0x0, len=0)
at pqmq.c:165
#2  0x00515147 in ParallelWorkerMain (main_arg=141900502) at
parallel.c:1120
#3  0x00783063 in StartBackgroundWorker () at bgworker.c:726
#4  0x00795b77 in do_start_bgworker (rw=0x1216f00) at postmaster.c:5535
#5  0x00795e4f in maybe_start_bgworker () at postmaster.c:5710
#6  0x00794eb3 in sigusr1_handler (postgres_signal_arg=10) at
postmaster.c:4959
#7  
#8  0x2b005933a693 in select () from /lib/x86_64-linux-gnu/libc.so.6
#9  0x00790720 in ServerLoop () at postmaster.c:1665
#10 0x0078fe76 in PostmasterMain (argc=8, argv=0x11eef40) at
postmaster.c:1309
#11 0x006d8f1d in main (argc=8, argv=0x11eef40) at main.c:228
(gdb) p mq->mq_sender
Cannot access memory at address 0x6b636568635f707d
(gdb) p mq
$1 = (shm_mq *) 0x6b636568635f706d

Looking at this, I am wondering how that could happen with your
patches. But every time I have tried to apply your patches and run the
regression tests, I get this crash. Just now I tried the patches on an
all-new repository and reproduced the crash.

On Tue, Nov 29, 2016 at 3:10 PM, Haribabu Kommi
 wrote:
>
>
> On Tue, Nov 22, 2016 at 9:46 PM, Ashutosh Bapat
>  wrote:
>>
>>
>> It could be because of some un-initialised variable, which is
>> initialized appropriately by default on your machine but not on my
>> machine. I first applied your pg_hba_rules... patch, ran regression.
>> It didn't crash. Then I applied the patch for discard_hba... and it
>> started crashing. Does that give you any clue? Here's regression.out
>> file for make installcheck. Here is error log snippet that shows a
>> SIGSEGV there.
>> 2016-11-22 15:47:11.939 IST [86206] LOG:  worker process: parallel
>> worker for PID 86779 (PID 86780) was terminated by signal 11:
>> Segmentation fault
>> 2016-11-22 15:47:11.939 IST [86206] LOG:  terminating any other active
>> server processes
>> 2016-11-22 15:47:11.939 IST [86779] WARNING:  terminating connection
>> because of crash of another server process
>> 2016-11-22 15:47:11.939 IST [86779] DETAIL:  The postmaster has
>> commanded this server process to roll back the current transaction and
>> exit, because another server process exited abnormally and possibly
>> corrupted shared memory.
>>
>> Applying those patches in any order doesn't matter.
>
>
> I am not able to reproduce the crash in either debug or release mode
> builds, with both check and installcheck options.
>
> Can you please share a backtrace of the crash, so that it will be easier
> for me to locate the problem?
>
>
> Regards,
> Hari Babu
> Fujitsu Australia



-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] commitfest 2016-11 status summary

2016-11-29 Thread Haribabu Kommi
Hi All,


The commitfest status summary after four weeks of progress:

Needs review: 48
Waiting on author: 25
Ready for Committer: 12
Committed: 36
Moved to next CF: 15
Rejected: 5
Returned with feedback: 6
TOTAL: 147


Overall progress of completion - 42%
week-1 progress of completion - 9%
week-2 progress of completion - 3%
week-3 progress of completion - 3%
week-4 progress of completion - 13%

Because many patches were moved to the next CF, there has been good progress
this week.

As it is almost the end of the commitfest, there are many patches in
"ready for committer" state; committers, please have a look at those
patches and provide your updates.

There are many patches in "waiting on author" state; authors, please respond
to those patches, either by providing an updated patch or by moving the patch
into the next commitfest. If an author doesn't respond, the patch will be
moved into "returned with feedback" state.

Regards,
Hari Babu
Fujitsu Australia


Re: [HACKERS] Parallel execution and prepared statements

2016-11-29 Thread Amit Kapila
On Fri, Nov 18, 2016 at 9:01 PM, Albe Laurenz  wrote:
> Amit Kapila wrote:
>> Can you once test by just passing CURSOR_OPT_PARALLEL_OK in
>> PrepareQuery() and run the tests by having forc_parallel_mode=regress?
>> It seems to me that the code in the planner is changed for setting a
>> parallelModeNeeded flag, so it might work as it is.
>
> No, that doesn't work.
>
> But with Tobias' complete patch "make installcheck-world" succeeds.
>

Okay.  I have looked into the proposed patch and found that it will
resolve the regression test problem (Create Table .. AS Execute).  I
think it is slightly prone to breakage if tomorrow we implement usage
of EXECUTE with statements other than Create Table (like Copy).  Have you
registered this patch in CF to avoid losing track?

We have two patches (one proposed in this thread and one proposed by
me earlier [1]) and both solve the regression test failure.  Unless
there is some better approach, I think we can go with one of these.

[1] - 
https://www.postgresql.org/message-id/CAA4eK1L%3DtHmmHDK_KW_ja1_dusJxJF%2BSGQHi%3DAPS4MdNPk7HFQ%40mail.gmail.com
-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-29 Thread Pavel Stehule
2016-11-29 8:44 GMT+01:00 Fabien COELHO :

>
> Hello,
>
> I think it's really time we seriously considered adding some flow
>>> control logic, though.
>>>
>>
>> Yeah, maybe.  I'd be interested to see a fully worked out proposal
>> for that.
>>
>
> I agree that designing a fuller proposal before including individual parts
> would be great and result in a more consistent result.
>
> In order to bootstrap the discussion, I suggest the following:
>
>  - boolexpr is a simple "boolean" (t, non 0 int, non empty string.. as
>proposed by Corey and Pavel) or !/not boolexp ; it could be extended if
>necessary, but I would try to avoid that, as
>

Currently, psql statements are designed to do nothing on a syntax error. I am
not sure about being more strict in this case. I see strong advantages - but
it can be a little different from the current behavior.



>
>  - actual more complex expressions could be left to the server through SQL
>which simplifies the client a lot by avoiding an expression language
>altogether
>
>  - then having a conditional block is very versatile and can be adapted to
>many use cases... maybe all
>
>  - \quit CODE, or I would prefer \exit CODE, could be used to exit while
>controlling the status
>
> It could look like (although I do not like gset in this context, but
> anyway):
>
>  SELECT ... AS has_foo_extension \gset
>  SELECT ... AS has_bla_extension \gset
>  \if :has_foo_extension
>...
>  \elif :has_bla_extension
>...
>  \else -- no foo nor bla extension
>\echo please install foo or bla extension
>\exit 1
>  \fi -- extension
>  ...
>  SELECT ... AS has_xxx_feature \gset
>  \if ! :has_xxx_feature
>

I prefer commands instead of symbols - parsing and processing symbols
would be more complex than what we have now. The psql parser is very simple -
and any complex syntax requires a lot of code.

\if_not

Regards

Pavel


>   \echo "feature xxx is needed, aborting"
>   \exit 2
>  \fi
>  ...
>
> --
> Fabien
>
>
>


Re: [HACKERS] Contains and is contained by operators of inet datatypes

2016-11-29 Thread Emre Hasegeli
> I do not like this bit from the original post:
>
> EH> The patch removes the recently committed SP-GiST index support for the
> EH> existing operator symbols to give move reason to the users to use the
> EH> new symbols.

I reverted this part.  The new version of the patch is attached.


0001-inet-contain-op-v3.patch
Description: Binary data



Re: [HACKERS] pg_hba_file_settings view patch

2016-11-29 Thread Haribabu Kommi
On Tue, Nov 22, 2016 at 9:46 PM, Ashutosh Bapat <
ashutosh.ba...@enterprisedb.com> wrote:

>
> It could be because of some un-initialised variable, which is
> initialized appropriately by default on your machine but not on my
> machine. I first applied your pg_hba_rules... patch, ran regression.
> It didn't crash. Then I applied the patch for discard_hba... and it
> started crashing. Does that give you any clue? Here's regression.out
> file for make installcheck. Here is error log snippet that shows a
> SIGSEGV there.
> 2016-11-22 15:47:11.939 IST [86206] LOG:  worker process: parallel
> worker for PID 86779 (PID 86780) was terminated by signal 11:
> Segmentation fault
> 2016-11-22 15:47:11.939 IST [86206] LOG:  terminating any other active
> server processes
> 2016-11-22 15:47:11.939 IST [86779] WARNING:  terminating connection
> because of crash of another server process
> 2016-11-22 15:47:11.939 IST [86779] DETAIL:  The postmaster has
> commanded this server process to roll back the current transaction and
> exit, because another server process exited abnormally and possibly
> corrupted shared memory.
>
> Applying those patches in any order doesn't matter.
>

I am not able to reproduce the crash in either debug or release mode
builds, with both check and installcheck options.

Can you please share a backtrace of the crash, so that it will be easier
for me to locate the problem?


Regards,
Hari Babu
Fujitsu Australia


Re: [HACKERS] macaddr 64 bit (EUI-64) datatype support

2016-11-29 Thread Haribabu Kommi
On Sat, Nov 26, 2016 at 4:48 PM, Tom Lane  wrote:

> Haribabu Kommi  writes:
> > Currently the casting is supported from macaddr to macaddr8, but not the
> > other way. This is because, not all 8 byte MAC addresses can be converted
> > into 6 byte addresses.
>
> Well, yeah, so you'd throw an error if it can't be converted.  This is
> no different from casting int8 to int4, for example.  We don't refuse
> to provide that cast just because it will fail for some values.
>

Updated patch attached, with a cast function added from macaddr8 to
macaddr.

Currently there is no support for cross-type operators. Does that need
to be handled in this patch, or can it be added later if required?
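For context, the macaddr8 -> macaddr direction can only succeed for
EUI-64 addresses that were derived from an EUI-48 one, i.e. those with
FF:FE as the two middle octets; anything else has to raise a cast
error, just as casting int8 to int4 fails for out-of-range values. A
minimal sketch of that check (the helper name and error handling are
illustrative, not taken from the patch):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch: convert an 8-byte EUI-64 address to a 6-byte
 * EUI-48 one.  Only addresses whose 4th and 5th octets are FF:FE (the
 * padding inserted when an EUI-48 address is expanded to EUI-64) can
 * be converted; for anything else the caller must report a cast
 * error. */
static bool
macaddr8_to_macaddr(const uint8_t in[8], uint8_t out[6])
{
    if (in[3] != 0xFF || in[4] != 0xFE)
        return false;           /* caller raises the cast error */
    out[0] = in[0];
    out[1] = in[1];
    out[2] = in[2];
    out[3] = in[5];
    out[4] = in[6];
    out[5] = in[7];
    return true;
}
```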

Regards,
Hari Babu
Fujitsu Australia


mac_eui64_support_3.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UNDO and in-place update

2016-11-29 Thread Alexander Korotkov
On Tue, Nov 29, 2016 at 8:21 AM, Amit Kapila 
wrote:

> On Mon, Nov 28, 2016 at 11:01 PM, Robert Haas 
> wrote:
> > On Sun, Nov 27, 2016 at 10:44 PM, Amit Kapila 
> wrote:
> >> On Mon, Nov 28, 2016 at 4:50 AM, Robert Haas 
> wrote:
> >>> Well, my original email did contain a discussion of the need for
> >>> delete-marking.  I said that you couldn't do in-place updates when
> >>> indexed columns were modified unless the index AMs had support for
> >>> delete-marking, which is the same point you are making here.
> >>
> >> Sorry, I had not read that part earlier, but now that I read it, I
> >> think there is a slight difference in what I am saying.   I thought
> >> along with delete-marking, we might need transaction visibility
> >> information in the index as well.
> >
> > I think we need to avoid putting the visibility information in the
> > index because that will make the index much bigger.
> >
>
> I agree that there will be an increase in index size, but it shouldn't
> be much if we have transaction information (and that too only active
> transactions) at page level instead of at tuple level.  I think the
> number of concurrent writes on the index will be lesser as compared to
> the heap.  There are a couple of benefits of having visibility
> information in the index.
>
> a. Heap and index could be independently cleaned up and in most cases
> by foreground transactions only.  The work for vacuum will be quite
> less as compared to now.  I think this will help us in avoiding the
> bloat to a great degree.
>
> b. Improved index-only scans, because now we don't need visibility map
> of the heap to check tuple visibility.
>
> c. Some speed improvements for index scans can also be expected
> because with this we don't need to perform heap fetch for invisible
> index tuples.
>

+1
I think once we're considering marking deleted index tuples, we should
provide an option of visibility-aware indexes.
Probably it shouldn't be the only option for an UNDO-based table
engine, but it definitely should be one of the options.

d. This is the way to eventually have index-organized tables.  Once an
index is visibility-aware, it becomes possible to store data there
without a heap, but with snapshots and transactions.  It would also be
possible to achieve more unification between heap and index access
methods.  Imagine that the heap is a TID => tuple map and an index is
an index_key => tuple map.  I think the reason we distinguish between
the heap access method (which is hardcoded) and index access methods
is that the heap is visibility-aware and indexes are not.  Once both
are visibility-aware, there is not much difference between them: they
are just different kinds of maps, and they could implement the same
interface.  Imagine you could choose between a heap-organized table
and an index-organized table just by choosing its primary access
method.  If you select heap as the primary access method, indexes
would refer to TIDs.  If you select a btree on column id as the
primary access method, indexes would refer to the id column.  That
would be great extensibility, way better than what we have now.
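The "same interface" idea above could be sketched as a table of
function pointers that both a heap and a visibility-aware index would
implement.  All names here are invented for illustration; nothing
below comes from the patch set or the existing access method API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of a common "tuple map" interface.  A heap
 * would implement it as a TID => tuple map, a visibility-aware index
 * as a key => tuple map; callers would not need to care which. */
typedef struct TupleMapOps
{
    /* Insert a tuple under the given key, stamped with xmin. */
    bool (*insert)(void *state, uint64_t key, const void *tuple,
                   uint32_t xmin);

    /* Fetch the tuple for key if it is visible to the snapshot. */
    const void *(*fetch)(void *state, uint64_t key,
                         uint32_t snapshot_xid);

    /* Delete-mark the tuple under key, stamped with xmax. */
    bool (*delete_mark)(void *state, uint64_t key, uint32_t xmax);
} TupleMapOps;
```

With something like this, choosing "heap-organized" versus
"index-organized" for a table would just mean picking which
implementation backs the primary map.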

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] pg_recvlogical --endpos

2016-11-29 Thread Craig Ringer
On 22 November 2016 at 16:52, Okano, Naoki  wrote:
> On Monday, November 21, 2016 1:08 PM Craig Ringer wrote:
>> I've updated the patch for this. It's already posted on the logical
>> decoding timeline following thread, so I'll avoid repeating it here.
>>
>> https://www.postgresql.org/message-id/CAMsr%2BYGd5dv3zPNch6BU4UXX49NJDC9m3-Y%3DV5q%3DTNcE9QgSaQ%40mail.gmail.com
> I checked the latest patch.
> I think that the error message shown below is a typo.
>
>> + if (endpos != InvalidXLogRecPtr && !do_start_slot)
>> + {
>> + fprintf(stderr,
>> + _("%s: cannot use --create-slot or --drop-slot together with --endpos\n"),
> The condition '!do_start_slot' is not reflected in the error message.

Would it be better rephrased as "--endpos can only be used with --start" ?

> The patch should allow --endpos to work with --create-slot.

How? It doesn't make sense with --create-slot.

> Also, the document explains as follows.
>> +specified LSN.  If specified when not in --start
>> +mode, an error is raised.
> So when changing the error message, it would be better to output one
> that matches the documentation.

Not sure I understand this.
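For what it's worth, the rephrasing suggested above could look roughly
like this in the option-checking block (a sketch only; the function
name, variables, and exact wording are illustrative, not the actual
patch):

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative sketch of the suggested check: --endpos is only
 * meaningful in --start mode, so reject it for --create-slot or
 * --drop-slot runs with a message that names the real condition. */
static bool
check_endpos_option(bool have_endpos, bool do_start_slot,
                    const char *progname)
{
    if (have_endpos && !do_start_slot)
    {
        fprintf(stderr,
                "%s: --endpos may only be specified with --start\n",
                progname);
        return false;
    }
    return true;
}
```

This also keeps the error text in line with the documented behavior
("If specified when not in --start mode, an error is raised").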


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

