Re: [HACKERS] Broken SSL tests in master

2016-11-30 Thread Michael Paquier
On Fri, Nov 25, 2016 at 3:33 PM, Andreas Karlsson  wrote:
> On 11/25/2016 07:19 AM, Tsunakawa, Takayuki wrote:
>>
>> Specifying multiple hosts is a new feature to be introduced in v10, so
>> that's here:
>>
>> https://www.postgresql.org/docs/devel/static/libpq-connect.html
>
>
> Thanks, I had missed that patch. If we add support for multiple hosts I
> think we should also allow for specifying the corresponding multiple
> hostaddrs.
>
> Another thought about this code: should we not check if it is a unix socket
> first before splitting the host? While I doubt that it is common to have a
> unix socket in a directory with comma in the path it is a bit annoying that
> we no longer support this.

I had a look at the proposed patch as well as the multi-host
infrastructure that Robert has introduced, and as far as I can see the
contract of PQHost() is broken because of this code in
connectOptions2:
    if (conn->pghostaddr != NULL && conn->pghostaddr[0] != '\0')
    {
        conn->connhost[0].host = strdup(conn->pghostaddr);
        if (conn->connhost[0].host == NULL)
            goto oom_error;
        conn->connhost[0].type = CHT_HOST_ADDRESS;
    }
This arbitrarily fills the first entry of the host array with a host
IP, which breaks when combined with this code in PQhost():
    if (!conn)
        return NULL;
    if (conn->connhost != NULL)
        return conn->connhost[conn->whichhost].host;
    else if (conn->pghost != NULL && conn->pghost[0] != '\0')
        return conn->pghost;
So this makes PQhost() return an IP address even when a host name is
available, which is not correct per what PQhost() should do. If connhost
has multiple entries, it is definitely right to return the one
whichhost is pointing to and not fall back to pghost, which may be a
comma-separated list of names. But if pghostaddr is defined *and*
there is a name available, we had better return the name that
whichhost is pointing to. The most correct fix, in my opinion, would be
something like this:

-   conn->connhost[0].host = strdup(conn->pghostaddr);
-   if (conn->connhost[0].host == NULL)
+   if (conn->pghost != NULL && conn->pghost[0] != '\0')
+   {
+   char *e = conn->pghost;
+
+   /*
+* Search for the end of the first hostname; a comma or
+* end-of-string acts as a terminator.
+*/
+   while (*e != '\0' && *e != ',')
+   ++e;
+
+   /* Copy the hostname whose bounds we just identified. */
+   conn->connhost[0].host =
+   (char *) malloc(sizeof(char) * (e - conn->pghost + 1));
+   if (conn->connhost[0].host == NULL)
+   goto oom_error;
+   memcpy(conn->connhost[0].host, conn->pghost, e - conn->pghost);
+   conn->connhost[0].host[e - conn->pghost] = '\0';
+   }
+   else
+   {
+   conn->connhost[0].host = strdup(conn->pghostaddr);
+   if (conn->connhost[0].host == NULL)
+   goto oom_error;
+   }
+
+   conn->connhost[0].hostaddr = strdup(conn->pghostaddr);
+   if (conn->connhost[0].hostaddr == NULL)
goto oom_error;
conn->connhost[0].type = CHT_HOST_ADDRESS;

This breaks the case where users specify both host and hostaddr in a
connection string, as it would force users to do an extra lookup to
find which IP address a host name maps to, which is not good.

As we know that if hostaddr is specified the connhost array will have
exactly one entry, wouldn't the most correct fix, based on the current
implementation of multi-host and its limitations, be to tweak PQhost()
so that the first element in pghost is returned to the caller instead
of an entry of type CHT_HOST_ADDRESS? And if pghost is NULL, we should
return PG_DEFAULT_HOST, which matches the behavior from before
multi-host was implemented. We definitely need to make PQhost() not
return any host IPs when host names are available, while not forgetting
the case where a host is itself specified as an IP address.

Robert, others, what do you think?
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Write Ahead Logging for Hash Indexes

2016-11-30 Thread Amit Kapila
On Wed, Nov 9, 2016 at 7:40 PM, Amit Kapila  wrote:
> On Tue, Nov 8, 2016 at 10:56 PM, Jeff Janes  wrote:
>>
>> Unless we want to wait until that work is committed before doing more review
>> and testing on this.
>>
>
> The concurrent hash index patch is getting changed and some of the
> changes needs change in this patch as well.  So, I think after it gets
> somewhat stabilized, I will update this patch as well.
>

Now that the concurrent hash index patch is committed [1], I will work
on rebasing this patch.  Note that I have moved this to the next CF.


[1] - 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=6d46f4783efe457f74816a75173eb23ed8930020

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Hash Indexes

2016-11-30 Thread Amit Kapila
On Thu, Dec 1, 2016 at 2:18 AM, Robert Haas  wrote:
> On Wed, Nov 23, 2016 at 8:50 AM, Amit Kapila  wrote:
>> [ new patch ]
>
> Committed with some further cosmetic changes.
>

Thank you very much.

> I think it would be worth testing this code with very long overflow
> chains by hacking the fill factor up to 1000
>

1000 is not a valid value for fill factor.  Did you mean 100?

> or something of that
> sort, so that we get lots and lots of overflow pages before we start
> splitting.  I think that might find some bugs that aren't obvious
> right now because most buckets get split before they even have a
> single overflow bucket.
>
> Also, the deadlock hazards that we talked about upthread should
> probably be documented in the README somewhere, along with why we're
> OK with accepting those hazards.
>

That makes sense.  I will send a patch along those lines.



-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-30 Thread Pavel Stehule
2016-12-01 5:37 GMT+01:00 Tom Lane :

> Peter Eisentraut  writes:
> > OK, I got it.  The component of concern is the DocBook XSL stylesheets,
> > called docbook-style-xsl on RH-like systems (docbook-xsl on Debian).  If
> > it runs too slow, it's probably too old.
>
> OK, I updated docbook-style-xsl to 1.79.1 from Fedora rawhide (building
> and installing that was quite painless btw, didn't need a pile of build
> dependencies like I'd feared it would take).  The extraneous commas are
> gone, and the speed is better but still not really up to DSSSL speed:
> 1m44s (vs 1m5s with old toolchain).  So there's some additional piece
> that needs fixing, but that's certainly the worst of it.
>

It does much more intensive I/O work; I have a feeling that there is
intensive fsync activity.

Regards

Pavel

>
> regards, tom lane
>


Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-30 Thread Tom Lane
Peter Eisentraut  writes:
> OK, I got it.  The component of concern is the DocBook XSL stylesheets,
> called docbook-style-xsl on RH-like systems (docbook-xsl on Debian).  If
> it runs too slow, it's probably too old.

OK, I updated docbook-style-xsl to 1.79.1 from Fedora rawhide (building
and installing that was quite painless btw, didn't need a pile of build
dependencies like I'd feared it would take).  The extraneous commas are
gone, and the speed is better but still not really up to DSSSL speed:
1m44s (vs 1m5s with old toolchain).  So there's some additional piece
that needs fixing, but that's certainly the worst of it.

regards, tom lane




Re: [HACKERS] pg_sequence catalog

2016-11-30 Thread Peter Eisentraut
On 11/11/16 10:06 PM, Andreas Karlsson wrote:
> As pointed out by Peter this patch also requires the changes to 
> pg_upgrade. I have not looked at those patches.

The required changes to pg_upgrade have been committed, so that will
work now.

> - The pg_dump tests fails due to the pg_dump code not being updated. I 
> have attached a patch which fixes this.

fixed

> I was a bit worried that the extra syscache lookups might slow down 
> nextval(), but I got a measurable speed up on a very simple workload 
> which consisted of only calls to nextval() to one sequence. The speedup 
> was about 10% on my machine.

I have done a fair amount of performance testing and the results were
usually in the neighborhood of 1%-2% slower or faster, nothing really
clear.  But running nextval by itself in a tight loop isn't really a
normal use.  The important thing is the concurrency behavior.

>  > @@ -1155,6 +1156,8 @@ doDeletion(const ObjectAddress *object, int flags)
>  > else
>  > heap_drop_with_catalog(object->objectId);
>  > }
>  > +   if (relKind == RELKIND_SEQUENCE)
>  > +   DeleteSequenceTuple(object->objectId);
>  > break;
>  > }
> 
> I think it might be cleaner here to put this as a "else if" just like 
> "relKind == RELKIND_INDEX".

The sequence tuple has to be deleted in addition to the heap drop.  I
have added a comment to make this clearer.

> The patch does not update catalogs.sgml which it should do.

added

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 3a0680ec57e29d5b8b498253975d3451b99a2c8d Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Wed, 30 Nov 2016 12:00:00 -0500
Subject: [PATCH v3] Add pg_sequence system catalog

Move sequence metadata (start, increment, etc.) into a proper system
catalog instead of storing it in the sequence heap object.  This
separates the metadata from the sequence data.  Sequence metadata is now
operated on transactionally by DDL commands, whereas previously
rollbacks of sequence-related DDL commands would be ignored.
---
 doc/src/sgml/catalogs.sgml|  89 +-
 src/backend/catalog/Makefile  |   2 +-
 src/backend/catalog/dependency.c  |   6 +
 src/backend/catalog/information_schema.sql|  13 +-
 src/backend/catalog/system_views.sql  |  16 +-
 src/backend/commands/sequence.c   | 381 +++---
 src/backend/utils/cache/syscache.c|  12 +
 src/bin/pg_dump/pg_dump.c |  22 +-
 src/include/catalog/indexing.h|   3 +
 src/include/catalog/pg_sequence.h |  30 ++
 src/include/commands/sequence.h   |  29 +-
 src/include/utils/syscache.h  |   1 +
 src/test/regress/expected/rules.out   |  18 +-
 src/test/regress/expected/sanity_check.out|   1 +
 src/test/regress/expected/sequence.out|  33 ++-
 src/test/regress/expected/updatable_views.out |  93 +++
 src/test/regress/sql/sequence.sql |   8 +
 src/test/regress/sql/updatable_views.sql  |   2 +-
 18 files changed, 490 insertions(+), 269 deletions(-)
 create mode 100644 src/include/catalog/pg_sequence.h

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 561e228..76d115b 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -256,6 +256,11 @@ System Catalogs
  
 
  
+  pg_sequence
+  information about sequences
+ 
+
+ 
   pg_shdepend
   dependencies on shared objects
  
@@ -1541,7 +1546,8 @@ pg_class
The catalog pg_class catalogs tables and most
everything else that has columns or is otherwise similar to a
table.  This includes indexes (but see also
-   pg_index), sequences, views, materialized
+   pg_index), sequences (but see also
+   pg_sequence), views, materialized
views, composite types, and TOAST tables; see relkind.
Below, when we mean all of these
kinds of objects we speak of relations.  Not all
@@ -5453,6 +5459,87 @@ pg_seclabel Columns
   
  
 
+ 
+  pg_sequence
+
+  
+   pg_sequence
+  
+
+  
+   The catalog pg_sequence contains information about
+   sequences.  Some of the information about sequences, such as the name and
+   the schema, is in
+   pg_class.
+  
+
+  
+   pg_sequence Columns
+
+   
+
+ 
+  Name
+  Type
+  References
+  Description
+ 
+
+
+
+ 
+  seqrelid
+  oid
+  pg_class.oid
+  The OID of the pg_class entry for this sequence
+ 
+
+ 
+  seqstart
+  int8
+  
+  Start value of the sequence
+ 
+
+ 
+  seqincrement
+  int8
+  
+  Increment value of the sequence
+ 
+
+ 
+  seqmax
+  int8
+  
+  Maximum value of the sequence
+ 
+
+ 
+  seqmin
+ 

Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-30 Thread Peter Eisentraut
On 11/30/16 1:52 PM, Tom Lane wrote:
> However, speed may be the least of its problems.  I just noticed that it's
> inserting commas at random places in syntax summaries :-(.  For instance,
> the "overlay" entry in table 9.8 looks like
> 
> overlay(string, placing
> string, from int [for int])
> 
> Neither comma belongs there according to the SGML source, and I don't see
> them in guaibausaurus' rendering of the page:
> https://www.postgresql.org/docs/devel/static/functions-string.html
> 
> So I'm forced to the conclusion that I need a newer version of the
> toolchain and/or style sheets.  If you've got any idea of just what
> needs to be updated, that would be real helpful.  xsltproc itself
> is from "libxslt-1.1.26-2.el6_3.1.x86_64" but I'm unsure what packages
> contain relevant style sheets.

OK, I got it.  The component of concern is the DocBook XSL stylesheets,
called docbook-style-xsl on RH-like systems (docbook-xsl on Debian).  If
it runs too slow, it's probably too old.

Here you can see a list of available versions:
http://docbook.sourceforge.net/release/xsl/

I noticed a significant slow-down with versions older than 1.76.1.  And
indeed CentOS/RHEL 6 comes with 1.75.2.

Also, the issue with the extra commas mentioned above goes away with 1.78.0.

Here is why this isn't reproducible for some people:

The local stylesheet file stylesheet.xsl references
http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl.  If
you have the docbook-style-xsl package installed, then this URL gets
redirected to your local installation through the XML catalog mechanism.
 If you don't have the package installed locally, then xsltproc will
download the stylesheet files from that actual URL and cache them
locally.  So if you have an old docbook-style-xsl version, you get old
and slow behavior.  If you uninstall it or just never installed it, you
get the latest from the internet.

If you don't want to mess with your local packages, you can also prevent
the use of the XML catalog by setting the environment variable
XML_CATALOG_FILES to empty (e.g., XML_CATALOG_FILES='' make html).

There is some work to be done here to document this and make sure we
wrap releases with appropriate versions and so on, but I hope this
information can keep everyone moving for now.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PATCH: two slab-like memory allocators

2016-11-30 Thread Tomas Vondra



Dne 11/27/2016 v 11:02 PM Andres Freund napsal(a):

On 2016-11-27 22:21:49 +0100, Petr Jelinek wrote:

On 27/11/16 21:47, Andres Freund wrote:

Hi,


+typedef struct SlabBlockData *SlabBlock;   /* forward reference */
+typedef struct SlabChunkData *SlabChunk;

Can we please not continue hiding pointers behind typedefs? It's a bad
pattern, and that it's fairly widely used isn't a good excuse to
introduce further usages of it.



Why is it a bad pattern?


It hides what is passed by reference, and what by value, and it makes it
a guessing game whether you need -> or . since you don't know whether
it's a pointer or the actual object. All to save a * in parameter and
variable declaration?...



FWIW I don't like that pattern either although it's used in many
parts of our code-base.


But relatively few new ones, most of it is pretty old.



I do agree it's not a particularly pretty pattern, but in this case it's 
fairly isolated in the mmgr sources, and I quite value the consistency 
in this part of the code (i.e. that aset.c, slab.c and generation.c all 
use the same approach). So I haven't changed this.


The attached v7 fixes the off-by-one error in slab.c, causing failures 
in test_decoding isolation tests, and renames Gen to Generation, as 
proposed by Petr.


regards
Tomas




slab-allocators-v7.tgz
Description: application/compressed-tar



Re: [HACKERS] pgcrypto compilation error due to stack-allocated EVP_CIPHER_CTX

2016-11-30 Thread Andreas Karlsson

On 12/01/2016 02:48 AM, Andres Freund wrote:

It appears openssl has removed the public definition of EVP_CIPHER_CTX
leading to pgcrypto failing with:


Yes, I believe this is one of the changes in OpenSSL 1.1. I guess you 
might be the first one to try to compile with 1.1 since 
5ff4a67f63fd6d3eb01ff9707d4674ed54a89f3b was pushed.


If we do not already have one, I think we should get a buildfarm animal 
with OpenSSL 1.1.


Andreas




Re: [HACKERS] Improve hash-agg performance

2016-11-30 Thread Andres Freund
Hi,

On 2016-11-03 04:07:21 -0700, Andres Freund wrote:
> Hi,
>
> There's two things I found while working on faster expression
> evaluation, slot deforming and batched execution. As those two issues
> often seemed quite dominant cost-wise it seemed worthwhile to evaluate
> them independently.
>
> 1) We atm do one ExecProject() to compute each aggregate's
>arguments. Turns out it's noticeably faster to compute the argument
>for all aggregates in one go. Both because it reduces the amount of
>function call / moves more things into a relatively tight loop, and
>because it allows to deform all the required columns at once, rather
>than one-by-one.  For a single aggregate it'd be faster to avoid
>ExecProject alltogether (i.e. directly evaluate the expression as we
>used to), but as soon you have two the new approach is faster.
>
> 2) For hash-aggs we right now we store the representative tuple using
>the input tuple's format, with unneeded columns set to NULL. That
>turns out to be expensive if the aggregated-on columns are not
>leading columns, because we have to skip over a potentially large
>number of NULLs.  The fix here is to simply use a different tuple
>format for the hashtable.  That doesn't cause overhead, because we
>already move columns in/out of the hashslot explicitly anyway.

I pushed a bit more polished versions of these.

Andres




[HACKERS] pgcrypto compilation error due to stack-allocated EVP_CIPHER_CTX

2016-11-30 Thread Andres Freund
Hi,

It appears openssl has removed the public definition of EVP_CIPHER_CTX
leading to pgcrypto failing with:

/home/andres/src/postgresql/contrib/pgcrypto/openssl.c:253:17: error: field 
‘evp_ctx’ has incomplete type
  EVP_CIPHER_CTX evp_ctx;
 ^~~
/home/andres/src/postgresql/contrib/pgcrypto/openssl.c: In function 
‘bf_check_supported_key_len’:
/home/andres/src/postgresql/contrib/pgcrypto/openssl.c:373:17: error: storage 
size of ‘evp_ctx’ isn’t known
  EVP_CIPHER_CTX evp_ctx;
 ^~~
/home/andres/src/postgresql/contrib/pgcrypto/openssl.c:373:17: warning: unused 
variable ‘evp_ctx’ [-Wunused-variable]
make[3]: *** [openssl.o] Error 1

seems we need to allocate using EVP_CIPHER_CTX_new() instead.

Am I the only one seeing this?

It looks like EVP_CIPHER_CTX_new() has been available for a long time:
commit b40228a61d2f9b40fa6a834c9beaa8ee9dc490c1
Author: Dr. Stephen Henson 
Date:   2005-12-02 13:46:39 +

New functions to support opaque EVP_CIPHER_CTX handling.

Greetings,

Andres Freund




Re: [HACKERS] Minor correction in alter_table.sgml

2016-11-30 Thread Amit Langote
On 2016/12/01 1:17, Tom Lane wrote:
> Stephen Frost  writes:
>> Seems like this would be a bit better:
> 
>> --
>> All the actions, when acting on a single table and not using the ALL IN
>> TABLESPACE form, except RENAME and SET SCHEMA, can be combined into a
>> list of multiple alterations to be applied.
>> --
> 
>> I note that we say 'in parallel', but given that we have actual parallel
>> operations now, we should probably shy away from using that except in
>> cases where we actually mean operations utilizing multiple parallel
>> processes.
> 
> I follow your beef with use of the word "parallel", but your proposed
> rewording loses the entire point of multiple actions per ALTER TABLE;
> namely that they're accomplished without repeated scans of the table.
> 
> Also the above seems a bit clunky; doesn't ALL IN TABLESPACE fall outside
> the restriction "acting on a single table"?
> 
> So maybe something like
> 
>   All the forms of ALTER TABLE that act on a single table,
>   except RENAME and SET SCHEMA, can be combined into a
>   list of multiple alterations to be applied together.

Updated patch attached.

> We would have to enlarge on what "together" means, but I think there may
> already be text explaining that further down.

There is this explanation:

   
The main reason for providing the option to specify multiple changes
in a single ALTER TABLE is that multiple table scans or
rewrites can thereby be combined into a single pass over the table.
   

Thanks,
Amit
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index e48ccf2..9e79e08 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -708,12 +708,11 @@ ALTER TABLE ALL IN TABLESPACE name
   
 
   
-   All the actions except RENAME,
-   SET TABLESPACE and SET SCHEMA
-   can be combined into
-   a list of multiple alterations to apply in parallel.  For example, it
-   is possible to add several columns and/or alter the type of several
-   columns in a single command.  This is particularly useful with large
+   All the forms of ALTER TABLE that act on a single table,
+   except RENAME and SET SCHEMA, can be
+   combined into a list of multiple alterations to be applied together.
+   For example, it is possible to add several columns and/or alter the type of
+   several columns in a single command.  This is particularly useful with large
tables, since only one pass over the table need be made.
   
 



[HACKERS] monitoring.sgml - clarify length of query text displayed in pg_stat_statements

2016-11-30 Thread Ian Barwick

Hi

Small doc patch to clarify how much of the query text is show in 
pg_stat_statements
and a link to the relevant GUC.


Regards

Ian Barwick

--
  Ian Barwick   http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
new file mode 100644
index 3de489e..02dab87
*** a/doc/src/sgml/monitoring.sgml
--- b/doc/src/sgml/monitoring.sgml
*** postgres   27093  0.0  0.0  30096  2752
*** 785,791 
   Text of this backend's most recent query. If
state is active this field shows the
currently executing query. In all other states, it shows the last query
!   that was executed.
   
  
 
--- 785,793 
   Text of this backend's most recent query. If
state is active this field shows the
currently executing query. In all other states, it shows the last query
!   that was executed. By default the query text is truncated at 1024
!   characters; this value can be changed via the parameter
!   .
   
  
 




Re: [HACKERS] Logical Replication WIP

2016-11-30 Thread Petr Jelinek
On 30/11/16 22:37, Peter Eisentraut wrote:
> I have taken the libpqwalreceiver refactoring patch and split it into
> two: one for the latch change, one for the API change.  I have done some
> mild editing.
> 
> These two patches are now ready to commit in my mind.
> 

Hi, this looks good to me. Do you plan to commit it soon, or would you
rather I resubmit the patches rebased on top of it (and including it)
first?

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pgbench - allow backslash continuations in \set expressions

2016-11-30 Thread Tom Lane
Fabien COELHO  writes:
>> In short, I want to mark this RWF for today and ask for a version that
>> applies globally to all backslash commands in psql and pgbench.

> I'm not sure such a simple feature deserves so much energy.

It's not a "simple feature".  As you have it, it's a non-orthogonal wart.
It's like telling users that the return key only works in \set commands
and elsewhere you have to type shift-return to end a line.  I'm fine with
setting up a global policy that backslash-newline works to continue
backslash commands across lines, but I'm not fine with saying that it
only works in \set.  That's just weird, and the fact that you can do it
with a trivial patch doesn't make it less weird.  You're proposing to
put a new cognitive load on users because you can't be bothered to write
a patch that's not half-baked.  We don't do things like that around here.

FWIW, I looked a bit further and concluded that probably psqlscan.l
doesn't need to be modified; so likely you could do this across all of
pgbench's commands without touching psql.  That might be an acceptable
compromise for now, though I still think that as soon as we have this
for pgbench, users will start wanting it in psql.

regards, tom lane




Re: [HACKERS] pgbench - allow backslash continuations in \set expressions

2016-11-30 Thread Fabien COELHO


Hello Tom,


In short, I want to mark this RWF for today and ask for a version that
applies globally to all backslash commands in psql and pgbench.


Hmmm.

The modus operandi of backslash command scanning is to switch to possibly 
another scanner just after scanning the backslash command, so I did the 
simple thing, which is to handle this in the expression scanner. A simple 
feature achieved with simple code.


Factoring out the behavior means handling continuations from the master 
scanner, probably by using some other intermediate buffer which is then 
rescanned...


This implies that all backslash commands share some kind of minimal 
lexical convention that \ followed by a (\r|\n|\r\n) is a continuation...


But then what if the backslash command accepts quoted strings: should a 
continuation still be a continuation inside quotes? If not, how do I know? 
Are all currently existing backslash commands compatible with a common set 
of lexical conventions? Or should some commands accept backslash 
continuations and others not, and have to be treated differently and 
knowingly by the lexer?


There is also the issue of locating error messages if the continuation is 
in another buffer; probably someone will complain.


Basically, many issues arise, all of them somehow solvable, but I'm afraid 
they require many more lines than my essentially one-line patch:-)


I'm not sure such a simple feature deserves so much energy.

--
Fabien.




Re: [HACKERS] Add support for restrictive RLS policies

2016-11-30 Thread Stephen Frost
* Stephen Frost (sfr...@snowman.net) wrote:
> * Jeevan Chalke (jeevan.cha...@enterprisedb.com) wrote:
> > 4. It will be good if we have an example for this in section
> > "5.7. Row Security Policies"
> 
> I haven't added one yet, but will plan to do so.

I've now added and cleaned up the Row Security Policies section to
discuss restrictive policies and to include an example.  I also added
some documentation to ALTER POLICY.

I've also re-based the patch to current master, but the only conflict
was in the pg_dump regression test definition, which was easily
corrected.

Unless there's further comments, I'll plan to push this sometime
tomorrow.

Thanks!

Stephen
From 066575b8f5112c9750a9005e2f50219d044a980c Mon Sep 17 00:00:00 2001
From: Stephen Frost 
Date: Thu, 1 Sep 2016 02:11:30 -0400
Subject: [PATCH] Add support for restrictive RLS policies

We have had support for restrictive RLS policies since 9.5, but they
were only available through extensions which use the appropriate hooks.
This adds support into the grammar, catalog, psql and pg_dump for
restrictive RLS policies, thus reducing the cases where an extension is
necessary.

In passing, also move away from using "AND"d and "OR"d in comments.
As pointed out by Alvaro, it's not really appropriate to attempt
to make verbs out of "AND" and "OR", so reword those comments which
attempted to.

Reviewed By: Jeevan Chalke
Discussion: https://postgr.es/m/20160901063404.gy4...@tamriel.snowman.net
---
 doc/src/sgml/ddl.sgml |  58 ++-
 doc/src/sgml/ref/alter_policy.sgml|   7 +-
 doc/src/sgml/ref/create_policy.sgml   |  28 
 src/backend/catalog/system_views.sql  |   6 +
 src/backend/commands/policy.c |   9 +
 src/backend/nodes/copyfuncs.c |   1 +
 src/backend/nodes/equalfuncs.c|   1 +
 src/backend/parser/gram.y |  43 +++--
 src/backend/rewrite/rowsecurity.c |  42 +++--
 src/bin/pg_dump/pg_dump.c |  69 +---
 src/bin/pg_dump/pg_dump.h |   3 +-
 src/bin/pg_dump/t/002_pg_dump.pl  |  33 +++-
 src/bin/psql/describe.c   | 100 ---
 src/bin/psql/tab-complete.c   |  29 +++-
 src/include/catalog/pg_policy.h   |  16 +-
 src/include/nodes/parsenodes.h|   1 +
 src/include/rewrite/rowsecurity.h |   1 +
 src/test/regress/expected/rowsecurity.out | 264 --
 src/test/regress/expected/rules.out   |   4 +
 src/test/regress/sql/rowsecurity.sql  |  32 +++-
 20 files changed, 599 insertions(+), 148 deletions(-)

diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml
index 157512c..7e1bc0e 100644
--- a/doc/src/sgml/ddl.sgml
+++ b/doc/src/sgml/ddl.sgml
@@ -1599,9 +1599,11 @@ REVOKE ALL ON accounts FROM PUBLIC;
 
   
When multiple policies apply to a given query, they are combined using
-   OR, so that a row is accessible if any policy allows
-   it.  This is similar to the rule that a given role has the privileges
-   of all roles that they are a member of.
+   either OR (for permissive policies, which are the
+   default) or using AND (for restrictive policies).
+   This is similar to the rule that a given role has the privileges
+   of all roles that they are a member of.  Permissive vs. restrictive
+   policies are discussed further below.
   
 
   
@@ -1764,6 +1766,56 @@ UPDATE 1
 
 
   
+   All of the policies constructed thus far have been permissive policies,
+   meaning that when multiple policies are applied they are combined using
+   the "OR" boolean operator.  While permissive policies can be constructed
+   to only allow access to rows in the intended cases, it can be simpler to
+   combine permissive policies with restrictive policies (which the records
+   must pass and which are combined using the "AND" boolean operator).
+   Building on the example above, we add a restrictive policy to require
+   the administrator to be connected over a local unix socket to access the
+   records of the passwd table:
+  
+
+
+CREATE POLICY admin_local_only ON passwd AS RESTRICTIVE TO admin
+USING (pg_catalog.inet_client_addr() IS NULL);
+
+
+  
+   We can then see that an administrator connecting over a network will not
+   see any records, due to the restrictive policy:
+  
+
+
+=&gt; SELECT current_user;
+ current_user 
+--------------
+ admin
+(1 row)
+
+=&gt; select inet_client_addr();
+ inet_client_addr 
+------------------
+ 127.0.0.1
+(1 row)
+
+
+=&gt; TABLE passwd;
+ user_name | pwhash | uid | gid | real_name | home_phone | extra_info | home_dir | shell 
+-----------+--------+-----+-----+-----------+------------+------------+----------+-------
+(0 rows)
+
+=&gt; UPDATE passwd set pwhash = NULL;
+UPDATE 0
+
+
+  
Referential integrity checks, such as unique or primary key constraints
and foreign key references, always bypass row security to 

Re: [HACKERS] Logical Replication WIP

2016-11-30 Thread Peter Eisentraut
I have taken the libpqwalreceiver refactoring patch and split it into
two: one for the latch change, one for the API change.  I have done some
mild editing.

These two patches are now ready to commit in my mind.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>From dc95826ff5018be1f001bead888b16ea3effe099 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Wed, 30 Nov 2016 12:00:00 -0500
Subject: [PATCH 1/2] Use latch instead of select() in walreceiver

Replace use of poll()/select() by WaitLatchOrSocket(), which is more
portable and flexible.

Also change walreceiver to use its procLatch instead of a custom latch.

From: Petr Jelinek 
---
 src/backend/postmaster/pgstat.c|   3 +
 .../libpqwalreceiver/libpqwalreceiver.c| 101 +
 src/backend/replication/walreceiver.c  |  18 ++--
 src/backend/replication/walreceiverfuncs.c |   6 +-
 src/include/pgstat.h   |   1 +
 src/include/replication/walreceiver.h  |   3 +-
 6 files changed, 43 insertions(+), 89 deletions(-)

diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index a392197..c7584cb 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -3338,6 +3338,9 @@ pgstat_get_wait_client(WaitEventClient w)
 		case WAIT_EVENT_WAL_RECEIVER_WAIT_START:
 			event_name = "WalReceiverWaitStart";
 			break;
+		case WAIT_EVENT_LIBPQWALRECEIVER_READ:
+			event_name = "LibPQWalReceiverRead";
+			break;
 		case WAIT_EVENT_WAL_SENDER_WAIT_WAL:
 			event_name = "WalSenderWaitForWAL";
 			break;
diff --git a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c
index f1c843e..6c01e7b 100644
--- a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c
+++ b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c
@@ -23,19 +23,11 @@
 #include "pqexpbuffer.h"
 #include "access/xlog.h"
 #include "miscadmin.h"
+#include "pgstat.h"
 #include "replication/walreceiver.h"
+#include "storage/proc.h"
 #include "utils/builtins.h"
 
-#ifdef HAVE_POLL_H
-#include <poll.h>
-#endif
-#ifdef HAVE_SYS_POLL_H
-#include <sys/poll.h>
-#endif
-#ifdef HAVE_SYS_SELECT_H
-#include <sys/select.h>
-#endif
-
 PG_MODULE_MAGIC;
 
 void		_PG_init(void);
@@ -59,7 +51,6 @@ static void libpqrcv_send(const char *buffer, int nbytes);
 static void libpqrcv_disconnect(void);
 
 /* Prototypes for private functions */
-static bool libpq_select(int timeout_ms);
 static PGresult *libpqrcv_PQexec(const char *query);
 
 /*
@@ -367,67 +358,6 @@ libpqrcv_readtimelinehistoryfile(TimeLineID tli,
 }
 
 /*
- * Wait until we can read WAL stream, or timeout.
- *
- * Returns true if data has become available for reading, false if timed out
- * or interrupted by signal.
- *
- * This is based on pqSocketCheck.
- */
-static bool
-libpq_select(int timeout_ms)
-{
-	int			ret;
-
-	Assert(streamConn != NULL);
-	if (PQsocket(streamConn) < 0)
-		ereport(ERROR,
-				(errcode_for_socket_access(),
-				 errmsg("invalid socket: %s", PQerrorMessage(streamConn))));
-
-	/* We use poll(2) if available, otherwise select(2) */
-	{
-#ifdef HAVE_POLL
-		struct pollfd input_fd;
-
-		input_fd.fd = PQsocket(streamConn);
-		input_fd.events = POLLIN | POLLERR;
-		input_fd.revents = 0;
-
-		ret = poll(&input_fd, 1, timeout_ms);
-#else			/* !HAVE_POLL */
-
-		fd_set		input_mask;
-		struct timeval timeout;
-		struct timeval *ptr_timeout;
-
-		FD_ZERO(&input_mask);
-		FD_SET(PQsocket(streamConn), &input_mask);
-
-		if (timeout_ms < 0)
-			ptr_timeout = NULL;
-		else
-		{
-			timeout.tv_sec = timeout_ms / 1000;
-			timeout.tv_usec = (timeout_ms % 1000) * 1000;
-			ptr_timeout = &timeout;
-		}
-
-		ret = select(PQsocket(streamConn) + 1, &input_mask,
-					 NULL, NULL, ptr_timeout);
-#endif   /* HAVE_POLL */
-	}
-
-	if (ret == 0 || (ret < 0 && errno == EINTR))
-		return false;
-	if (ret < 0)
-		ereport(ERROR,
-				(errcode_for_socket_access(),
-				 errmsg("select() failed: %m")));
-	return true;
-}
-
-/*
  * Send a query and wait for the results by using the asynchronous libpq
  * functions and the backend version of select().
  *
@@ -470,14 +400,31 @@ libpqrcv_PQexec(const char *query)
 		 */
 		while (PQisBusy(streamConn))
 		{
+			int			rc;
+
 			/*
 			 * We don't need to break down the sleep into smaller increments,
-			 * and check for interrupts after each nap, since we can just
-			 * elog(FATAL) within SIGTERM signal handler if the signal arrives
-			 * in the middle of establishment of replication connection.
+			 * since we'll get interrupted by signals and can either handle
+			 * interrupts here or elog(FATAL) within SIGTERM signal handler if
+			 * the signal arrives in the middle of establishment of
+			 * replication connection.
 			 */
-			if (!libpq_select(-1))
-continue;		/* interrupted */
+			ResetLatch(&MyProc->procLatch);
+			rc = 

Re: [HACKERS] pgsql: Add putenv support for msvcrt from Visual Studio 2013

2016-11-30 Thread Michael Paquier
On Thu, Dec 1, 2016 at 1:24 AM, Christian Ullrich  wrote:
> * From: Michael Paquier
>
>> With 0005 I am seeing a compilation failure: you need to have the
>> declarations in the _MSC_VER block at the beginning of the routine. It
>
> Sorry, too used to C++.
>
>> would be nice to mention in a code comment that this what Noah has
>> mentioned upthread: if a CRT loads while pgwin32_putenv() is
>> executing, the newly-loaded CRT will always have the latest value.
>
> I fixed the existing comment by removing the last sentence that is in the 
> wrong place now, but I don't see the point in suddenly starting to explain 
> why it is done this way and not the other.
>
>> Based on that 0006 will need a rebase, nothing huge though.
>
> Done, new revisions attached.

Okay, switched as ready for committer.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash Indexes

2016-11-30 Thread Robert Haas
On Wed, Nov 23, 2016 at 8:50 AM, Amit Kapila  wrote:
> [ new patch ]

Committed with some further cosmetic changes.  I guess I won't be very
surprised if this turns out to have a few bugs yet, but I think it's
in fairly good shape at this point.

I think it would be worth testing this code with very long overflow
chains by hacking the fill factor up to 1000 or something of that
sort, so that we get lots and lots of overflow pages before we start
splitting.  I think that might find some bugs that aren't obvious
right now because most buckets get split before they even have a
single overflow bucket.

Also, the deadlock hazards that we talked about upthread should
probably be documented in the README somewhere, along with why we're
OK with accepting those hazards.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Random number generation, take two

2016-11-30 Thread Heikki Linnakangas

On 11/30/2016 09:05 PM, Alvaro Herrera wrote:
> Heikki Linnakangas wrote:
>> On 11/30/2016 09:01 AM, Michael Paquier wrote:
>>> It is important that this value [nonce] be different for each
>>> authentication (see [RFC4086] for more details on how to achieve
>>> this)
>>
>> So the nonces need to be different for each session, to avoid replay
>> attacks. But they don't necessarily need to be unpredictable, they are
>> transmitted in plaintext during the authentication, anyway. If an attacker
>> can calculate them in advance, it only buys him more time, but doesn't give
>> any new information.
>>
>> If we were 100% confident on that point, we could just always use current
>> timestamp and a counter for the nonces. But I'm not that confident,
>> certainly feels better to use a stronger random number when available.
>
> Hmm, if enough internal server state leaks through the nonce (PID
> generation rate), since the generating algorithm is known, isn't it
> feasible for an attacker to predict future nonces?

Yes.

> That would make brute-force attacks practical.

For SCRAM, you still need to reverse the SHA-256 hash that's used in the 
protocol. That's not practical even if you have plenty of time.

Reversing the MD5 hash used in MD5 authentication, on the other hand... 
But note that this patch makes the situation better for platforms that 
do have a strong random source. Currently, we always rely on random(), 
but with this patch, we'll use a strong source.

> Perhaps it's enough to have a #define to enable a weak RNG to be used
> for nonces when --disable-strong-random. That way you're protected by
> default because the auth mechanism doesn't even work if you don't
> have a strong RNG, but you can enable it knowingly if you so desire.

That's overdoing it, IMHO. Any modern system will have a source of 
randomness, we're in practice only talking about pademelon and similar 
ancient or super-exotic systems. And --disable-strong-random is the 
escape hatch for that: you can use it if you don't care, but it makes it 
an explicit decision.


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench - allow backslash continuations in \set expressions

2016-11-30 Thread Tom Lane
Rafia Sabih  writes:
> On Wed, Nov 9, 2016 at 3:28 PM, Fabien COELHO  wrote:
>>>> +1.  My vote is for backslash continuations.

>> I'm fine with that!

> Looks good to me also.

I concur that we don't want implicit continuation; that creates too many
special cases and will certainly fail to generalize to other backslash
commands.  My gripe with pgbench-set-continuation-1.patch is actually
the latter: I do not like the idea that this works only for \set and
not other backslash commands.  I think we should have uniform behavior
across all backslash commands, which probably means implementing this
in psqlscan.l not exprscan.l, which probably means that it would apply
in psql as well as pgbench.  But I argue that's a Good Thing.
I certainly don't see that psql needs this less ... try a \copy command
with a complicated SELECT source sometime.

In short, I want to mark this RWF for today and ask for a version that
applies globally to all backslash commands in psql and pgbench.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] EvalPlanQual() doesn't follow LockTuple() pattern

2016-11-30 Thread Robert Haas
On Tue, Nov 29, 2016 at 6:26 AM, Simon Riggs  wrote:
> src/backend/access/heap/README.tuplock says we do this...
>
> LockTuple()
> XactLockTableWait()
> mark tuple as locked by me
> UnlockTuple()
>
> only problem is we don't... because EvalPlanQualFetch() does this
>
> XactLockTableWait()
> LockTuple()

Hmm.  Yeah.  Actually, it doesn't do LockTuple() directly but just
calls heap_lock_tuple(), which calls heap_acquire_tuplock(), which
calls LockTupleTuplock(), which calls LockTuple().   The words "lock"
and "tuple" are overloaded to the point of total confusion here, which
may account for the oversight you spotted.

> If README.tuplock's reasons for the stated lock order is correct then
> it implies that EvalPlanQual updates could be starved indefinitely,
> which is probably bad.

I suspect that it's pretty hard to hit the starvation case in
practice, because EvalPlanQual updates are pretty rare.  It's hard to
imagine a stream of updates all hitting the same tuple for long enough
to cause a serious problem.  Eventually EvalPlanQualFetch would win
the race, I think.

> It might also be a bug of more serious nature, though no bug seen.
> This was found while debugging why wait_event not set correctly.
>
> In any case, I can't really see any reason for this coding in
> EvalPlanQual and it isn't explained in comments. Simply removing the
> wait allows the access pattern to follow the documented lock order,
> and allows regression tests and isolation tests to pass without
> problem. Perhaps I have made an error there.

That might cause a problem because of this intervening test, for the
reasons explained in the comment:

                /*
                 * If tuple was inserted by our own transaction, we have to check
                 * cmin against es_output_cid: cmin >= current CID means our
                 * command cannot see the tuple, so we should ignore it. Otherwise
                 * heap_lock_tuple() will throw an error, and so would any later
                 * attempt to update or delete the tuple.  (We need not check cmax
                 * because HeapTupleSatisfiesDirty will consider a tuple deleted
                 * by our transaction dead, regardless of cmax.) We just checked
                 * that priorXmax == xmin, so we can test that variable instead of
                 * doing HeapTupleHeaderGetXmin again.
                 */
                if (TransactionIdIsCurrentTransactionId(priorXmax) &&
                    HeapTupleHeaderGetCmin(tuple.t_data) >= estate->es_output_cid)
                {
                    ReleaseBuffer(buffer);
                    return NULL;
                }

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Random number generation, take two

2016-11-30 Thread Alvaro Herrera
Heikki Linnakangas wrote:
> On 11/30/2016 09:01 AM, Michael Paquier wrote:

> > It is important that this value [nonce] be different for each
> > authentication (see [RFC4086] for more details on how to achieve
> > this)
> 
> So the nonces need to be different for each session, to avoid replay
> attacks. But they don't necessarily need to be unpredictable, they are
> transmitted in plaintext during the authentication, anyway. If an attacker
> can calculate them in advance, it only buys him more time, but doesn't give
> any new information.
> 
> If we were 100% confident on that point, we could just always use current
> timestamp and a counter for the nonces. But I'm not that confident,
> certainly feels better to use a stronger random number when available.

Hmm, if enough internal server state leaks through the nonce (PID
generation rate), since the generating algorithm is known, isn't it
feasible for an attacker to predict future nonces?  That would make
brute-force attacks practical.  Perhaps it's enough to have a #define to
enable a weak RNG to be used for nonces when --disable-strong-random.
That way you're protected by default because the auth mechanism doesn't
even work if you don't have a strong RNG, but you can enable it
knowingly if you so desire.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-30 Thread Tom Lane
I wrote:
> Still sucks for me on an up-to-date RHEL6 box: about 1m5s to build oldhtml,
> about 4m50s to build html, both starting after "make maintainer-clean" in
> the doc/src/sgml/ subdirectory.

However, speed may be the least of its problems.  I just noticed that it's
inserting commas at random places in syntax summaries :-(.  For instance,
the "overlay" entry in table 9.8 looks like

overlay(string, placing
string, from int [for int])

Neither comma belongs there according to the SGML source, and I don't see
them in guaibausaurus' rendering of the page:
https://www.postgresql.org/docs/devel/static/functions-string.html

So I'm forced to the conclusion that I need a newer version of the
toolchain and/or style sheets.  If you've got any idea of just what
needs to be updated, that would be real helpful.  xsltproc itself
is from "libxslt-1.1.26-2.el6_3.1.x86_64" but I'm unsure what packages
contain relevant style sheets.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Mail thread references in commits

2016-11-30 Thread Greg Stark
On 30 November 2016 at 16:19, Andrew Dunstan  wrote:
>
> https://www.postgresql.org/message-id/cab7npqthydyf-fo+fzvxrhz-7_hptm4rodbcsy9-noqhvet...@mail.gmail.com
>
> I'll be interested to know if it breaks anyone's MUA. If it doesn't all we
> will be arguing about are aesthetics, and I'm a firm believer in function
> over form.

I can't say I feel especially strongly either way on this but just to
toss out a small thing that might make a small difference

If you happen to know how your message-ids are generated then you
might be able to do something useful with them. For instance, you
could search all git commits for urls to messages you wrote -- for
instance any commit that has CAB7nPq is referencing a message written
by Michael Paquier.

On the other hand you could put something naughty in the message-id
forcing everyone else to use URLs with dirty words in them. Or with
words like "terrorist" in them. Or with some javascript/html injection
attack of some sort...

-- 
greg


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Parallel execution and prepared statements

2016-11-30 Thread Robert Haas
On Tue, Nov 29, 2016 at 6:24 AM, Amit Kapila  wrote:
> On Tue, Nov 29, 2016 at 4:40 PM, Albe Laurenz  wrote:
>> Amit Kapila wrote:
>>>> But with Tobias' complete patch "make installcheck-world" succeeds.
>>>
>>> Okay.  I have looked into patch proposed and found that it will
>>> resolve the regression test problem (Create Table .. AS Execute).  I
>>> think it is slightly prone to breakage if tomorrow we implement usage
>>> of EXECUTE with statements other Create Table (like Copy).  Have you
>>> registered this patch in CF to avoid loosing track?
>>>
>>> We have two patches (one proposed in this thread and one proposed by
>>> me earlier [1]) and both solves the regression test failure.  Unless
>>> there is some better approach, I think we can go with one of these.
>>
>> Tobias did that here: https://commitfest.postgresql.org/12/890/
>>
>> He added your patch as well.
>>
>
> Okay, Thanks.
>
> Robert, do you have any better ideas for this problem?

Not really.  I think your prepared_stmt_parallel_query_v2.patch is
probably the best approach proposed so far, but I wonder why we need
to include DestCopyOut and DestTupleStore.  DestIntoRel and
DestTransientRel both write to an actual relation, which is a problem
for parallel mode, but I think the others don't.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Fix typo in ecpg.sgml

2016-11-30 Thread Tobias Bussmann
Hi,

there is a missing "EXEC" in ecpg.sgml in the list of transaction management 
commands.

Attached a trivial patch to fix this.

Best regards
Tobias




ecpg-doc-typo.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-30 Thread Tom Lane
Peter Eisentraut  writes:
> On 11/16/16 3:59 PM, Peter Eisentraut wrote:
>> On my machine and on the build farm, the performance now almost matches
>> the DSSSL build.

Still sucks for me on an up-to-date RHEL6 box: about 1m5s to build oldhtml,
about 4m50s to build html, both starting after "make maintainer-clean" in
the doc/src/sgml/ subdirectory.

BTW, I notice the "make check-tabs" step isn't getting run with the new
target; is that intentional?

> Anyone who is still getting terrible performance (>2x slower) from the
> html build, please send me the output of
> xsltproc --profile --timing --stringparam pg.version '10devel'
> stylesheet.xsl postgres.xml 2>profile.txt
> so I can look into it.

It wasn't that big, so attached.

regards, tom lane

Parsing stylesheet stylesheet.xsl took 3 ms
Parsing document postgres.xml took 258 ms
number  match                 name / mode                Calls Tot 100us    Avg

     0                        gentext.template          740222   13445998    18
     1                        l10n.language             604192    5579772     9
     2  footnote              footnote.number               36    2458970 68304
     3                        href.target                31220    1170552    37
     4                        gentext.template.exists   547516     651313     1
     5                        inherited.table.attribute 124970     449412     3
     6  *                     html.title.attribute      181126     421298     2
     7                        gentext                    13441     307385    22
     8                        chunk-all-sections          1332     299884   225
     9                        chunk                     237364     299160     1
    10                        substitute-markup          86180     200978     2
    11                        simple.xlink               91501     193179     2
    12  entry|entrytbl        entry                      22086     184189     8
    13                        dbhtml-dir                276346     167630     0
    14                        lookup.key                306424     160120     0
    15  *                     recursive-chunk-filename  102309     154378     1
    16  xref                  no.anchor.mode                70     131384  1876
    17                        pi-attribute              382931     130482     0
    18                        xpath.location            167791     123089     0
    19                        anchor                    171571     114268     0
    20  *                     head.keywords.content       5300     100556    18
    21  sect1                 label.markup               26594      97906     3
    22  chapter               label.markup               32343      88112     2
    23                        dir                       175593      86788     0
    24                        object.id                 195572      85495     0
    25                        get-attribute             628313      84789     0
    26  *                     title.markup              168553      80976     0
    27  figure|table|example  label.markup                1226      77663    63
    28                        colnum.colspec            127081      76139     0
    29                        write.chunk                 1333      74765    56
    30  appendix              label.markup               26516      70527     2
    31                        autolabel.format          101251      62680     0
    32                        pi.dbhtml_dir             276346      60904     0
    33                        dbhtml-attribute          382900      58037     0
    34                        header.navigation           1332      57161    42
    35  *                     common.html.attributes    101057      56244     0
    36  html:p|p              unwrap.p                   33426      55942     1
    37                        label.this.section         75320      55522     0
    38                        inline.monoseq             56958      54528     0
    39                        footer.navigation           1332      53082    39
    40  *                     class.attribute

Re: [HACKERS] GIN non-intrusive vacuum of posting tree

2016-11-30 Thread Andrew Borodin
Here is v1 of the patch. Now it has changes for README and contains
more comments clarifying changes of locking model.

Also I will elaborate some more on what is patched. Main portion of
changes is made to function ginVacuumPostingTreeLeaves(). Before the
patch it was traversing tree in depth-first fashion, acquiring
exclusive lock on each page and removing dead tuples from leafs. Also
this function used to hold cleanup lock. Now this function is doing
same DFS, but without cleanup lock, acquiring only read locks on inner
pages and exclusive lock on leafs before eliminating dead tuples. If
this function finds empty leaves, it computes the minimal subtree
containing only empty pages and starts a scan for empty pages from the
parent page pointing to that subtree.

This scan acquires cleanup lock on root of scan (not necessarily root
of posting tree). Cleanup lock ensures no inserts are inside subtree.
Scan traverses subtree DF taking exclusive locks from left to right.
For any page being deleted all leftmost pages were locked and unlocked
before. New reads cannot enter subtree, all old readscans were
excluded by lock\unlock. Thus there shall not be deadlocks with
ginStepRight().

We get lock on page being deleted, then on a left page.
ginStepRight() takes lock on left page, then on right page. But its
presence is excluded by cleanup lock and DFS scan with locks of upper
and left parts of tree.


Thank you for reading this. Concurrency bothers me a lot. If you see
that anything is wrong or suspicious, please do not hesitate to post
your thoughts.


Best regards, Andrey Borodin.
diff --git a/src/backend/access/gin/README b/src/backend/access/gin/README
index fade0cb..29dafce 100644
--- a/src/backend/access/gin/README
+++ b/src/backend/access/gin/README
@@ -314,10 +314,16 @@ deleted.
 The previous paragraph's reasoning only applies to searches, and only to
 posting trees. To protect from inserters following a downlink to a deleted
 page, vacuum simply locks out all concurrent insertions to the posting tree,
-by holding a super-exclusive lock on the posting tree root. Inserters hold a
-pin on the root page, but searches do not, so while new searches cannot begin
-while root page is locked, any already-in-progress scans can continue
-concurrently with vacuum. In the entry tree, we never delete pages.
+by holding a super-exclusive lock on the parent page of subtree with deletable
+pages. Inserters hold a pin on the root page, but searches do not, so while
+new searches cannot begin while root page is locked, any already-in-progress
+scans can continue concurrently with vacuum in corresponding subtree of
+posting tree. To exclude interference with readers vacuum takes exclusive
+locks in a depth-first scan in left-to-right order of page tuples. Leftmost
+page is never deleted. Thus before deleting any page we obtain exclusive
+lock on any left page, effectively excluding deadlock with any reader, despite
+taking parent lock first and not holding left lock at all.
+In the entry tree, we never delete pages.
 
 (This is quite different from the mechanism the btree indexam uses to make
 page-deletions safe; it stamps the deleted pages with an XID and keeps the
diff --git a/src/backend/access/gin/ginbtree.c b/src/backend/access/gin/ginbtree.c
index a0afec4..dc28c81 100644
--- a/src/backend/access/gin/ginbtree.c
+++ b/src/backend/access/gin/ginbtree.c
static void ginFinishSplit(GinBtree btree, GinBtreeStack *stack,
 /*
  * Lock buffer by needed method for search.
  */
-static int
+int
 ginTraverseLock(Buffer buffer, bool searchMode)
 {
Pagepage;
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index 2685a1c..9bd2f50 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -108,75 +108,17 @@ xlogVacuumPage(Relation index, Buffer buffer)
PageSetLSN(page, recptr);
 }
 
-static bool
-ginVacuumPostingTreeLeaves(GinVacuumState *gvs, BlockNumber blkno, bool isRoot, Buffer *rootBuffer)
-{
-   Buffer  buffer;
-   Pagepage;
-   boolhasVoidPage = FALSE;
-   MemoryContext oldCxt;
-
-   buffer = ReadBufferExtended(gvs->index, MAIN_FORKNUM, blkno,
-   RBM_NORMAL, gvs->strategy);
-   page = BufferGetPage(buffer);
-
-   /*
-* We should be sure that we don't concurrent with inserts, insert process
-* never release root page until end (but it can unlock it and lock
-* again). New scan can't start but previously started ones work
-* concurrently.
-*/
-   if (isRoot)
-   LockBufferForCleanup(buffer);
-   else
-   LockBuffer(buffer, GIN_EXCLUSIVE);
-
-   Assert(GinPageIsData(page));
-
-   if (GinPageIsLeaf(page))
-   {
-   oldCxt = MemoryContextSwitchTo(gvs->tmpCxt);
-   

Re: [HACKERS] Mail thread references in commits

2016-11-30 Thread Andrew Dunstan



On 11/30/2016 10:46 AM, Heikki Linnakangas wrote:
> Agreed. I just did my first commit with the shortened URL, and I
> didn't like it. If we want to use URLs, let's use the canonical
> https://www.postgresql.org/message-id/ format.
>
>> Do we know of an actual length limit that is useful to aim for?  If we
>> just make it just a bit shorter but it's still too long for a large
>> class of email readers, depending on the message ID format, then it's
>> not that useful.
>
> Right, we can't control the length of the URL, because it contains the
> message-id, which can be arbitrarily long.



Here is a url with one of the long gmail Message-IDs that I arbitrarily picked 
out of a recent mail thread:

https://www.postgresql.org/message-id/cab7npqthydyf-fo+fzvxrhz-7_hptm4rodbcsy9-noqhvet...@mail.gmail.com

I'll be interested to know if it breaks anyone's MUA. If it doesn't all we will 
be arguing about are aesthetics, and I'm a firm believer in function over form.


cheers

andrew





--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] can pg 9.6 vacuum freeze skip page on index?

2016-11-30 Thread xu jian
Hello,

   Please excuse me if I am using the wrong mailing list, but I asked the 
question in pgsql-admin and it looks like no one knows the answer.


We upgraded our pg db to 9.6; as we know, pg 9.6 doesn't need a full table scan 
for vacuum freeze.

http://rhaas.blogspot.com/2016/03/no-more-full-table-vacuums.html


So we thought that if we have already run vacuum freeze on the table, and there has 
been no change on the table since the previous vacuum freeze, it should finish much faster.


However, it doesn't behave the way we expected: the next run of vacuum freeze still 
takes a long time. Then we ran vacuum freeze with verbose, and we noticed it spends a 
long time scanning the index.

It seems that even when all rows are frozen on the data pages, vacuum freeze still 
needs to scan all the index pages. If we drop the index, then vacuum freeze finishes 
immediately.


Does anyone know if it is true?


Btw, our table is large, and has about 40GB of index files.  Is there any way to 
make vacuum freeze faster in this case?


Thanks for the help.


James



Re: [HACKERS] pgsql: Add putenv support for msvcrt from Visual Studio 2013

2016-11-30 Thread Christian Ullrich
* From: Michael Paquier

> With 0005 I am seeing a compilation failure: you need to have the
> declarations in the _MSC_VER block at the beginning of the routine. It

Sorry, too used to C++.

> would be nice to mention in a code comment that this what Noah has
> mentioned upthread: if a CRT loads while pgwin32_putenv() is
> executing, the newly-loaded CRT will always have the latest value.

I fixed the existing comment by removing the last sentence that is in the wrong 
place now, but I don't see the point in suddenly starting to explain why it is 
done this way and not the other.

> Based on that 0006 will need a rebase, nothing huge though.

Done, new revisions attached.

-- 
Christian



0005-Do-the-process-environment-update-first.patch
Description: 0005-Do-the-process-environment-update-first.patch


0006-Fix-the-unload-race-as-best-we-can.patch
Description: 0006-Fix-the-unload-race-as-best-we-can.patch

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Minor correction in alter_table.sgml

2016-11-30 Thread Tom Lane
Stephen Frost  writes:
> Seems like this would be a bit better:

> --
> All the actions, when acting on a single table and not using the ALL IN
> TABLESPACE form, except RENAME and SET SCHEMA, can be combined into a
> list of multiple alterations to be applied.
> --

> I note that we say 'in parallel', but given that we have actual parallel
> operations now, we should probably shy away from using that except in
> cases where we actually mean operations utilizing multiple parallel
> processes.

I follow your beef with use of the word "parallel", but your proposed
rewording loses the entire point of multiple actions per ALTER TABLE;
namely that they're accomplished without repeated scans of the table.

Also the above seems a bit clunky; doesn't ALL IN TABLESPACE fall outside
the restriction "acting on a single table"?

So maybe something like

All the forms of ALTER TABLE that act on a single table,
except RENAME and SET SCHEMA, can be combined into a
list of multiple alterations to be applied together.

We would have to enlarge on what "together" means, but I think there may
already be text explaining that further down.

regards, tom lane




Re: [HACKERS] Declarative partitioning - another take

2016-11-30 Thread Amit Langote
On Thu, Dec 1, 2016 at 12:48 AM, Robert Haas  wrote:
> On Fri, Nov 25, 2016 at 5:49 AM, Amit Langote wrote:
>> On 2016/11/25 11:44, Robert Haas wrote:
>>> On Thu, Nov 24, 2016 at 6:13 AM, Amit Langote wrote:
>>>> Also, it does nothing to help the undesirable situation that one can
>>>> insert a row with a null partition key (expression) into any of the range
>>>> partitions if targeted directly, because of how ExecQual() handles
>>>> nullable constraint expressions (treats null value as satisfying the
>>>> partition constraint).
>>>
>>> That's going to have to be fixed somehow.  How bad would it be if we
>>> passed ExecQual's third argument as false for partition constraints?
>>> Or else you could generate the actual constraint as expr IS NOT NULL
>>> AND expr >= lb AND expr < ub.
>>
>> About the former, I think that might work.  If a column is NULL, it would
>> be caught in ExecConstraints() even before ExecQual() is called, because
>> of the NOT NULL constraint.  If an expression is NULL, or for some reason,
>> the partitioning operator (=, >=, or <) returned NULL even for a non-NULL
>> column or expression, then ExecQual() would fail if we passed false for
>> resultForNull.  Not sure if that would be violating the SQL specification
>> though.
>
> I don't think the SQL specification can have anything to say about an
> implicit constraint generated as an implementation detail of our
> partitioning implementation.

Yeah, I thought so too.

>> The latter would work too.  But I guess we would only emit expr IS NOT
>> NULL, not column IS NOT NULL, because columns are covered by NOT NULL
>> constraints.
>
> Right.

The latest patch I posted earlier today has this implementation.
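To illustrate the NULL-handling point discussed above, here is a small
standalone sketch — plain Python mimicking SQL three-valued logic, not
PostgreSQL source; the function names (e.g. `exec_qual`) are illustrative only:

```python
# Sketch: how a range-partition constraint like "expr >= lb AND expr < ub"
# behaves under SQL three-valued logic, and why the treatment of a NULL
# qual result (ExecQual's resultForNull) matters.

def sql_ge(a, b):
    # SQL comparison: any comparison involving NULL yields NULL (None here)
    return None if a is None or b is None else a >= b

def sql_lt(a, b):
    return None if a is None or b is None else a < b

def sql_and(x, y):
    # SQL three-valued AND: false dominates, then unknown (NULL)
    if x is False or y is False:
        return False
    if x is None or y is None:
        return None
    return True

def exec_qual(expr, lb, ub, result_for_null):
    """Mimics ExecQual: a NULL qual result maps to result_for_null."""
    result = sql_and(sql_ge(expr, lb), sql_lt(expr, ub))
    return result_for_null if result is None else result

# A NULL partition key "passes" the constraint when NULL counts as true...
assert exec_qual(None, 10, 20, result_for_null=True) is True
# ...but is rejected when NULL counts as false, which is also the effect of
# adding an explicit "expr IS NOT NULL" arm to the generated constraint.
assert exec_qual(None, 10, 20, result_for_null=False) is False
assert exec_qual(15, 10, 20, result_for_null=False) is True
```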

Thanks,
Amit




Re: [HACKERS] Declarative partitioning - another take

2016-11-30 Thread Robert Haas
On Tue, Nov 29, 2016 at 6:24 AM, Amit Langote
 wrote:
> # All times in seconds (on my modestly-powerful development VM)
> #
> # nrows = 10,000,000 generated using:
> #
> # INSERT INTO $tab
> # SELECT '$last'::date - ((s.id % $maxsecs + 1)::bigint || 's')::interval,
> #   (random() * 5000)::int % 4999 + 1,
> #case s.id % 10
> #  when 0 then 'a'
> #  when 1 then 'b'
> #  when 2 then 'c'
> #  ...
> #  when 9 then 'j'
> #   end
> # FROM generate_series(1, $nrows) s(id)
> # ORDER BY random();
> #
> # The first item in the select list is basically a date that won't fall
> # outside the defined partitions.
>
> Time for a plain table = 98.1 sec
>
> #part    parted    tg-direct-map    tg-if-else
> =====    ======    =============    ==========
> 10        114.3           1483.3         742.4
> 50        112.5           1476.6        2016.8
> 100       117.1           1498.4        5386.1
> 500       125.3           1475.5            --
> 1000      129.9           1474.4            --
> 5000      137.5           1491.4            --
> 10000     154.7           1480.9            --

Very nice!

Obviously, it would be nice if the overhead were even lower, but it's
clearly a vast improvement over what we have today.

> Regarding tuple-mapping-required vs no-tuple-mapping-required, all cases
> currently require tuple-mapping, because the decision is based on the
> result of comparing parent and partition TupleDesc using
> equalTupleDescs(), which fails so quickly because TupleDesc.tdtypeid are
> not the same.  Anyway, I simply commented out the tuple-mapping statement
> in ExecInsert() to observe just slightly improved numbers as follows
> (comparing with numbers in the table just above):
>
> #part    (sub-)parted
> =====    ============
> 10       113.9 (vs. 127.0)
> 100      135.7 (vs. 156.6)
> 500      182.1 (vs. 191.8)

I think you should definitely try to get that additional speedup when
you can.  It doesn't seem like a lot when you think of how much is
already being saved, but a healthy number of users are going to
compare it to the performance on an unpartitioned table rather than to
our historical performance.   127/98.1 = 1.29, but 113.9/98.1 = 1.16
-- and obviously a 16% overhead from partitioning is way better than a
29% overhead, even if the old overhead was a million percent.
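The overhead arithmetic above checks out; for completeness:

```python
# Verifying the overhead figures quoted in the paragraph above.
base = 98.1              # plain (unpartitioned) table, seconds
with_mapping = 127.0     # 10 partitions, with tuple mapping
without_mapping = 113.9  # 10 partitions, tuple mapping commented out

assert round(with_mapping / base, 2) == 1.29     # ~29% overhead
assert round(without_mapping / base, 2) == 1.16  # ~16% overhead
```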

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Declarative partitioning - another take

2016-11-30 Thread Robert Haas
On Fri, Nov 25, 2016 at 5:49 AM, Amit Langote
 wrote:
> On 2016/11/25 11:44, Robert Haas wrote:
>> On Thu, Nov 24, 2016 at 6:13 AM, Amit Langote wrote:
>>> Also, it does nothing to help the undesirable situation that one can
>>> insert a row with a null partition key (expression) into any of the range
>>> partitions if targeted directly, because of how ExecQual() handles
>>> nullable constraint expressions (treats null value as satisfying the
>>> partition constraint).
>>
>> That's going to have to be fixed somehow.  How bad would it be if we
>> passed ExecQual's third argument as false for partition constraints?
>> Or else you could generate the actual constraint as expr IS NOT NULL
>> AND expr >= lb AND expr < ub.
>
> About the former, I think that might work.  If a column is NULL, it would
> be caught in ExecConstraints() even before ExecQual() is called, because
> of the NOT NULL constraint.  If an expression is NULL, or for some reason,
> the partitioning operator (=, >=, or <) returned NULL even for a non-NULL
> column or expression, then ExecQual() would fail if we passed false for
> resultForNull.  Not sure if that would be violating the SQL specification
> though.

I don't think the SQL specification can have anything to say about an
implicit constraint generated as an implementation detail of our
partitioning implementation.

> The latter would work too.  But I guess we would only emit expr IS NOT
> NULL, not column IS NOT NULL, because columns are covered by NOT NULL
> constraints.

Right.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Minor correction in alter_table.sgml

2016-11-30 Thread Amit Langote
Hi Stephen,

On Thu, Dec 1, 2016 at 12:24 AM, Stephen Frost  wrote:
> Amit,
>
> * Amit Langote (langote_amit...@lab.ntt.co.jp) wrote:
>> Perhaps, it should say something like:
>>
>> All the actions except RENAME, SET TABLESPACE (when using the ALL IN
>> TABLESPACE form) and SET SCHEMA can be combined into a list of multiple
>> alterations to apply in parallel.
>
> Seems like this would be a bit better:
>
> --
> All the actions, when acting on a single table and not using the ALL IN
> TABLESPACE form, except RENAME and SET SCHEMA, can be combined into a
> list of multiple alterations to be applied.
> --
>
> I note that we say 'in parallel', but given that we have actual parallel
> operations now, we should probably shy away from using that except in
> cases where we actually mean operations utilizing multiple parallel
> processes.
>
> Thoughts?

Sounds good to me.

Thanks,
Amit




Re: [HACKERS] Mail thread references in commits

2016-11-30 Thread Heikki Linnakangas

On 11/30/2016 04:52 PM, Peter Eisentraut wrote:
> On 11/27/16 8:37 AM, Magnus Hagander wrote:
>> Ok, we now have it. https://postgr.es/m/messageid will redirect to that
>> messageid in the main archives.
>
> I like the idea of recording the location of the discussion, but I'm not
> fond of the URL shortener.  Besides the general problem that URL
> shortening looks ugly and fake, it fragments the way we address things.
> Ideally, message IDs should tie together our main development tools:
> email, commit fest app, web site, and commits.  If we use different
> addresses in each of them, it will be confusing and harder to tie things
> together.  For example, I would ideally like to just paste the thread
> URL from the commit fest app into the commit message.  If I have to
> manually shorten the URL, then that is error prone and less interesting
> to do.  Also, the commit fest app links to a thread, not a message.
> Another concern is how search engines will process this.

Agreed. I just did my first commit with the shortened URL, and I didn't
like it. If we want to use URLs, let's use the canonical
https://www.postgresql.org/message-id/ format.

> Do we know of an actual length limit that is useful to aim for?  If we
> just make it just a bit shorter but it's still too long for a large
> class of email readers, depending on the message ID format, then it's
> not that useful.

Right, we can't control the length of the URL, because it contains the
message-id, which can be arbitrarily long.

- Heikki





Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-30 Thread Peter Eisentraut
On 11/16/16 3:59 PM, Peter Eisentraut wrote:
>>> Build HTML documentation using XSLT stylesheets by default
>>>
>>> The old DSSSL build is still available for a while using the make 
>>> target
>>> "oldhtml".
>>
>> This xslt build takes 8+ minutes, compared to barely 1 minute for
>> 'oldhtml'.
> 
> I have committed another patch to improve the build performance a bit.
> Could you check again?
> 
> On my machine and on the build farm, the performance now almost matches
> the DSSSL build.

Anyone who is still getting terrible performance (>2x slower) from the
html build, please send me the output of

xsltproc --profile --timing --stringparam pg.version '10devel'
stylesheet.xsl postgres.xml 2>profile.txt

so I can look into it.

(It will be big, so feel free to paste it somewhere or send privately.)

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Minor correction in alter_table.sgml

2016-11-30 Thread Stephen Frost
Amit,

* Amit Langote (langote_amit...@lab.ntt.co.jp) wrote:
> Perhaps, it should say something like:
> 
> All the actions except RENAME, SET TABLESPACE (when using the ALL IN
> TABLESPACE form) and SET SCHEMA can be combined into a list of multiple
> alterations to apply in parallel.

Seems like this would be a bit better:

--
All the actions, when acting on a single table and not using the ALL IN
TABLESPACE form, except RENAME and SET SCHEMA, can be combined into a
list of multiple alterations to be applied.
--

I note that we say 'in parallel', but given that we have actual parallel
operations now, we should probably shy away from using that except in
cases where we actually mean operations utilizing multiple parallel
processes.

Thoughts?

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] patch: function xmltable

2016-11-30 Thread Pavel Stehule
On 30. 11. 2016 at 14:53, "Pavel Stehule" <
pavel.steh...@gmail.com> wrote:
>
>
>
> 2016-11-30 13:38 GMT+01:00 Alvaro Herrera :
>>
>> Pavel Stehule wrote:
>> > 2016-11-30 2:40 GMT+01:00 Craig Ringer :
>> >
>> > > On 30 November 2016 at 05:36, Pavel Stehule 
>> > > wrote:
>> > >
>> > > > The problem is in unreserved keyword "PASSING" probably.
>> > >
>> > > Yeah, I think that's what I hit when trying to change it.
>> > >
>> > > Can't you just parenthesize the expression to use operators like ||
>> > > etc? If so, not a big deal.
>> > >
>> > ???
>>
>> "'(' a_expr ')'" is a c_expr; Craig suggests that we can just tell users
>> to manually add parens around any expressions that they want to use.
>> That's not necessary most of the time since we've been able to use
>> b_expr in most places.
>

There is still one c_expr, but without a new reserved word there is no
way to reduce it.

>
> Now I understand
>
> Regards
>
> Pavel
>
>>
>>
>> --
>> Álvaro Herrera    https://www.2ndQuadrant.com/
>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>


[HACKERS] Minor correction in alter_table.sgml

2016-11-30 Thread Amit Langote
The following sentence in the ALTER TABLE documentation is not entirely
accurate:

"All the actions except RENAME, SET TABLESPACE and SET SCHEMA can be
combined into a list of multiple alterations to apply in parallel."

SET TABLESPACE (in the ALTER TABLE form) can be combined with other
sub-commands; following works:

alter table foo set tablespace mytbls, add b int;

Perhaps, it should say something like:

All the actions except RENAME, SET TABLESPACE (when using the ALL IN
TABLESPACE form) and SET SCHEMA can be combined into a list of multiple
alterations to apply in parallel.

Attached is a patch.

Thanks,
Amit
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index e48ccf2..150a3b8 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -709,8 +709,8 @@ ALTER TABLE ALL IN TABLESPACE name
 
   
All the actions except RENAME,
-   SET TABLESPACE and SET SCHEMA
-   can be combined into
+   SET TABLESPACE (when using the ALL IN TABLESPACE
+   form) and SET SCHEMA can be combined into
a list of multiple alterations to apply in parallel.  For example, it
is possible to add several columns and/or alter the type of several
columns in a single command.  This is particularly useful with large



Re: [HACKERS] Mail thread references in commits

2016-11-30 Thread Peter Eisentraut
On 11/27/16 8:37 AM, Magnus Hagander wrote:
> Ok, we now have it. https://postgr.es/m/messageid will redirect to that
> messageid in the main archives.

I like the idea of recording the location of the discussion, but I'm not
fond of the URL shortener.  Besides the general problem that URL
shortening looks ugly and fake, it fragments the way we address things.
Ideally, message IDs should tie together our main development tools:
email, commit fest app, web site, and commits.  If we use different
addresses in each of them, it will be confusing and harder to tie things
together.  For example, I would ideally like to just paste the thread
URL from the commit fest app into the commit message.  If I have to
manually shorten the URL, then that is error prone and less interesting
to do.  Also, the commit fest app links to a thread, not a message.
Another concern is how search engines will process this.

Do we know of an actual length limit that is useful to aim for?  If we
just make it just a bit shorter but it's still too long for a large
class of email readers, depending on the message ID format, then it's
not that useful.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Remove the comment on the countereffectiveness of large shared_buffers on Windows

2016-11-30 Thread Peter Eisentraut
On 11/7/16 12:43 AM, amul sul wrote:
> On Mon, Nov 7, 2016 at 10:46 AM, Tsunakawa, Takayuki
>  wrote:
>>
>> The third paragraph may be redundant, I'm a bit inclined to leave it for 
>> kindness and completeness.  The attached revised patch just corrects the
>> existing typo (large -> larger).
>>
> 
> I don't agree with having this paragraph either; I'll leave the
> decision to the committer.
> 
>> I'll change the status to needs review.
> 
> The new status of this patch is: Ready for Committer

committed

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] patch: function xmltable

2016-11-30 Thread Pavel Stehule
2016-11-30 13:38 GMT+01:00 Alvaro Herrera :

> Pavel Stehule wrote:
> > 2016-11-30 2:40 GMT+01:00 Craig Ringer :
> >
> > > On 30 November 2016 at 05:36, Pavel Stehule 
> > > wrote:
> > >
> > > > The problem is in unreserved keyword "PASSING" probably.
> > >
> > > Yeah, I think that's what I hit when trying to change it.
> > >
> > > Can't you just parenthesize the expression to use operators like ||
> > > etc? If so, not a big deal.
> > >
> > ???
>
> "'(' a_expr ')'" is a c_expr; Craig suggests that we can just tell users
> to manually add parens around any expressions that they want to use.
> That's not necessary most of the time since we've been able to use
> b_expr in most places.
>

Now I understand

Regards

Pavel


>
> --
> Álvaro Herrera    https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [HACKERS] Random number generation, take two

2016-11-30 Thread Michael Paquier
On Wed, Nov 30, 2016 at 8:51 PM, Heikki Linnakangas  wrote:
> On 11/30/2016 09:01 AM, Michael Paquier wrote:
> I was thinking that with --disable-strong-random, we'd use plain random() in
> libpq as well. I believe SCRAM is analogous to the MD5 salt generation in
> the backend, in its requirement for randomness.

OK. That's fine by me to do so.

>>> As this patch stands, it removes Fortuna, and returns known-weak keys,
>>> but
>>> there's a good argument to be made for throwing an error instead.
>>
>>
>> IMO, leading to an error would make the users aware of them playing
>> with the fire... Now pademelon's owner may likely have a different
>> opinion on the matter :p
>
> Ok, I bit the bullet and modified those pgcrypto functions that truly need
> cryptographically strong random numbers to throw an error with
> --disable-strong-random. Notably, the pgp encrypt functions mostly fail now.

The alternate output looks good to me.

>> +bool
>> +pg_backend_random(char *dst, int len)
>> +{
>> +   int i;
>> +   char   *end = dst + len;
>> +
>> +   /* should not be called in postmaster */
>> +   Assert (IsUnderPostmaster || !IsPostmasterEnvironment);
>> +
>> +   LWLockAcquire(BackendRandomLock, LW_EXCLUSIVE);
>> Shouldn't an exclusive lock be taken only when the initialization
>> phase is called? When reading the value a shared lock would be fine.
>
>
> No, it needs to be exclusive. Each pg_jrand48() call updates the state, aka
> seed.

Do we need to worry about performance in the case of applications doing
small transactions and creating a new connection for each transaction?
This would become a contention point when calculating cancel keys for
newly-forked backends. It could be rather easy to measure a
concurrency impact with for example pgbench -C with many concurrent
transactions running something as light as SELECT 1.

>> Attached is a patch for MSVC to apply on top of yours to enable the
>> build for strong and weak random functions. Feel free to hack it as
>> needs be, this base implementation works for the current
>> implementation.
>
> Great, thanks! I wonder if this is overly complicated, though. For
> comparison, we haven't bothered to expose --disable-spinlocks in
> config_default.pl either. Perhaps we should just always use the Windows
> native function on MSVC, whether or not configured with OpenSSL, and just
> put USE_WIN32_RANDOM in pg_config.h.win32? See 2nd attached patch
> (untested).

I could live with that. Your patch is not complete though, you need to
add pg_strong_random.c into the array @pgportfiles in Mkvcbuild.pm.
You also need to remove fortuna.c and random.c from the list of files
in $pgcrypto->AddFiles(). After doing so the code is able to compile
properly.

+   (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+errmsg("pg_random_bytes() is not supported by this build"),
+errdetail("This functionality requires a source of
strong random numbers"),
+errhint("You need to rebuild PostgreSQL using
--enable-strong-random")));
Perhaps this should say "You need to rebuild PostgreSQL without
--disable-strong-random", the docs do not mention
--enable-strong-random nor does ./configure --help.

+/* port/pg_strong_random.c */
+#ifndef USE_WEAK_RANDOM
+extern bool pg_strong_random(void *buf, size_t len);
+#endif
This should be HAVE_STRONG_RANDOM.
-- 
Michael




Re: [HACKERS] move collation import to backend

2016-11-30 Thread Peter Eisentraut
On 11/29/16 2:53 PM, Andres Freund wrote:
> On 2016-11-29 12:16:37 -0500, Peter Eisentraut wrote:
>> On 11/12/16 10:38 AM, Andres Freund wrote:
>>>> /*
>>>>  * Also forbid matching an any-encoding entry.  This test of course is not
>>>>  * backed up by the unique index, but it's not a problem since we don't
>>>>  * support adding any-encoding entries after initdb.
>>>>  */
>>>
>>> Note that this isn't true anymore...
>>
>> I think this is still correct, because the collation import does not
>> produce any any-encoding entries (collencoding = -1).
> 
> Well, the comment "don't support adding any-encoding entries after
> initdb." is now wrong.

I think there is a misunderstanding.  The comment says that we don't
support adding collation entries that have collencoding = -1 after
initdb.  That is still true.  Note that the original comment has two
"any"'s.  With this patch, we would now support adding collations with
collencoding <> -1 after initdb.

> 
>>>> +
>>>> +Datum pg_import_system_collations(PG_FUNCTION_ARGS);
>>>> +
>>>> +Datum
>>>> +pg_import_system_collations(PG_FUNCTION_ARGS)
>>>> +{
>>>
>>> Uh?
>>
>> Required to avoid compiler warning about missing prototype.
> 
> It seems not to be project style to have prototypes in the middle of the
> file...

OK, will fix.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Radix tree for character conversion

2016-11-30 Thread Heikki Linnakangas

On 10/31/2016 06:11 PM, Daniel Gustafsson wrote:
> On 27 Oct 2016, at 09:23, Kyotaro HORIGUCHI  wrote:
>> At Tue, 25 Oct 2016 12:23:48 +0300, Heikki Linnakangas  wrote in
>> <08e7892a-d55c-eefe-76e6-7910bc8dd...@iki.fi>
>>>
>>> [..]
>>> The perl scripts are still quite messy. For example, I lost the checks
>>> for duplicate mappings somewhere along the way - that ought to be put
>>> back. My Perl skills are limited.
>>
>> Perl scripts are to be messy, I believe. Anyway the duplicate
>> check has been built into the sub print_radix_trees. Maybe the
>> same check is needed by some plain map files but it would be just
>> duplication for the maps having radix tree.
>
> I took a small stab at doing some cleaning of the Perl scripts, mainly around
> using the more modern (well, modern as in +15 years old) form for open(..),
> avoiding global filehandles for passing scalar references and enforcing use
> strict.  Some smaller typos and fixes were also included.  It seems my Perl has
> become a bit rusty so I hope the changes make sense.  The produced files are
> identical with these patches applied, they are merely doing cleaning as opposed
> to bugfixing.
>
> The attached patches are against the 0001-0006 patches from Heikki and you in
> this series of emails, the separation is intended to make them easier to read.

Thanks! Patches 0001-0003 seem to have been mostly unchanged for the
later discussion and everyone seems to be happy with those patches, so I
picked the parts of these cleanups of yours that applied to my patches
0001-0003, and pushed those. I'll continue reviewing the rest..

- Heikki





Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-30 Thread Andrew Dunstan



On 11/30/2016 03:47 AM, Fabien COELHO wrote:
>
> Hello Andrew,
>
>>> I cannot remember a language with elseif* variants, and I find them
>>> quite ugly, so from an aesthetic point of view I would prefer to
>>> avoid that... On the other hand having an "else if" capability makes
>>> sense (eg do something slightly different for various versions of
>>> pg), so that would suggest to stick to a simpler "if" without
>>> variants, if possible.
>>
>> FTR I *strongly* disagree with this. (And if you can't remember a
>> language that comes with them then you need to get out more. The
>> Bourne shell, where it's spelled "elif", and Ada are two obvious
>> examples.)
>
> There may be a misunderstanding somewhere.
>
> I'm rather in favor of having "elif/elsif/elseif/..." constructs,
> especially if they can be useful in realistic examples, which is not
> clear yet for psql scripts.
>
> I'm arguing against "if/elif" *variants* in the sense of various
> conditional semantics: e.g. in cpp you have several "if"s (ifdef
> ifndef if), but you do not have all the corresponding "elif"s (elifdef
> elifndef...), there is only one "elif". In cpp "ifdef"/"ifndef" were
> obsoleted by "if" with minimal expression support (#if
> !defined(HAS_SOMETHING) ...) and only this "if" has its "elif".

Oh, I see. Sorry for the misunderstanding.

I agree about generally sticking with one pattern, but I wouldn't want
to exclude shorthand pieces like

\quit_if cond

which could be more convenient than

\if cond
\quit
\endif

c.f. perl's "foo if bar;" as shorthand for "if (bar) { foo; }"

Still, that might be a refinement to add later on.

cheers

andrew






Re: [HACKERS] patch: function xmltable

2016-11-30 Thread Alvaro Herrera
Pavel Stehule wrote:
> 2016-11-30 2:40 GMT+01:00 Craig Ringer :
> 
> > On 30 November 2016 at 05:36, Pavel Stehule 
> > wrote:
> >
> > > The problem is in unreserved keyword "PASSING" probably.
> >
> > Yeah, I think that's what I hit when trying to change it.
> >
> > Can't you just parenthesize the expression to use operators like ||
> > etc? If so, not a big deal.
> >
> ???

"'(' a_expr ')'" is a c_expr; Craig suggests that we can just tell users
to manually add parens around any expressions that they want to use.
That's not necessary most of the time since we've been able to use
b_expr in most places.

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Random number generation, take two

2016-11-30 Thread Heikki Linnakangas

On 11/30/2016 09:01 AM, Michael Paquier wrote:
> On Tue, Nov 29, 2016 at 10:02 PM, Heikki Linnakangas  wrote:
>> Phew, this has been way more complicated than it seemed at first. Thoughts?
>
> One of the goals of this patch is to be able to have a strong random
> function as well for the frontend, which is fine. But any build where
> --disable-strong-random is used is not going to have a random function
> to rely on. Disabling SCRAM for such builds is a possibility, because
> we assume that any build using --disable-strong-random is aware of
> security risks, still that's not really appealing in terms of
> portability. Another possibility would be to have an extra routine
> like pg_frontend_random(), wrapping pg_strong_backend() and using a
> combination of getpid + gettimeofday to generate a seed with just a
> random() call? That's what we were fighting against previously, so my
> mind is telling me that just returning an error when the code paths of
> the SCRAM client code is used when built with --disable-strong-random
> is the way to go, because we want SCRAM to be fundamentally safe to
> use. What do you think?

I was thinking that with --disable-strong-random, we'd use plain
random() in libpq as well. I believe SCRAM is analogous to the MD5 salt
generation in the backend, in its requirement for randomness. The SCRAM
spec (RFC5802) says:

> It is important that this value [nonce] be different for each
> authentication (see [RFC4086] for more details on how to achieve
> this)

So the nonces need to be different for each session, to avoid replay
attacks. But they don't necessarily need to be unpredictable, they are
transmitted in plaintext during the authentication, anyway. If an
attacker can calculate them in advance, it only buys him more time, but
doesn't give any new information.

If we were 100% confident on that point, we could just always use
current timestamp and a counter for the nonces. But I'm not that
confident, certainly feels better to use a stronger random number when
available. The quote above points at RFC4086, which actually talks about
cryptographically strong random numbers, rather than just generating a
unique nonce. So I'm not sure if the authors of SCRAM considered that
point in any detail, seems like they just assumed that you might as well
use a strong random source because everyone's got one.
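To make the trade-off concrete, here is a small illustrative sketch (plain
Python, not libpq code) of the two nonce-generation approaches being weighed —
unique-but-predictable (timestamp plus counter) versus cryptographically
strong (RFC 4086 style):

```python
# Sketch: two ways to build a per-session nonce, per the trade-off discussed.
import itertools
import secrets
import time

_counter = itertools.count()

def nonce_weak():
    # Unique but predictable: wall-clock timestamp plus a process-local counter.
    # Sufficient to prevent replay, but an attacker could compute it in advance.
    return f"{time.time_ns()}-{next(_counter)}"

def nonce_strong():
    # Unpredictable: cryptographically strong random bytes (32 hex characters).
    return secrets.token_hex(16)

# Both approaches yield distinct nonces across calls.
assert nonce_weak() != nonce_weak()
assert nonce_strong() != nonce_strong()
```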



>> pgcrypto
>>
>> pgcrypto doesn't have the same requirements for "strongness" as cancel keys
>> and MD5 salts have. pgcrypto uses random numbers for generating salts, too,
>> which I think has similar requirements. But it also uses random numbers for
>> generating encryption keys, which I believe ought to be harder to predict.
>> If you compile with --disable-strong-random, do we want the encryption key
>> generation routines to fail, or to return known-weak keys?
>>
>> This patch removes the Fortuna algorithm, that was used to generate fairly
>> strong random numbers, if OpenSSL was not present. One option would be to
>> keep that code as a fallback. I wanted to get rid of that, since it's only
>> used on a few old platforms, but OTOH it's been around for a long time with
>> little issues.
>>
>> As this patch stands, it removes Fortuna, and returns known-weak keys, but
>> there's a good argument to be made for throwing an error instead.
>
> IMO, leading to an error would make the users aware of them playing
> with the fire... Now pademelon's owner may likely have a different
> opinion on the matter :p

Ok, I bit the bullet and modified those pgcrypto functions that truly
need cryptographically strong random numbers to throw an error with
--disable-strong-random. Notably, the pgp encrypt functions mostly fail now.

> Documentation for --disable-strong-random needs to be added.

Ah, I didn't remember we have a section in the user manual for these. Added.

> +   int32   MyCancelKey;
> Those would be better as unsigned?

Perhaps, but it's historically been signed, I'm afraid of changing it..

> +bool
> +pg_backend_random(char *dst, int len)
> +{
> +   int i;
> +   char   *end = dst + len;
> +
> +   /* should not be called in postmaster */
> +   Assert (IsUnderPostmaster || !IsPostmasterEnvironment);
> +
> +   LWLockAcquire(BackendRandomLock, LW_EXCLUSIVE);
> Shouldn't an exclusive lock be taken only when the initialization
> phase is called? When reading the value a shared lock would be fine.

No, it needs to be exclusive. Each pg_jrand48() call updates the state,
aka seed.

> Attached is a patch for MSVC to apply on top of yours to enable the
> build for strong and weak random functions. Feel free to hack it as
> needs be, this base implementation works for the current
> implementation.

Great, thanks! I wonder if this is overly complicated, though. For
comparison, we haven't bothered to expose --disable-spinlocks in
config_default.pl either. Perhaps we should just always use the Windows
native function on MSVC, whether or not configured with OpenSSL, and
just put USE_WIN32_RANDOM in pg_config.h.win32? See 2nd attached patch
(untested).
Re: [HACKERS] XactLockTableWait doesn't set wait_event correctly

2016-11-30 Thread Simon Riggs
On 29 November 2016 at 19:03, Amit Kapila  wrote:

> how will we distinguish it when some
> process is actually waiting on tuple lock?

The point is that both those actions are waiting for a tuple lock.

Obtaining a tuple lock requires two separate actions: First we do
LockTuple() and then we do XactLockTableWait().

So log_lock_waits output has two separate log entries for the same
thing, which is inconsistent (one mentions the relation name, the
other the relation id).

(Note that I'm not saying that all callers of XactLockTableWait are
tuple lockers; if they were it would be less confusing).

But at least that info is visible. log_lock_waits output allows you to
see that a XactLockTableWait is actually held for a tuple. There is no
way to do that for pg_stat_activity or pg_locks output, which is
inconsistent.

I'm not worried about abstraction layers, I'd just like to get useful
information out of the server to diagnose locking issues. Right now,
nobody can do that.

My proposal to resolve this is...

1. Make log_lock_waits output the same for both cases... following this format
LOG:  process 648 still waiting for ExclusiveLock on tuple (0,1) of
relation 137344 of database 12245 after 1045.220 ms
DETAIL:  Process holding the lock: 6460. Wait queue: 648.
STATEMENT:  update t1 set c1=4 where c1=1;
Nobody will miss the other format, since the above format has all the
same information.

2. Set wait_event_type = tuple when we wait during XactLockTableWait.
We need the reason info, not the actual wait info, since this is for
users not for our own diagnostics. This isn't very important, since
wait_event_type doesn't include details like which tuple or relation
caused the wait.

3. pg_locks output can't fit both locktag and reason info inside the
LOCKTAG struct, so we'd need to do something like store the reason
info in a separate hash table, so it can be used to explain a
transaction lock entry. I'm sure that will raise an objection, so
we'll need something like a view called pg_lock_wait_reason. Better
suggestions welcome.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Fix checkpoint skip logic on idle systems by tracking LSN progress

2016-11-30 Thread Amit Kapila
On Mon, Nov 21, 2016 at 11:08 AM, Michael Paquier
 wrote:
> On Tue, Nov 15, 2016 at 9:59 PM, Amit Kapila  wrote:
>> On Tue, Nov 15, 2016 at 9:27 AM, Kyotaro HORIGUCHI
>>  wrote:
>>> The term "WAL activity' is used in the comment for
>>> GetProgressRecPtr. Its meaning is not clear but not well
>>> defined. Might need a bit detailed explanation about that or "WAL
>>> activity tracking". What do you think about this?
>>>
>>
>> I would have written it as below:
>>
>> GetProgressRecPtr -- Returns the WAL progress.  WAL progress is
>> determined by scanning each WAL insertion lock by taking directly the
>> light-weight lock associated to it.
>
> Not sure if that's better.. What about something as fancy as that?
>  /*
> - * Get the time of the last xlog segment switch
> + * GetProgressRecPtr -- Returns the newest WAL progress position.  WAL
> + * progress is determined by scanning each WAL insertion lock by taking
> + * directly the light-weight lock associated to it.  The result of this
> + * routine can be compared with the last checkpoint LSN to check if
> + * a checkpoint can be skipped or not.
> + *
> It may be worth mentioning that the result of this routine is
> basically used for checkpoint skip logic.
>

That's okay, but I think you are using it to skip the segment-switch stuff
as well.  Today, while again going through the patch, I noticed a small anomaly:

> + * Switch segment only when WAL has done some progress since the
> + * last time a segment has switched because of a timeout.

> + if (GetProgressRecPtr() > last_switch_lsn)

Either the above comment is wrong or the code after it has a problem.
last_switch_lsn aka XLogCtl->lastSegSwitchLSN is updated not only for
a timeout but also when there is a lot of WAL activity which makes WAL
Write to cross a segment boundary.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Proposal: scan key push down to heap [WIP]

2016-11-30 Thread Dilip Kumar
On Tue, Nov 29, 2016 at 11:21 PM, Robert Haas  wrote:
> On Mon, Nov 28, 2016 at 11:17 PM, Dilip Kumar  wrote:
>> Actually we want to call slot_getattr instead of heap_getattr, because of
>> the problem mentioned by Andres upthread, which we also saw in test results.
>
> Ah, right.
>
>> Should we make a copy of HeapKeyTest, let's say ExecKeyTest, and keep it
>> under executor ?
>
> Sure.

I have worked on the idea you suggested upthread. POC patch is attached.

Todo:
1. Comments.
2. Test.
3. Some regress output will change as we are adding some extra
information to analyze output.

I need some suggestions...

1. As we decided to separate the scan key and qual at planning time, I
am doing it in create_seqscan_path. My question is: currently we don't
have a path node for seqscan, so where should we store the scan key?
In the Path node, or should we create a new SeqScanPath node? In the
attached patch I have stored it in the Path node.

2. This is regarding instrumentation information for the scan key. After
my changes, the EXPLAIN ANALYZE result currently looks like this:

postgres=# explain (analyze, costs off) select * from lineitem
where l_shipmode in ('MAIL', 'AIR')
and l_receiptdate >= date '1994-01-01';
QUERY PLAN
--
 Seq Scan on lineitem (actual time=0.022..12179.946 rows=6238212 loops=1)
   Scan Key: (l_receiptdate >= '1994-01-01'::date)
   Filter: (l_shipmode = ANY ('{MAIL,AIR}'::bpchar[]))
   Rows Removed by Scan Key: 8162088
   Rows Removed by Filter: 15599495
 Planning time: 0.182 ms
 Execution time: 12410.529 ms

My question is: how should we show pushed-down filters?
In the above plan I am showing them as "Scan Key:"; does this look fine,
or should we name it something else?

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


heap_scankey_pushdown_POC_v4.patch
Description: Binary data



Re: [HACKERS] postgres_fdw : altering foreign table not invalidating prepare statement execution plan.

2016-11-30 Thread Etsuro Fujita

On 2016/11/30 17:53, Amit Langote wrote:

On 2016/11/30 17:25, Etsuro Fujita wrote:

Done.  I modified the patch so that any inval in pg_foreign_server also
blows the whole plan cache.



I noticed the following addition:

+   CacheRegisterSyscacheCallback(FOREIGNDATAWRAPPEROID,
PlanCacheSysCallback, (Datum) 0);

Is that intentional?  I thought you meant only to add for pg_foreign_server.


Yes, that's intentional; we would need that as well, because cached 
plans might reference FDW-level options, not only server/table-level 
options.  I couldn't come up with regression tests for FDW-level options 
in postgres_fdw, though.


Best regards,
Etsuro Fujita






Re: [HACKERS] postgres_fdw : altering foreign table not invalidating prepare statement execution plan.

2016-11-30 Thread Amit Langote

Fujita-san,

On 2016/11/30 17:25, Etsuro Fujita wrote:
> On 2016/11/22 15:24, Etsuro Fujita wrote:
>> On 2016/11/22 4:49, Tom Lane wrote:
> 
>>> OK, please update the patch to handle those catalogs that way.
> 
>> Will do.
> 
> Done.  I modified the patch so that any inval in pg_foreign_server also
> blows the whole plan cache.

Thanks for updating the patch and sorry that I didn't myself.

I noticed the following addition:

+   CacheRegisterSyscacheCallback(FOREIGNDATAWRAPPEROID,
PlanCacheSysCallback, (Datum) 0);

Is that intentional?  I thought you meant only to add for pg_foreign_server.

Thanks,
Amit






Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-11-30 Thread Fabien COELHO


Hello Andrew,

I cannot remember a language with elseif* variants, and I find them quite 
ugly, so from an aesthetic point of view I would prefer to avoid that... 
On the other hand having an "else if" capability makes sense (eg do 
something slightly different for various versions of pg), so that would 
suggest to stick to a simpler "if" without variants, if possible.


FTR I *strongly* disagree with this. (And if you can't remember a language 
that comes with them then you need to get out more. The Bourne shell, where 
it's spelled "elif", and Ada are two obvious examples.)


There may be a misunderstanding somewhere.

I'm rather in favor of having "elif/elsif/elseif/..." constructs, 
especially if they can be useful in realistic examples, which is not clear 
yet for psql scripts.


I'm arguing against "if/elif" *variants* in the sense of various 
conditional semantics: e.g. in cpp you have several "if"s (ifdef ifndef 
if), but you do not have all the corresponding "elif"s (elifdef 
elifndef...), there is only one "elif". In cpp "ifdef"/"ifndef" were 
obsoleted by "if" with minimal expression support (#if 
!defined(HAS_SOMETHING) ...) and only this "if" has its "elif".


--
Fabien.




Re: [HACKERS] Push down more UPDATEs/DELETEs in postgres_fdw

2016-11-30 Thread Etsuro Fujita

On 2016/11/23 20:28, Rushabh Lathia wrote:

On Tue, Nov 22, 2016 at 6:24 PM, Etsuro Fujita
> wrote:



1)
-static void deparseExplicitTargetList(List *tlist, List
**retrieved_attrs,
+static void deparseExplicitTargetList(bool is_returning,
+  List *tlist,
+  List **retrieved_attrs,
   deparse_expr_cxt *context);

Any particular reason for inserting the new argument as the 1st
argument? In general we add new flags at the end, or before the
out param of an existing function.



No, I don't.  I think the *context should be the last argument as in
other functions in deparse.c.  How about inserting that before the
out param **retrieved_attrs, like this?

static void
deparseExplicitTargetList(List *tlist,
  bool is_returning,
  List **retrieved_attrs,
  deparse_expr_cxt *context);



Yes, adding it before retrieved_attrs would be good.


OK, will do.


5) make_explicit_returning_list() pulls the var list from the
returningList, builds the target entries for the returning list,
and at the end rewrites the fdw_scan_tlist.

AFAIK, in the case of DML that is going to be pushed down to the
remote server, ideally fdw_scan_tlist should be the same as the
returning list, as the final output for the query will be the
RETURNING list only. Isn't that true?



That would be true.  But the fdw_scan_tlist passed from the core
would contain junk columns inserted by the rewriter and planner
work, such as CTID for the target table and whole-row Vars for other
tables specified in the FROM clause of an UPDATE or the USING clause
of a DELETE.  So, I created the patch so that the pushed-down
UPDATE/DELETE retrieves only the data needed for the RETURNING
calculation from the remote server, not the whole fdw_scan_tlist data.



Yes, I noticed that fdw_scan_tlist has CTID for the target table and
whole-row Vars for other tables specified in the FROM clause of the
DML. But I was wondering: isn't it possible to create a new
fdw_scan_tlist once we find that the DML is directly pushable to the
foreign server? I tried quickly doing that, but it was throwing
"variable not found in subplan target list" from
set_foreignscan_references().



If yes, then why can't we directly replace the fdw_scan_tlist
with the returning list, rather than calling
make_explicit_returning_list()?



I think we could do that if we modified the core (maybe the executor
part).  I'm not sure that's a good idea, though.



Yes, modifying core is not a good idea. But I just want to understand
why you think this would need to modify core?


Sorry, I don't remember that.  Will check.

I'd like to move this to the next CF.

Thank you for the comments!

Best regards,
Etsuro Fujita






Re: [HACKERS] postgres_fdw : altering foreign table not invalidating prepare statement execution plan.

2016-11-30 Thread Etsuro Fujita

On 2016/11/22 15:24, Etsuro Fujita wrote:

On 2016/11/22 4:49, Tom Lane wrote:

Etsuro Fujita  writes:

On 2016/11/10 5:17, Tom Lane wrote:

I think there's a very good argument that we should just treat any
inval
in pg_foreign_data_wrapper as a reason to blow away the whole plan
cache.
People aren't going to alter FDW-level options often enough to make it
worth tracking things more finely.  Personally I'd put
pg_foreign_server
changes in the same boat, though maybe those are changed slightly more
often.  If it's worth doing anything more than that, it would be to
note
which plans mention foreign tables at all, and not invalidate those
that
don't.  We could do that with, say, a hasForeignTables bool in
PlannedStmt.



I agree on that point.



OK, please update the patch to handle those catalogs that way.



Will do.


Done.  I modified the patch so that any inval in pg_foreign_server also 
blows the whole plan cache.



That leaves the question of whether it's worth detecting table-level
option changes this way, or whether we should just handle those by
forcing
a relcache inval in ATExecGenericOptions, as in Amit's original
patch in
<5702298d.4090...@lab.ntt.co.jp>.  I kind of like that approach; that
patch was short and sweet, and it put the cost on the unusual path
(ALTER
TABLE) not the common path (every time we create a query plan).



I think it'd be better to detect table-level option changes because that
could allow us to do more;



Why not?  But the bigger picture here is that relcache inval is the
established practice for forcing replanning due to table-level changes,
and I have very little sympathy for inventing a new mechanism for that
just for foreign tables.  If it were worth bypassing replanning for
changes in tables in dead subqueries, for instance, it would surely be
worth making that happen for all table types not just foreign tables.



My point here is that FDW-option changes wouldn't affect replanning by
core; even if forcing a replan, we would have the same foreign tables
excluded by constraint exclusion, for example.  So, considering updates
on pg_foreign_table to be rather frequent, it might be better to avoid
replanning for the pg_foreign_table changes to foreign tables excluded
by constraint exclusion, for example.  My concern about the
relcache-inval approach is: that approach wouldn't allow us to do that,
IIUC.


I thought updates on pg_foreign_table would be rather frequent, but we 
can't prove that they are frequent enough that more-detailed tracking is 
worth the effort.  Yes, as you mentioned, it wouldn't be a good idea to 
invent a new mechanism just for foreign tables.  So, +1 for relcache inval. 
Attached is an updated version of the patch, in which I also modified 
regression tests; re-created tests to check to see if execution as well 
as explain work properly and added some tests for altering server-level 
options.


Sorry for the delay.

Best regards,
Etsuro Fujita
*** a/contrib/postgres_fdw/expected/postgres_fdw.out
--- b/contrib/postgres_fdw/expected/postgres_fdw.out
***
*** 3575,3586  EXECUTE st5('foo', 1);
--- 3575,3754 
1 |  1 | 1 | Fri Jan 02 00:00:00 1970 PST | Fri Jan 02 00:00:00 1970 | 1  | 1  | foo
  (1 row)
  
+ -- altering FDW options requires replanning
+ PREPARE st6 AS SELECT * FROM ft1 t1 WHERE t1.c1 = t1.c2;
+ EXPLAIN (VERBOSE, COSTS OFF) EXECUTE st6;
+   QUERY PLAN  
+ --
+  Foreign Scan on public.ft1 t1
+Output: c1, c2, c3, c4, c5, c6, c7, c8
+Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" WHERE (("C 1" = c2))
+ (3 rows)
+ 
+ PREPARE st7(int,int,text) AS INSERT INTO ft1 (c1,c2,c3) VALUES ($1,$2,$3);
+ EXPLAIN (VERBOSE, COSTS OFF) EXECUTE st7(1001,101,'foo');
+QUERY PLAN
+ -
+  Insert on public.ft1
+Remote SQL: INSERT INTO "S 1"."T 1"("C 1", c2, c3, c4, c5, c6, c7, c8) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
+->  Result
+  Output: NULL::integer, 1001, 101, 'foo'::text, NULL::timestamp with time zone, NULL::timestamp without time zone, NULL::character varying, 'ft1   '::character(10), NULL::user_enum
+ (4 rows)
+ 
+ PREPARE st8(int) AS UPDATE ft1 SET c3 = 'bar' WHERE c1 = $1 RETURNING *;
+ EXPLAIN (VERBOSE, COSTS OFF) EXECUTE st8(1001);
+QUERY PLAN   
+ 

[HACKERS] Minor correction in alter_table.sgml

2016-11-30 Thread Amit Langote
The following sentence in the ALTER TABLE documentation is not entirely
accurate:

"All the actions except RENAME, SET TABLESPACE and SET SCHEMA can be
combined into a list of multiple alterations to apply in parallel."

SET TABLESPACE (in the ALTER TABLE form) can be combined with other
subcommands; for example, following works:

alter table foo set tablespace mytbls, add b int;

Perhaps, it should say something like:

All the actions except RENAME, SET TABLESPACE (when using the ALL IN
TABLESPACE form) and SET SCHEMA can be combined into a list of multiple
alterations to apply in parallel.

Attached is a patch.

Thanks,
Amit
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index e48ccf2..150a3b8 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -709,8 +709,8 @@ ALTER TABLE ALL IN TABLESPACE name
 
   
All the actions except RENAME,
-   SET TABLESPACE and SET SCHEMA
-   can be combined into
+   SET TABLESPACE (when using the ALL IN TABLESPACE
+   form) and SET SCHEMA can be combined into
a list of multiple alterations to apply in parallel.  For example, it
is possible to add several columns and/or alter the type of several
columns in a single command.  This is particularly useful with large
