Re: [HACKERS] Draft release notes up for review

2017-08-24 Thread Alvaro Herrera
Tom Lane wrote:
> Jonathan Katz  writes:
> > I see this one
> > > Fix potential data corruption when freezing a tuple whose XMAX is a 
> > multixact with exactly one still-interesting member
> > But I’m unsure how prevalent it is and if it should be highlighted.
> 
> I'm not sure about that either.  I do not think anyone did the legwork
> to determine the exact consequences of that bug, or the probability of
> someone hitting it in the field.  But I think the latter must be really
> low, because we haven't heard any field reports that seem to match up.

My assessment is that the bug is extremely hard to hit.  The main
conditions, according to FreezeMultiXactId, are that
1) the tuple must have a multixact xmax;
2) the update xid must be newer than the cutoff freeze xid; and
3) the multixact itself must be older than the cutoff freeze multi.

so the multixact counter needs to go faster than the xid counter (in
terms of who gets past the freeze age first), and a vacuum freeze must
be attempted on that tuple before the update xid becomes freezable.
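Ignoring wraparound, those trigger conditions can be sketched as a predicate.
This is a hypothetical simplification: plain integer comparisons stand in for
the wraparound-aware TransactionIdPrecedes/MultiXactIdPrecedes helpers the
real FreezeMultiXactId code uses.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sketch of when the freeze bug could fire.  Plain comparisons
 * stand in for PostgreSQL's wraparound-aware XID/multixact helpers.
 */
static bool
freeze_bug_reachable(bool xmax_is_multi,
                     uint32_t update_xid, uint32_t cutoff_xid,
                     uint32_t multi, uint32_t cutoff_multi)
{
    return xmax_is_multi            /* 1) xmax is a multixact */
        && update_xid >= cutoff_xid /* 2) update xid newer than freeze cutoff */
        && multi < cutoff_multi;    /* 3) the multi itself is past freeze age */
}
```

The multixact counter overtaking the xid counter is exactly what makes
conditions 2) and 3) hold at the same time.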

The consequence is that the infomask, instead of ending up as

 frz->t_infomask = tuple->t_infomask;
 frz->t_infomask &= ~HEAP_XMAX_BITS;
 frz->t_infomask |= HEAP_XMAX_COMMITTED;
 tuple->t_infomask = frz->t_infomask;

ends up as

 frz->t_infomask = tuple->t_infomask;
 frz->t_infomask &= ~HEAP_XMAX_BITS;
 frz->t_infomask &= HEAP_XMAX_COMMITTED;
 tuple->t_infomask = frz->t_infomask;

so any bit other than XMAX_COMMITTED is turned off -- which could be
pretty bad for HEAP_HASNULL, etc.
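The effect of that single-character mistake (AND where OR was meant) can be
demonstrated in isolation; the flag values below are illustrative stand-ins,
not the actual htup_details.h definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit values standing in for heap tuple infomask flags. */
#define HEAP_HASNULL            0x0001
#define HEAP_XMAX_COMMITTED     0x0400
#define HEAP_XMAX_BITS          0x1FC0  /* illustrative mask, not the real one */

/* The intended transformation: clear the xmax bits, OR in one flag. */
static uint16_t
freeze_correct(uint16_t infomask)
{
    infomask &= ~HEAP_XMAX_BITS;
    infomask |= HEAP_XMAX_COMMITTED;    /* sets one flag, keeps the rest */
    return infomask;
}

/* The buggy transformation: AND against a single flag wipes everything else. */
static uint16_t
freeze_buggy(uint16_t infomask)
{
    infomask &= ~HEAP_XMAX_BITS;
    infomask &= HEAP_XMAX_COMMITTED;    /* clears every bit but one */
    return infomask;
}
```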

-- 
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: psql command \graw

2017-08-24 Thread Fabien COELHO



>> Argh, sorry, I put it in the September commitfest, and it seems that it
>> cannot be changed afterwards.
>>
>> Maybe you can close it and redeclare it in the commitfest you want?
>
> It can be moved

Indeed it can. Feel free to move it, then.

--
Fabien.




[HACKERS] Re: [COMMITTERS] pgsql: Fix bug that can cause walsender not to terminate at shutdown.

2017-08-24 Thread Alvaro Herrera
Andres Freund wrote:
> Fix bug that can cause walsender not to terminate at shutdown.
> 
> When backpatching c6c333436 I (Andres Freund) mis-resolved a conflict
> in the 9.4 branch. Unfortunately that leads to walsenders waiting
> forever when shutting down with connected standbys, unless immediate
> mode is used, or the standbys are forced to disconnect by other means.

Hmm, I think we should issue a new 9.4 release with this bug fix.

-- 
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] ECPG: WHENEVER statement with DO CONTINUE action

2017-08-24 Thread vinayak

Hi Sawada-san,

On 2017/08/25 11:07, Masahiko Sawada wrote:

On Fri, Aug 18, 2017 at 5:20 PM, vinayak
 wrote:

On 2017/06/20 17:35, vinayak wrote:

Hi Sawada-san,

On 2017/06/20 17:22, Masahiko Sawada wrote:

On Tue, Jun 20, 2017 at 1:51 PM, vinayak
 wrote:


On 2017/06/12 13:09, vinayak wrote:

Hi,

On 2017/06/10 12:23, Vinayak Pokale wrote:

Thank you for your reply

On Jun 9, 2017 5:39 PM, "Michael Meskes"  wrote:

Could you please add a "DO CONTINUE" case to one of the test cases? Or
add a new one? We would need a test case IMO.


Yes I will add test case and send updated patch.

I have added new test case for DO CONTINUE.
Please check the attached patch.

I have added this in Sept. CF
https://commitfest.postgresql.org/14/1173/


--
In the whenever_do_continue.pgc file, the following line does not seem to be
processed successfully by ecpg; should we fix that?

+
+   exec sql whenever sqlerror continue;
+

Also, you wrote the test case using the "WHENEVER sqlerror DO CONTINUE"
action, but that does not seem to emit sqlerror, so "DO CONTINUE" is not
executed. I think the test case for DO CONTINUE should be C code that
actually executes the "continue" clause.
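For illustration, the control flow such a test needs (an error raised
mid-loop, answered by a continue) looks roughly like the C the preprocessor
would emit. The sqlcode variable and fetch_row below are hypothetical
stand-ins for ecpg's sqlca machinery, not actual ecpg output:

```c
#include <assert.h>
#include <stdio.h>

/* Stand-in for the sqlca error state ecpg checks after each statement. */
static int sqlcode;

/* Hypothetical "statement" that fails for one particular row. */
static int
fetch_row(int i)
{
    sqlcode = (i == 2) ? -1 : 0;    /* row 2 raises an error */
    return i * 10;
}

/* Process rows, skipping any row whose fetch failed -- the control flow
 * WHENEVER SQLERROR DO CONTINUE is meant to generate. */
static int
sum_good_rows(int nrows)
{
    int total = 0;

    for (int i = 0; i < nrows; i++)
    {
        int val = fetch_row(i);

        if (sqlcode < 0)
            continue;               /* what DO CONTINUE emits */
        total += val;
    }
    return total;
}
```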

Thank you for testing the patch.
I agreed with your comments. I will update the patch.

Please check the attached updated patch.


Thank you for updating.

The regression test failed after applying the latest patch with git am.

*** /tmp/pg/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c
2017-08-24 20:01:10.023201132 -0700
--- /tmp/pg/src/interfaces/ecpg/test/results/preproc-whenever_do_continue.c
 2017-08-24 20:22:54.308200853 -0700
***
*** 140,147 
 printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm);
 }

!   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
!proceed if any further errors do occur. */
 /* exec sql whenever sqlerror  continue ; */
   #line 53 "whenever_do_continue.pgc"

--- 140,147 
 printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm);
 }

!   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
!   proceed if any further errors do occur. */
 /* exec sql whenever sqlerror  continue ; */
   #line 53 "whenever_do_continue.pgc"

==

+   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
+   proceed if any further errors do occur. */

I think this comment should obey the coding style guide.

Thank you for testing.

I have updated the patch.
PFA.

Regards,
Vinayak Pokale
NTT Open Source Software Center
From cd71bf7229a8566cadfde3d0e89b1b445baf1fee Mon Sep 17 00:00:00 2001
From: Vinayak Pokale 
Date: Thu, 22 Jun 2017 11:08:38 +0900
Subject: [PATCH] WHENEVER statement DO CONTINUE support

---
 doc/src/sgml/ecpg.sgml |   12 ++
 src/interfaces/ecpg/preproc/ecpg.trailer   |6 +
 src/interfaces/ecpg/preproc/output.c   |3 +
 src/interfaces/ecpg/test/ecpg_schedule |1 +
 .../test/expected/preproc-whenever_do_continue.c   |  159 
 .../expected/preproc-whenever_do_continue.stderr   |  112 ++
 .../expected/preproc-whenever_do_continue.stdout   |2 +
 src/interfaces/ecpg/test/preproc/Makefile  |1 +
 .../ecpg/test/preproc/whenever_do_continue.pgc |   61 
 9 files changed, 357 insertions(+), 0 deletions(-)
 create mode 100644 src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c
 create mode 100644 src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stderr
 create mode 100644 src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stdout
 create mode 100644 src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc

diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml
index f13a0e9..3cb4001 100644
--- a/doc/src/sgml/ecpg.sgml
+++ b/doc/src/sgml/ecpg.sgml
@@ -4763,6 +4763,17 @@ EXEC SQL WHENEVER condition action
 
  
+  DO CONTINUE
+  
+   
+Execute the C statement continue.  This should
+only be used in loop statements.  If executed, it will cause the flow
+of control to return to the top of the loop.
+   
+  
+ 
+
+ 
   CALL name (args)
   DO name (args)
   
@@ -7799,6 +7810,7 @@ WHENEVER { NOT FOUND | SQLERROR | SQLWARNING } ac
 
 EXEC SQL WHENEVER NOT FOUND CONTINUE;
 EXEC SQL WHENEVER NOT FOUND DO BREAK;
+EXEC SQL WHENEVER NOT FOUND DO CONTINUE;
 EXEC SQL WHENEVER SQLWARNING SQLPRINT;
 EXEC SQL WHENEVER SQLWARNING DO warn();
 EXEC SQL WHENEVER SQLERROR sqlprint;
diff --git a/src/interfaces/ecpg/preproc/ecpg.trailer b/src/interfaces/ecpg/preproc/ecpg.trailer
index 1c10879..b42bca4 100644
--- a/src/interfaces/ecpg/preproc/ecpg.trailer
+++ b/src/interfaces/ecpg/preproc/ecpg.trailer
@@ -1454,6 +1454,12 @@ action : CONTINUE_P
 			$$.command = NULL;
 			$$.str = mm_strdup("b

Re: [HACKERS] Support to COMMENT ON DATABASE CURRENT_DATABASE

2017-08-24 Thread Jing Wang
Hi all,

Enclosed please find the updated patch, which now also covers security
labels on databases.

The patch covers the following commands:

1. COMMENT ON DATABASE CURRENT_DATABASE is ...
2. ALTER DATABASE CURRENT_DATABASE OWNER to ...
3. ALTER DATABASE CURRENT_DATABASE SET parameter ...
4. ALTER DATABASE CURRENT_DATABASE RESET parameter ...
5. SELECT CURRENT_DATABASE
6. SECURITY LABEL ON DATABASE CURRENT_DATABASE is ...

The patch doesn't cover the pg_dump part, which will be included in another
patch later.

Regards,
Jing Wang
Fujitsu Australia


comment_on_current_database_no_pgdump_v3.patch
Description: Binary data



Re: [HACKERS] ECPG: WHENEVER statement with DO CONTINUE action

2017-08-24 Thread Masahiko Sawada
On Fri, Aug 18, 2017 at 5:20 PM, vinayak
 wrote:
>
> On 2017/06/20 17:35, vinayak wrote:
>>
>> Hi Sawada-san,
>>
>> On 2017/06/20 17:22, Masahiko Sawada wrote:
>>>
>>> On Tue, Jun 20, 2017 at 1:51 PM, vinayak
>>>  wrote:


 On 2017/06/12 13:09, vinayak wrote:

 Hi,

 On 2017/06/10 12:23, Vinayak Pokale wrote:

 Thank you for your reply

 On Jun 9, 2017 5:39 PM, "Michael Meskes"  wrote:
>
> Could you please add a "DO CONTINUE" case to one of the test cases? Or
> add a new one? We would need a test case IMO.
>
 Yes I will add test case and send updated patch.

 I have added new test case for DO CONTINUE.
 Please check the attached patch.

 I have added this in Sept. CF
 https://commitfest.postgresql.org/14/1173/

>>> --
>>> In the whenever_do_continue.pgc file, the following line does not seem to be
>>> processed successfully by ecpg; should we fix that?
>>>
>>> +
>>> +   exec sql whenever sqlerror continue;
>>> +
>>>
>>> Also, you wrote the test case using the "WHENEVER sqlerror DO CONTINUE"
>>> action, but that does not seem to emit sqlerror, so "DO CONTINUE" is not
>>> executed. I think the test case for DO CONTINUE should be C code that
>>> actually executes the "continue" clause.
>>
>> Thank you for testing the patch.
>> I agreed with your comments. I will update the patch.
>
> Please check the attached updated patch.
>

Thank you for updating.

The regression test failed after applying the latest patch with git am.

*** /tmp/pg/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c
   2017-08-24 20:01:10.023201132 -0700
--- /tmp/pg/src/interfaces/ecpg/test/results/preproc-whenever_do_continue.c
2017-08-24 20:22:54.308200853 -0700
***
*** 140,147 
printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm);
}

!   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
!proceed if any further errors do occur. */
/* exec sql whenever sqlerror  continue ; */
  #line 53 "whenever_do_continue.pgc"

--- 140,147 
printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm);
}

!   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
!   proceed if any further errors do occur. */
/* exec sql whenever sqlerror  continue ; */
  #line 53 "whenever_do_continue.pgc"

==

+   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
+   proceed if any further errors do occur. */

I think this comment should obey the coding style guide.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-24 Thread Andres Freund
On 2017-08-21 11:02:52 +1200, Thomas Munro wrote:
> 2.  Andres didn't like what I did to DecrTupleDescRefCount, namely
> allowing it to run when there is no ResourceOwner.  I now see that this
> is probably an indication of a different problem; even if there were a
> worker ResourceOwner as he suggested (or perhaps a session-scoped one,
> which a worker would reset before being reused), it wouldn't be the
> one that was active when the TupleDesc was created.  I think I have
> failed to understand the contracts here and will think/read about it
> some more.

Maybe I'm missing something, but isn't the issue here that using
DecrTupleDescRefCount() simply is wrong, because we're not actually
necessarily tracking the TupleDesc via the resowner mechanism?

If you look at the code, in the case it's a previously unknown tupledesc
it's registered with:

entDesc = CreateTupleDescCopy(tupDesc);
...
/* mark it as a reference-counted tupdesc */
entDesc->tdrefcount = 1;
...
RecordCacheArray[newtypmod] = entDesc;
...

Note that there's no PinTupleDesc(), IncrTupleDescRefCount() or
ResourceOwnerRememberTupleDesc() managing the reference from the
array. Nor was there one before.

We have other code managing TupleDesc lifetimes similarly, and look at
how they're freeing it:
/* Delete tupdesc if we have it */
if (typentry->tupDesc != NULL)
{
/*
 * Release our refcount, and free the tupdesc if none 
remain.
 * (Can't use DecrTupleDescRefCount because this 
reference is not
 * logged in current resource owner.)
 */
Assert(typentry->tupDesc->tdrefcount > 0);
if (--typentry->tupDesc->tdrefcount == 0)
FreeTupleDesc(typentry->tupDesc);
typentry->tupDesc = NULL;
}




This also made me think about how we're managing the lookup from the
shared array:

/*
 * Our local array can now point 
directly to the TupleDesc
 * in shared memory.
 */
RecordCacheArray[typmod] = tupdesc;

Uhm. Isn't that highly highly problematic? E.g. tdrefcount manipulations
which are done by all lookups (cf. lookup_rowtype_tupdesc()) would in
that case manipulate shared memory in a concurrency unsafe manner.
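The hazard is the classic lost-update race on a non-atomic read-modify-write.
A standalone sketch, not PostgreSQL code: shared memory is modeled by plain
process globals and the backends by threads, so only the concurrency pattern
carries over.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

#define NTHREADS 4
#define NLOOPS   100000

/* Globals stand in for a refcount living in shared memory; threads stand in
 * for backends doing tupdesc lookups concurrently. */
static int plain_refcount;          /* what an unguarded tdrefcount++ does */
static atomic_int atomic_refcount;  /* a concurrency-safe counterpart */

static void *
backend(void *arg)
{
    (void) arg;
    for (int i = 0; i < NLOOPS; i++)
    {
        plain_refcount++;                      /* racy read-modify-write */
        atomic_fetch_add(&atomic_refcount, 1); /* atomic read-modify-write */
    }
    return NULL;
}

static void
run_backends(void)
{
    pthread_t th[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&th[i], NULL, backend, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(th[i], NULL);
}
```

The atomic counter always ends at exactly NTHREADS * NLOOPS; the plain one
can end lower because concurrent increments overwrite each other.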

Greetings,

Andres Freund




Re: [HACKERS] Update low-level backup documentation to match actual behavior

2017-08-24 Thread Michael Paquier
On Thu, Aug 24, 2017 at 10:49 PM, David Steele  wrote:
> Thanks for reviewing!  Sorry for the late response, those eclipses don't
> just chase themselves...

That's quite something to see.

> On 8/20/17 10:22 PM, Michael Paquier wrote:
>> On Fri, Aug 18, 2017 at 3:35 AM, David Steele  wrote:
>>
>> + Prior to PostgreSQL 9.6, this
>> Markup ?
>
> Fixed.
>
>> +  Note well that if the server crashes during the backup it may not be
>> +  possible to restart until the backup_label file has been
>> +  manually deleted from the PGDATA directory.
>> Missing a markup  here for PGDATA.
>
> Fixed.
>
>> s/Note well/Note as well/, no?
>
> This was a literal translation of nota bene but I've changed it to
> simply "Note" as that seems common in the docs.

Oh, OK.

>> Documentation does not state yet that the use of low-level APIs for
>> exclusive backups are not supported on standbys.
>
> The first paragraph of the exclusive section states, "this type of
> backup can only be taken on a primary".

Sorry, missed that.

>> Now in the docs:
>>  If the backup process monitors and ensures that all WAL segment files
>>  required for the backup are successfully archived then the second
>>  parameter (which defaults to true) can be set to false to have
>> I would recommend adding some details here and mention
>> "wait_for_archive" instead of "second parameter".
>
> Done.
>
>> I am wondering as
>> well if this paragraph should be put in red with a warning or
>> something like that. This is really, really important to ensure
>> consistent backups!
>
> Maybe, but this logic could easily apply to a lot of sections in the
> backup docs.  I'm not sure where it would end.

True as well. The patch looks good to me. If a committer does not show
up soon, it may be better to register that in the CF and wait. I am
not sure that adding an open item is suited, as docs have the same
problem on 9.6.
-- 
Michael




Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-08-24 Thread Michael Paquier
On Thu, Aug 24, 2017 at 11:28 PM, Bossart, Nathan  wrote:
> I should also note that the dedupe_relations(...) function needs another
> small fix for column lists.  Since the lack of a column list means that we
> should ANALYZE all columns, a duplicate table name with an empty column
> list should effectively null out any other specified columns.  For example,
> "ANALYZE table (a, b), table;" currently dedupes to the equivalent of
> "ANALYZE table (a, b);" when it should dedupe to "ANALYZE table;".
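The merge rule just described can be sketched with a hypothetical column-list
representation in which an empty list means "all columns"; these are toy data
structures, not the actual parser node types:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical model: ncols == 0 means "analyze all columns". */
typedef struct ColumnList
{
    int         ncols;
    const char *cols[8];
} ColumnList;

/* Merge two specifications of the same table: if either asks for all
 * columns, the result must ask for all columns; otherwise take the union. */
static ColumnList
merge_column_lists(const ColumnList *a, const ColumnList *b)
{
    ColumnList result = {0};

    if (a->ncols == 0 || b->ncols == 0)
        return result;                      /* "all columns" wins */

    for (int i = 0; i < a->ncols; i++)
        result.cols[result.ncols++] = a->cols[i];
    for (int i = 0; i < b->ncols; i++)
    {
        bool dup = false;

        for (int j = 0; j < a->ncols; j++)
            if (strcmp(b->cols[i], a->cols[j]) == 0)
                dup = true;
        if (!dup)
            result.cols[result.ncols++] = b->cols[i];
    }
    return result;
}
```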

This makes me think that it could be a good idea to revisit this bit
in a separate patch. ANALYZE also fails now with an incomprehensible
error message when the same column is specified multiple times.
-- 
Michael




Re: [HACKERS] MCXT_ALLOC_NO_OOM -> DSA_ALLOC_NO_OOM in dsa.c

2017-08-24 Thread Andres Freund
Hi,

On 2017-08-24 23:08:52 +1200, Thomas Munro wrote:
> I spotted a (harmless) thinko in dsa.c.  Please see attached.

Pushed to 10 and master. Thanks.

- Andres




Re: [HACKERS] Regression stoping PostgreSQL 9.4.13 if a walsender is running

2017-08-24 Thread Andres Freund
Hi,

On 2017-08-22 19:28:22 +0200, Marco Nenciarini wrote:
> I have noticed that after the 9.4.13 release PostgreSQL reliably fails
> to shut down with the smart and fast methods if there is a running walsender.
> 
> The postmaster continues waiting forever for the walsender termination.
> 
> It works perfectly with all the other major releases.
> 
> I bisected the issue to commit 1cdc0ab9c180222a94e1ea11402e728688ddc37d
> 
> After some investigation I discovered that the instruction that sets
> got_SIGUSR2 was lost during the backpatch in the WalSndLastCycleHandler
> function.
> 
> The trivial patch is the following:

Pushed, thanks!  And sorry again.

Greetings,

Andres Freund




Re: [HACKERS] taking stdbool.h into use

2017-08-24 Thread Tom Lane
Peter Eisentraut  writes:
> Some not so long time ago, it was discussed to look into taking
> stdbool.h into use.  The reason was that third-party libraries (perl?,
> ldap?, postgis?) are increasingly doing so, and having incompatible
> definitions of bool could/does create a mess.

> Here is a patch set that aims to accomplish that.  On the way there, it
> cleans up various loose and weird uses of bool and proposes a way to
> ensure that the system catalog structs get a 1-byte bool even if the
> "standard" bool is not.

I played around with this on my dinosaur machines.

On gaur, every source file gives me a complaint like this:

In file included from ../../src/include/postgres.h:47,
 from username.c:16:
../../src/include/c.h:207: warning: `SIZEOF__BOOL' redefined
../../src/include/pg_config.h:801: warning: this is the location of the 
previous definition

pg_config.h contains:

/* The size of `_Bool', as computed by sizeof. */
#define SIZEOF__BOOL 0

c.h then attempts to override that, which would be bad style in any case.
I think you should make configure take care of such situations and emit
the correct SIZEOF__BOOL value to begin with.  Perhaps the script could
check for a zero result and change it to 1.

Note that even though this platform has a <stdbool.h>, configure rejects
it because the name "bool" is not a macro.  Dunno if we care to change
that; I see we're using a standard autoconf macro, so messing with that
is likely more trouble than it's worth.

After temporarily commenting out the bogus SIZEOF__BOOL definition,
I got a clean compile and clean regression tests.  That's not too
surprising though because without use of <stdbool.h> it's effectively
equivalent to the old code.

BTW, I also wonder why 0008 is doing

AC_CHECK_SIZEOF(_Bool)
and then
#define SIZEOF_BOOL SIZEOF__BOOL

rather than just

AC_CHECK_SIZEOF(bool)

I should think that not touching _Bool when we don't have to is a
good thing.


On prairiedog, configure seems to detect things correctly:

$ grep BOOL src/include/pg_config.h
#define HAVE_STDBOOL_H 1
#define HAVE__BOOL 1
#define SIZEOF__BOOL 4

It builds without warnings, but the regression tests crash:

2017-08-24 16:53:42.621 EDT [24029] LOG:  server process (PID 24311) was 
terminated by signal 10: Bus error
2017-08-24 16:53:42.621 EDT [24029] DETAIL:  Failed process was running: CREATE 
INDEX textarrayidx ON array_index_op_test USING gin (t);

The core dump is a bit confused, but it seems to be trying to dereference
a null pointer in bttextcmp.  I'm pretty sure the underlying problem is
that you've not done anything with GinNullCategory:

/*
 * Category codes to distinguish placeholder nulls from ordinary NULL keys.
 * Note that the datatype size and the first two code values are chosen to be
 * compatible with the usual usage of bool isNull flags.
 *
 * GIN_CAT_EMPTY_QUERY is never stored in the index; and notice that it is
 * chosen to sort before not after regular key values.
 */
typedef signed char GinNullCategory;

Overlaying that with "bool" is just Not Gonna Work.  It also ain't gonna
work to do "typedef bool GinNullCategory", so I'm not very sure how to
resolve that.  Maybe we could hack it like

#if SIZEOF__BOOL == 1
typedef signed char GinNullCategory;
#elif SIZEOF__BOOL == 4
typedef int32 GinNullCategory;
#else
#error "unsupported sizeof(bool)"
#endif

However, the quoted comment implies that we store GinNullCategory values
on-disk, which might mean that some additional hacks are needed to
preserve index compatibility.
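One way to make the invariant explicit is a compile-time check, so that a
platform with a 4-byte bool fails the build instead of silently misreading
on-disk data. This is a hypothetical sketch, not the actual ginblock.h
contents, though the first two category values follow the convention quoted
above (compatible with bool isNull flags):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch: on-disk null-category codes whose first two values
 * line up with bool isNull flags (false = 0, true = 1). */
typedef signed char GinNullCategory;

#define GIN_CAT_NORM_KEY    0       /* ordinary key; matches isNull = false */
#define GIN_CAT_NULL_KEY    1       /* null key; matches isNull = true */

/* The overlay with bool arrays is only sound if the sizes agree; refuse to
 * build otherwise rather than corrupt index reads at runtime. */
_Static_assert(sizeof(GinNullCategory) == sizeof(bool),
               "GinNullCategory must be overlayable with bool isNull flags");
```

On a platform where sizeof(bool) is 4, the static assertion fires at compile
time, which is the failure mode you want for an on-disk format assumption.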

> I have done a fair amount of testing on this, including a hand-rigged
> setup where _Bool is not 1 byte.

I'm not very sure how you got through regression tests despite this issue.
Possibly it's got something to do with prairiedog being an alignment-picky
machine ... but the bus error is *not* at a spot where a bool or
GinNullCategory value is being accessed, so the problem seems like it
should manifest with jury-rigged _Bool on non-picky hardware as well.

regards, tom lane




Re: [HACKERS] One-shot expanded output in psql using \gx

2017-08-24 Thread Stephen Frost
* Noah Misch (n...@leadboat.com) wrote:
> On Tue, Aug 15, 2017 at 10:24:34PM +0200, Tobias Bussmann wrote:
> > I've tested the new \gx against 10beta and current git HEAD. Actually one 
> > of my favourite features of PostgreSQL 10! However in my environment it was 
> > behaving strangely. After some debugging I found that \gx does not work if 
> > you have \set FETCH_COUNT n before. Please find attached a patch that fixes 
> > this incl. new regression test.

Fixed in 0cdc3e4.

Thanks for the report!

Stephen




Re: [HACKERS] Proposal: global index

2017-08-24 Thread Andres Freund
Hi,

On 2017-08-18 12:12:58 +0300, Ildar Musin wrote:
> While we've been developing pg_pathman extension one of the most frequent
> questions we got from our users was about global index support. We cannot
> provide it within an extension. And I couldn't find any recent discussion
> about someone implementing it. So I'm thinking about giving it a shot and
> start working on a patch for postgres.

FWIW, I personally think for constraints the better approach is to make
the constraint checking code cope with having to check multiple
indexes. Initially by just checking all indexes, over the longer term
perhaps pruning the set of to-be-checked indexes based on the values in
the partition key if applicable.   The problem with creating huge global
indexes is that you give away some of the major advantages of partitioning:
- dropping partitions now is slow / leaves a lot of garbage again
- there's no way you can do this with individual partitions being remote
  or such
- there's a good chunk of locality loss in global indexes

The logic we have for exclusion constraints checking can essentially be
extended to do uniqueness checking over multiple partitions. Depending
on the desired deadlock behaviour one might end up doing speculative
insertions in addition.  The foreign key constraint checking is fairly
simple; essentially one "just" needs to remove the ONLY from the
generated check query.
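The "check every partition's index" idea can be sketched with a toy model in
which each partition keeps its own unique index and an insert probes all of
them before proceeding. These are hypothetical data structures; the real
thing would go through index AM calls and speculative insertion:

```c
#include <assert.h>
#include <stdbool.h>

#define NPARTS  3
#define PARTCAP 8

/* Toy stand-ins for per-partition unique indexes. */
static int part_keys[NPARTS][PARTCAP];
static int part_nkeys[NPARTS];

/* Cross-partition uniqueness: probe every partition's index, as the
 * proposed constraint-checking approach would before an insert. */
static bool
key_exists_anywhere(int key)
{
    for (int p = 0; p < NPARTS; p++)
        for (int i = 0; i < part_nkeys[p]; i++)
            if (part_keys[p][i] == key)
                return true;
    return false;
}

static bool
insert_unique(int part, int key)
{
    if (key_exists_anywhere(key))
        return false;           /* would violate the global constraint */
    part_keys[part][part_nkeys[part]++] = key;
    return true;
}
```

Pruning would shrink the probe to the partitions the key could be in, which
is the longer-term refinement mentioned above.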

Greetings,

Andres Freund




Re: [HACKERS] obsolete code in pg_upgrade

2017-08-24 Thread Peter Eisentraut
On 8/23/17 09:36, Robert Haas wrote:
> I think I agree.  It seems to me that the version of pg_upgrade
> shipped with release N only needs to support upgrades to release N,
> not older releases.  There's probably room for debate about whether a
> pg_upgrade needs to support direct upgrades FROM very old releases,
> but we need not decide anything about that right now.
> 
> I think you could go ahead and rip out this code, but as it seems like
> a non-critical change, -1 for back-patching it.

committed to PG10 and master

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Push limit to sort through a subquery

2017-08-24 Thread Tom Lane
Douglas Doole  writes:
>> TBH I dislike the fact that
>> you did the subquery case randomly differently from the existing cases;
>> it should just have been added as an additional recursive case.  Since
>> this is done only once at query startup, worrying about hypothetical
>> micro-performance issues seems rather misguided.

> Habit mostly - why write code with potential performance problems when the
> alternative is just as easy to read?

I disagree that having adjacent code doing the same thing in two different
ways is "easy to read"; what it is is confusing.  More globally, since
we're dealing with community code that will be read and hacked on by many
people, I think we need to prioritize simplicity, readability, and
extensibility over micro-efficiency.  I'm willing to compromise those
goals for efficiency where a clear need has been demonstrated, but no such
need has been shown here, nor do I think it's likely that one can be
shown.

I'd have been okay with changing the entire function if it could still be
doing all cases the same way.  But the exception for merge-append (and
probably soon other cases) means you still end up with a readability
problem.
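The shape being argued over can be sketched abstractly: walk down
single-child chains iteratively, and recurse only where a merge-append-like
node fans out to several children. The Node layout here is hypothetical, not
the actual planner structures:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical plan-node model: at most one passthrough child (subquery,
 * result, etc.), or an array of children for merge-append-like nodes. */
typedef struct Node
{
    struct Node  *single_child;     /* passthrough case */
    struct Node **children;         /* fan-out case */
    int           nchildren;
    long          bound;            /* the limit being pushed down */
} Node;

/* Push a bound down the tree: loop through single-child chains, recursing
 * only when a node has multiple children. */
static void
pass_down_bound(Node *node, long bound)
{
    while (node != NULL)
    {
        node->bound = bound;
        if (node->nchildren > 0)
        {
            for (int i = 0; i < node->nchildren; i++)
                pass_down_bound(node->children[i], bound);
            return;
        }
        node = node->single_child;  /* iterate, don't recurse */
    }
}
```

Treating every case uniformly would mean recursing for the single-child
chains too, which is the readability-versus-startup-cost trade-off discussed
above.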

regards, tom lane




Re: [HACKERS] SCRAM salt length

2017-08-24 Thread Peter Eisentraut
On 8/17/17 17:00, Joe Conway wrote:
>> Hence my original inquiry: "I suspect that this length was chosen based
>> on the example in RFC 5802 (SCRAM-SHA-1) section 5.  But the analogous
>> example in RFC 7677 (SCRAM-SHA-256) section 3 uses a length of 16.
>> Should we use that instead?"
> Unless there is some significant downside to using 16 byte salt, that
> would be my vote.

committed

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Proposal: global index

2017-08-24 Thread Joshua D. Drake

On 08/24/2017 10:52 AM, Adam Brusselback wrote:
> My understanding is that global indexes allow foreign keys to work
> naturally with partitioned tables, or tables in an inheritance
> hierarchy.  That is pretty big IMO, as it allows you to partition a
> table without making a trade-off in your database integrity.

It is, in fact, the reason that even with 10 we don't really have
partitioning so much as syntactical sugar for partitioning.  (Not trying
to take away from that; having the syntactical sugar is a huge first
step.)


JD


--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
* Unless otherwise stated, opinions are my own.   *




[HACKERS] More replication race conditions

2017-08-24 Thread Tom Lane
sungazer just failed with

pg_recvlogical exited with code '256', stdout '' and stderr 'pg_recvlogical: 
could not send replication command "START_REPLICATION SLOT "test_slot" LOGICAL 
0/0 ("include-xids" '0', "skip-empty-xacts" '1')": ERROR:  replication slot 
"test_slot" is active for PID 8913148
pg_recvlogical: disconnected
' at 
/home/nm/farm/gcc64/HEAD/pgsql.build/src/test/recovery/../../../src/test/perl/PostgresNode.pm
 line 1657.

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2017-08-24%2015%3A16%3A10

Looks like we're still not there on preventing replication startup
race conditions.

regards, tom lane




Re: [HACKERS] Proposal: global index

2017-08-24 Thread Adam Brusselback
My understanding is that global indexes allow foreign keys to work
naturally with partitioned tables, or tables in an inheritance hierarchy.
That is pretty big IMO, as it allows you to partition a table without
making a trade-off in your database integrity.


Re: [HACKERS] [PATCH] Push limit to sort through a subquery

2017-08-24 Thread Tom Lane
Douglas Doole  writes:
> My first thought was to do a regex over the explain output to mask the
> troublesome value, but I couldn't figure out how to make it work and didn't
> find an example (admittedly didn't spent a huge amount of time hunting
> though). I'd love to see how to put it together.

I did it like this:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=1177ab1dabf72bafee8f19d904cee3a299f25892

regards, tom lane




Re: [HACKERS] [PATCH] Push limit to sort through a subquery

2017-08-24 Thread Douglas Doole
>
> No.  You can't easily avoid recursion for the merge-append case, since
> that has to descend to multiple children.


Agreed. That's why it's not in the while loop in my sketch of the suggested
rework.


>   TBH I dislike the fact that
> you did the subquery case randomly differently from the existing cases;
> it should just have been added as an additional recursive case.  Since
> this is done only once at query startup, worrying about hypothetical
> micro-performance issues seems rather misguided.
>

Habit mostly - why write code with potential performance problems when the
alternative is just as easy to read?

I disagree with saying "it's just done at query startup so don't worry
about it too much". Back in my days with DB2 we were working on the TPCC
benchmark (when it was still relevant) and we found that the startup cost
was a huge factor when doing rapid fire, cached, simple queries. In many
cases the startup cost was >50% of the overall query execution time. Our
testing here in Salesforce is showing a similar pattern in some of our
workloads and PostgreSQL.

I'm not trying to argue that this particular issue of recurse/don't recurse
is going to make or break anything. It's just where my head's at when I
look at the code.

- Doug
Salesforce


Re: [HACKERS] type cache for concat functions

2017-08-24 Thread Alexander Kuzmenkov

Hi Pavel,

I tried applying your patch, it applies and compiles fine, check and 
checkworld pass.


I ran a simple performance test: select 
concat(generate_series(1,10), ... [x5 total]) vs. select 
generate_series(1,10)::text || ... .
The || operator runs in 60 ms, while unpatched concat takes 90 ms and 
patched concat 55 ms.


About the code:
* There seems to be an extra tab here:
FmgrInfo*typcache;
It's not aligned with the previous declaration with tab size 4.

* You could allocate the cache and store it into the context inside 
rebuildConcatCache. It would return the cache pointer, and the 
calling code would look cleaner -- just one line. This is a matter of 
taste though.


* The nearby functions use snake_case, so it should be 
rebuild_concat_cache too.


Overall the patch looks good to me.

--
Alexander Kuzmenkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [HACKERS] [PATCH] Push limit to sort through a subquery

2017-08-24 Thread Tom Lane
I wrote:
> To get the buildfarm back to green, I agree with the suggestion of
> setting force_parallel_mode=off for this test, at least for now.

Actually, there's an easier way: we can just make the test table be
TEMP.  That is a good idea anyway to prevent possible interference
from auto-analyze, which conceivably could come along at just the
wrong time and cause a plan change.
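A sketch of that change (table and column names hypothetical):

```sql
-- A temporary table is session-local, so an autovacuum worker cannot
-- come along mid-test, analyze the table, and change the chosen plan:
CREATE TEMP TABLE sq_limit (c1 int, c2 int);
INSERT INTO sq_limit SELECT i, i FROM generate_series(1, 8) AS i;
```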

regards, tom lane




Re: [HACKERS] [PATCH] Push limit to sort through a subquery

2017-08-24 Thread Douglas Doole
>
> I don't greatly like the way that the regression test case filters
> the output; it hides too much IMO.  I'd be inclined to try to return
> the EXPLAIN output with as much detail preserved as possible.  Maybe
> we could use regex substitution on lines of the output to get rid of
> stuff that won't be invariant.
>

My first thought was to do a regex over the explain output to mask the
troublesome value, but I couldn't figure out how to make it work and didn't
find an example (admittedly didn't spent a huge amount of time hunting
though). I'd love to see how to put it together.

Doug
- Salesforce


Re: [HACKERS] [PATCH] Push limit to sort through a subquery

2017-08-24 Thread Tom Lane
Robert Haas  writes:
> Buildfarm members with force_parallel_mode=regress are failing now.  I
> haven't had a chance to investigate exactly what's going on here, but
> I think there are probably several issues:

> 1. It's definitely the case that the details about a sort operation
> aren't propagated from a worker back to the leader.  This has elicited
> complaint previously.

Check; this must be the explanation for the buildfarm failures.  That'd
be worth fixing but it seems like material for an independent patch.

> 2. pass_down_bound() probably gets the Gather node at the top of the
> tree and stops, since it doesn't know how to pass down a bound through
> a Gather.  We could probably teach it to pass down the bound through
> Gather or Gather Merge on the same theory as Append/MergeAppend.

While that might be worth doing, it's not the issue here, because
the forced Gather is on top of the Limit node.

> In the short run, I'm not sure we have a better alternative than
> removing this test - unless somebody has a better idea? - but it would
> be good to work on all of the above problems.

To get the buildfarm back to green, I agree with the suggestion of
setting force_parallel_mode=off for this test, at least for now.

I don't greatly like the way that the regression test case filters
the output; it hides too much IMO.  I'd be inclined to try to return
the EXPLAIN output with as much detail preserved as possible.  Maybe
we could use regex substitution on lines of the output to get rid of
stuff that won't be invariant.

Unless you're already on it, I can go do that.

regards, tom lane




Re: [HACKERS] [PATCH] Push limit to sort through a subquery

2017-08-24 Thread Tom Lane
Douglas Doole  writes:
> Would we be better off moving those cases into the while loop I added to
> avoid the recursion?

No.  You can't easily avoid recursion for the merge-append case, since
that has to descend to multiple children.  TBH I dislike the fact that
you did the subquery case randomly differently from the existing cases;
it should just have been added as an additional recursive case.  Since
this is done only once at query startup, worrying about hypothetical
micro-performance issues seems rather misguided.

Perhaps, since this function is recursive, it ought to have a
check_stack_depth call, just to be safe.  I'm dubious though that it could
ever eat more stack than the preceding node-initialization calls, so maybe
we needn't bother.

In the long run I wonder whether we should convert "push down bound" into
a generic node operation like the others managed by execAmi.c.  It seems
like you could push a limit through any node type that's guaranteed not to
reduce the number of rows returned, and so we're missing cases.  Maybe
they're not ones that are important in practice, but I'm unsure.
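The case being optimized here can be seen with a query of this shape (table name hypothetical; plan sketch illustrative):

```sql
-- A LIMIT above a subquery scan above a sort: once the bound is pushed
-- down through the subquery scan, the sort can stop after the top N rows
-- (a bounded, "top-N" sort) instead of sorting everything:
EXPLAIN (COSTS OFF)
SELECT * FROM (SELECT * FROM big_table ORDER BY x) AS sub LIMIT 10;
-- Limit
--   ->  Subquery Scan on sub
--         ->  Sort
--               ->  Seq Scan on big_table
```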

> The two casts I added (to SubqueryScanState and Node) should probably be
> changed to castNode() calls.

The SubqueryScanState cast seems fine given that it immediately follows an
IsA() check.  You can't use castNode() for a cast to Node, because that's
a generic not a specific node type.

regards, tom lane




Re: [HACKERS] Thoughts on unit testing?

2017-08-24 Thread Torsten Zuehlsdorff



On 13.08.2017 21:19, Peter Geoghegan wrote:

On Thu, Aug 10, 2017 at 2:53 PM, Thomas Munro
 wrote:

The current regression tests, isolation tests and TAP tests are very
good (though I admit my experience with TAP is limited), but IMHO we
are lacking support for C-level unit testing.  Complicated, fiddly
things with many states, interactions, edge cases etc can be hard to
get full test coverage on from the outside.  Consider
src/backend/utils/mmgr/freepage.c as a case in point.


It is my understanding that much of the benefit of unit testing comes
from maintainability. 


I never had this understanding. I write tests to verify the expected 
behavior, not the behavior as coded. If possible I separate the people 
writing unit tests from the ones writing the function itself. Having 
someone really read the documentation, or translate the expectation into 
a test case, makes sure the function itself works well.


It also really saves time in the long run. Subtle changes / bugs can be 
caught with unit tests. A simple bug report can also be translated into 
a unit test to make sure that the bugfix really works and that no 
regression happens later. I once literally saved a week of work with a 
single unit test.


There are many other advantages, but to earn them the code needs to be 
written to be testable. This is often not the case. Most literature 
advises mocking, mixins, or other techniques, which most of the time just 
translate into "this code is not written to be testable" or "this 
technique / language / concept / etc. is not very good at being testable".


Greetings,
Torsten




Re: [HACKERS] [PATCH] Push limit to sort through a subquery

2017-08-24 Thread Tom Lane
Robert Haas  writes:
> On Wed, Aug 23, 2017 at 6:55 PM, Douglas Doole  wrote:
>> From previous experience, pushing the limit to the workers has the potential
>> to be a big win.

> Here's a not-really-tested draft patch for that.

Both this patch and the already-committed one contain useless (and not
very cheap) expression_returns_set tests.  Since commit 69f4b9c85,
SRFs could only appear in the tlist of ProjectSet nodes.
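The effect of that commit is visible in any plan with a targetlist SRF; since v10 the SRF always sits in its own node:

```sql
-- A set-returning function in the select list is now evaluated in a
-- dedicated ProjectSet node rather than inside an arbitrary targetlist:
EXPLAIN (COSTS OFF) SELECT generate_series(1, 3);
--  ProjectSet
--    ->  Result
```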

No opinion on whether this patch is right otherwise.

regards, tom lane




Re: [HACKERS] proposal: psql command \graw

2017-08-24 Thread Pavel Stehule
2017-08-24 17:27 GMT+02:00 Fabien COELHO :

>
> Hello,
>
> I'll wait for the winter commitfest
>>
>
> Argh, sorry, I put it in the September commitfest, and it seems that it
> cannot be changed afterwards.
>
> Maybe you can close it and redeclare it in the commitfest you want?
>

It can be moved


>
> for some other ideas, tests, comments - it is a topic for PostgreSQL 11, and
>> then there will be time for discussion.
>>
>
> I was rather seeing that as a small patch which could have been processed
> quickly, but if you expect feedback maybe it is better to give it more time.


I would like to verify the format, but thank you very much for your interest :)

Regards

Pavel




>
>
> --
> Fabien.
>


Re: [HACKERS] proposal: psql command \graw

2017-08-24 Thread Fabien COELHO


Hello,


I'll wait for the winter commitfest


Argh, sorry, I put it in the September commitfest, and it seems that it 
cannot be changed afterwards.


Maybe you can close it and redeclare it in the commitfest you want?

for some other ideas, tests, comments - it is a topic for PostgreSQL 11, 
and then there will be time for discussion.


I was rather seeing that as a small patch which could have been processed 
quickly, but if you expect feedback maybe it is better to give it more 
time.


--
Fabien.




Re: [HACKERS] Page Scan Mode in Hash Index

2017-08-24 Thread Jesper Pedersen

On 08/24/2017 01:21 AM, Ashutosh Sharma wrote:

Done.

Attached are the patches with above changes.



Thanks !

Based on the feedback in this thread, I have moved the patch to "Ready 
for Committer".


Best regards,
 Jesper





Re: [HACKERS] Updating line length guidelines

2017-08-24 Thread Tom Lane
[ returning from the wilds of Kentucky... ]

Stephen Frost  writes:
> * Craig Ringer (cr...@2ndquadrant.com) wrote:
>> Personally I'd be fine with 100 or so, but when I'm using buffers side by
>> side, or when I'm working in poor conditions where I've set my terminal to
>> "giant old people text" sizes, I remember the advantages of a width limit.

> I wouldn't be against 100 either really, but I don't really feel all
> that strongly either way.  Then again, there is the back-patching pain
> which would ensue to consider..

Yeah.  If we changed this, we'd have to adjust pgindent's settings to
match, whereupon it would immediately reflow every unprotected comment
block.  When Bruce changed the effective right margin for comment blocks
by *one* column back around 8.1, pgindent changed enough places to cause
nasty back-patching pain for years afterward.  I don't even want to
think about how much worse a 20-column change would be.

(Of course, we could avoid that problem by re-indenting all the active
back branches along with master ... but I bet we'd get push-back from
people maintaining external patch sets.)

Aside from that, I'm -1 on changing the target column width for pretty
much the same reasons other people have stated.  I also agree with not
breaking error message texts across lines.  Having them wrap is kind of
ugly, but it's tolerable, and moving the target width by 10 or 20
wouldn't make that situation very much better anyway.

I don't feel a strong need to touch existing code just to make it fit in
80 columns, but we could try to improve that situation whenever we have
to touch a function for other reasons.

regards, tom lane




Re: [HACKERS] Explicit relation name in VACUUM VERBOSE log

2017-08-24 Thread Tom Lane
Alvaro Herrera  writes:
> Michael Paquier wrote:
>> Hm. I am not sure what you have in mind here.

> I'm thinking that this data is useful to analyze as a stream of related
> events, rather than as individual data points.  Grepping logs in order to
> extract the numbers is lame and slow.

Yes.  And there is a bigger issue here, which is that the output of
VACUUM VERBOSE is meant to be sent to the client for *human* readability.
(As you noted, INFO-level messages don't normally go to the server log
in the first place, and that's not by accident.)  Repeating the full table
name in every line will be really annoying for that primary use-case.
I am not sure what we want to do to address Masahiko-san's use-case, but
ideally his workflow wouldn't involve log-scraping at all.  It definitely
shouldn't involve depending on INFO messages --- for one thing, what
happens when those get translated?

regards, tom lane




Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-08-24 Thread Bossart, Nathan
On 8/23/17, 11:59 PM, "Michael Paquier"  wrote:
> Robert, Amit and other folks working on extending the existing
> partitioning facility would be in better position to answer that, but
> I would think that we should have something as flexible as possible,
> and storing a list of relation OID in each VacuumRelation makes it
> harder to track the uniqueness of relations vacuumed. I agree that the
> concept of a partition with multiple parents induces a lot of
> problems. But the patch as proposed worries me as it complicates
> vacuum() with a double loop: one for each relation vacuumed, and one
> inside it with the list of OIDs. Modules calling vacuum() could also
> use flexibility, being able to analyze a custom list of columns for
> each relation has value as well.

I understand your concern, and I'll look into this for v9.  I think the
majority of this change will go into get_rel_oids(...).  Like you, I am
also curious to what the partitioning folks think.

> + * relations is a list of VacuumRelations to process.  If it is NIL, all
> + * relations in the database are processed.
> Typo here, VacuumRelation is singular.

I'll make this change in v9.

I should also note that the dedupe_relations(...) function needs another
small fix for column lists.  Since the lack of a column list means that we
should ANALYZE all columns, a duplicate table name with an empty column
list should effectively null out any other specified columns.  For example,
"ANALYZE table (a, b), table;" currently dedupes to the equivalent of
"ANALYZE table (a, b);" when it should dedupe to "ANALYZE table;".

Nathan




Re: [HACKERS] Update low-level backup documentation to match actual behavior

2017-08-24 Thread David Steele
Hi Michael,

Thanks for reviewing!  Sorry for the late response, those eclipses don't
just chase themselves...

On 8/20/17 10:22 PM, Michael Paquier wrote:
> On Fri, Aug 18, 2017 at 3:35 AM, David Steele  wrote:
> 
> + Prior to PostgreSQL 9.6, this
> Markup ?

Fixed.

> +  Note well that if the server crashes during the backup it may not be
> +  possible to restart until the backup_label file has been
> +  manually deleted from the PGDATA directory.
> Missing a markup  here for PGDATA.

Fixed.

> s/Note well/Note as well/, no?

This was a literal translation of nota bene but I've changed it to
simply "Note" as that seems common in the docs.

> - This function, when called on a primary, terminates the backup mode and
> + This function terminates backup mode and
>   performs an automatic switch to the next WAL segment. The reason for the
>   switch is to arrange for the last WAL segment written during the backup
> - interval to be ready to archive.  When called on a standby, this 
> function
> - only terminates backup mode.  A subsequent WAL segment switch will be
> - needed in order to ensure that all WAL files needed to restore the 
> backup
> - can be archived; if the primary does not have sufficient write activity
> - to trigger one, pg_switch_wal should be executed on
> - the primary.
> + interval to be ready to archive.
> pg_stop_backup() still waits for the last WAL segment to be archived
> on the primary. Perhaps we'd want to mention that?

That's detailed in the next paragraph, ISTM.

> Documentation does not state yet that the use of low-level APIs for
> exclusive backups are not supported on standbys.

The first paragraph of the exclusive section states, "this type of
backup can only be taken on a primary".

> Now in the docs:
>  If the backup process monitors and ensures that all WAL segment files
>  required for the backup are successfully archived then the second
>  parameter (which defaults to true) can be set to false to have
> I would recommend adding some details here and mention
> "wait_for_archive" instead of "second parameter". 

Done.

> I am wondering as
> well if this paragraph should be put in red with a warning or
> something like that. This is really, really important to ensure
> consistent backups!

Maybe, but this logic could easily apply to a lot of sections in the
backup docs.  I'm not sure where it would end.

Thanks!
-- 
-David
da...@pgmasters.net
diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index 0e7c6e2051..95aeb35507 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -889,8 +889,11 @@ SELECT pg_start_backup('label', false, false);
 
 SELECT * FROM pg_stop_backup(false, true);
 
- This terminates the backup mode and performs an automatic switch to
- the next WAL segment.  The reason for the switch is to arrange for
+ This terminates backup mode. On a primary, it also performs an automatic
+ switch to the next WAL segment.  On a standby, it is not possible to
+ automatically switch WAL segments, so you may wish to run
+ pg_switch_wal on the primary to perform a manual
+ switch. The reason for the switch is to arrange for
  the last WAL segment file written during the backup interval to be
  ready to archive.
 
@@ -908,9 +911,12 @@ SELECT * FROM pg_stop_backup(false, true);
  Once the WAL segment files active during the backup are archived, you are
  done.  The file identified by pg_stop_backup's first return
  value is the last segment that is required to form a complete set of
- backup files.  If archive_mode is enabled,
+ backup files.  On a primary, if archive_mode is enabled and 
the
+ wait_for_archive parameter is true,
  pg_stop_backup does not return until the last segment has
  been archived.
+ On a standby, archive_mode must be always in order
+ for pg_stop_backup to wait.
  Archiving of these files happens automatically since you have
  already configured archive_command. In most cases this
  happens quickly, but you are advised to monitor your archive
@@ -926,8 +932,9 @@ SELECT * FROM pg_stop_backup(false, true);
 
 
  If the backup process monitors and ensures that all WAL segment files
- required for the backup are successfully archived then the second
- parameter (which defaults to true) can be set to false to have
+ required for the backup are successfully archived then the
+ wait_for_archive parameter (which defaults to true) can be set
+ to false to have
  pg_stop_backup return as soon as the stop backup record is
  written to the WAL.  By default, pg_stop_backup will wait
  until all WAL has been archived, which can take some time.  This option
@@ -943,9 +950,9 @@ SELECT * FROM pg_stop_backup(false, true);
 Making an exclusive low level backup
 
  The process for an exclusive backup is mostly the same

[HACKERS] [PATCH] Fix drop replication slot blocking instead of returning error

2017-08-24 Thread Simone Gotti
Hi all,

I noticed that in postgres 10beta3, calling pg_drop_replication_slot on an
active slot will block until the slot is released, instead of returning an
error as in pg 9.6. Since this is a change from the previous behavior and
the docs weren't changed, I made a patch to restore the previous behavior.

Thanks,

Simone.

--

after git commit 9915de6c1cb calls to pg_drop_replication_slot or the
replication protocol DROP_REPLICATION_SLOT command will block when a
slot is in use instead of returning an error as before (as the doc
states).

This patch will set nowait to true to restore the previous
behavior.
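The difference in behavior, sketched (slot name and PID hypothetical):

```sql
-- With a consumer still attached to the slot:
SELECT pg_drop_replication_slot('my_slot');
-- 9.6:      ERROR:  replication slot "my_slot" is active for PID 12345
-- 10beta3:  blocks until the consumer disconnects
```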
diff --git a/src/backend/replication/slotfuncs.c b/src/backend/replication/slotfuncs.c
index d4cbd83bde..ab776e85d2 100644
--- a/src/backend/replication/slotfuncs.c
+++ b/src/backend/replication/slotfuncs.c
@@ -171,7 +171,7 @@ pg_drop_replication_slot(PG_FUNCTION_ARGS)
 
 	CheckSlotRequirements();
 
-	ReplicationSlotDrop(NameStr(*name), false);
+	ReplicationSlotDrop(NameStr(*name), true);
 
 	PG_RETURN_VOID();
 }
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index 03e1cf44de..c6b40ec0fb 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -1028,7 +1028,7 @@ CreateReplicationSlot(CreateReplicationSlotCmd *cmd)
 static void
 DropReplicationSlot(DropReplicationSlotCmd *cmd)
 {
-	ReplicationSlotDrop(cmd->slotname, false);
+	ReplicationSlotDrop(cmd->slotname, true);
 	EndCommand("DROP_REPLICATION_SLOT", DestRemote);
 }
 



Re: [HACKERS] auto_explain : log queries with wrong estimation

2017-08-24 Thread Maksim Milyutin

On 24.08.2017 14:56, Adrien Nayrat wrote:


Hi hackers,


Hi,

I am trying to make a patch for auto_explain in order to log queries 
with wrong estimates.


I compare planned row id : queryDesc->planstate->plan->plan_rows

Vs ntuples : queryDesc->planstate->instrument->ntuples;


AFAICS you want to introduce two additional variables:
 - auto_explain_log_estimate_ratio, which denotes the minimum ratio (>= 1) 
between the real value and the planned one. I would add a 'min' prefix 
before 'ratio'.
 - auto_explain_log_estimate_min_rows, the minimum absolute difference 
between those two values. IMHO this name is somewhat poor; the suffix 
'min_diff_rows' looks better.
If the real values (ratio and diff) both exceed these thresholds, you log 
the query. Am I right?


If I understand correctly, instrumentation is used only with explain. So 
my patch works only with explain (and segfaults without).


Instrumentation is initialized only with analyze (log_analyze is true)[1]


Is there a simple way to get ntuples?


It's an interesting question. At one time I also didn't find any way to 
get the number of tuples emitted from a node.



1. contrib/auto_explain/auto_explain.c:221

--
Regards,
Maksim Milyutin



Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-24 Thread Thomas Munro
On Wed, Aug 23, 2017 at 11:58 PM, Thomas Munro
 wrote:
> On Wed, Aug 23, 2017 at 5:46 PM, Andres Freund  wrote:
>> Notes for possible followup commits of the dshash API:
>> - nontrivial portions of dshash are essentially critical sections lest
>>   dynamic shared memory is leaked. Should we, short term, introduce
>>   actual critical section markers to make that more obvious? Should we,
>>   longer term, make this more failsafe / easier to use, by
>>   extending/emulating memory contexts for dsa memory?
>
> Hmm.  I will look into this.

Yeah, dshash_create() leaks the control object if the later allocation
of the initial hash table array raises an error.  I think that should
be fixed -- please see 0001 in the new patch set attached.

The other two places where shared memory is allocated are resize() and
insert_into_bucket(), and both of those seem exception-safe to me: if
dsa_allocate() elogs then nothing is changed, and the code after that
point is no-throw.  Am I missing something?

>> - SharedRecordTypmodRegistryInit() is called from GetSessionDsmHandle()
>>   which calls EnsureCurrentSession(), but
>>   SharedRecordTypmodRegistryInit() does so again - sprinkling those
>>   around liberally seems like it could hide bugs.
>
> Yeah.  Will look into this.

One idea is to run InitializeSession() in InitPostgres() instead, so
that CurrentSession is initialized at startup, but initially empty.
See attached.  (I realised that that terminology is a bit like a large
volume called FRENCH CUISINE which turns out to have just one recipe
for an omelette in it, but you have to start somewhere...)  Better
ideas?

-- 
Thomas Munro
http://www.enterprisedb.com


shared-record-typmods-v9.patchset.tgz
Description: GNU Zip compressed data



[HACKERS] auto_explain : log queries with wrong estimation

2017-08-24 Thread Adrien Nayrat
Hi hackers,


I am trying to make a patch for auto_explain in order to log queries with 
wrong estimates.

I compare planned row id : queryDesc->planstate->plan->plan_rows

Vs ntuples : queryDesc->planstate->instrument->ntuples;

If I understand correctly, instrumentation is used only with explain. So my patch
works only with explain (and segfaults without).


Is there a simple way to get ntuples?

Attached a naive patch.

Thanks :)

-- 
Adrien NAYRAT

http://dalibo.com - http://dalibo.org
diff --git a/contrib/auto_explain/auto_explain.c 
b/contrib/auto_explain/auto_explain.c
index edcb91542a..165fe8e4ae 100644
--- a/contrib/auto_explain/auto_explain.c
+++ b/contrib/auto_explain/auto_explain.c
@@ -30,6 +30,8 @@ static bool auto_explain_log_timing = true;
 static int auto_explain_log_format = EXPLAIN_FORMAT_TEXT;
 static bool auto_explain_log_nested_statements = false;
 static double auto_explain_sample_rate = 1;
+static int auto_explain_log_estimate_ratio = -1;
+static int auto_explain_log_estimate_min_rows = -1;
 
 static const struct config_enum_entry format_options[] = {
{"text", EXPLAIN_FORMAT_TEXT, false},
@@ -52,7 +54,9 @@ static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
 static bool current_query_sampled = true;
 
 #define auto_explain_enabled() \
-   (auto_explain_log_min_duration >= 0 && \
+   (auto_explain_log_min_duration >= 0 || \
+auto_explain_log_estimate_ratio >= 0 || \
+auto_explain_log_estimate_min_rows >= 0 && \
 (nesting_level == 0 || auto_explain_log_nested_statements))
 
 void   _PG_init(void);
@@ -176,6 +180,31 @@ _PG_init(void)
 NULL,
 NULL);
 
+   DefineCustomIntVariable("auto_explain.estimate_ratio",
+"Planned / returned 
row ratio.",
+NULL,
+
&auto_explain_log_estimate_ratio,
+-1,
+-1, INT_MAX / 1000,
+PGC_SUSET,
+0,
+NULL,
+NULL,
+NULL);
+
+   DefineCustomIntVariable("auto_explain.estimate_min_rows",
+"Planned / returned 
min rows.",
+NULL,
+
&auto_explain_log_estimate_min_rows,
+-1,
+-1, INT_MAX / 1000,
+PGC_SUSET,
+0,
+NULL,
+NULL,
+NULL);
+
+
EmitWarningsOnPlaceholders("auto_explain");
 
/* Install hooks. */
@@ -309,6 +338,30 @@ explain_ExecutorEnd(QueryDesc *queryDesc)
if (queryDesc->totaltime && auto_explain_enabled() && 
current_query_sampled)
{
double  msec;
+   int estimate_ratio = -1;
+   int estimate_rows = -1;
+   int plan_rows = 
queryDesc->planstate->plan->plan_rows;
+   int ntuples = 
queryDesc->planstate->instrument->ntuples;
+
+   if (plan_rows == 0 && ntuples == 0)
+   {
+   estimate_ratio = 1;
+   }
+   else if ( ntuples > plan_rows && plan_rows > 0)
+   {
+   estimate_ratio = ntuples / plan_rows;
+   }
+   else if ( ntuples > 0)
+   {
+   estimate_ratio = plan_rows / ntuples;
+   }
+   else
+   {
+   estimate_ratio = INT_MAX;
+   }
+
+   estimate_rows = abs(ntuples - plan_rows);
+
 
/*
 * Make sure stats accumulation is done.  (Note: it's okay if 
several
@@ -318,7 +371,10 @@ explain_ExecutorEnd(QueryDesc *queryDesc)
 
/* Log plan if duration is exceeded. */
msec = queryDesc->totaltime->total * 1000.0;
-   if (msec >= auto_explain_log_min_duration)
+   if ((msec >= auto_explain_log_min_duration &&
+ auto_explain_log_min_duration >= 0 ) ||
+   (estimate_ratio >= auto_explain_log_es

[HACKERS] MCXT_ALLOC_NO_OOM -> DSA_ALLOC_NO_OOM in dsa.c

2017-08-24 Thread Thomas Munro
Hi hackers,

I spotted a (harmless) thinko in dsa.c.  Please see attached.

-- 
Thomas Munro
http://www.enterprisedb.com


dsa-alloc-no-oom.patch
Description: Binary data



Re: [HACKERS] Default Partition for Range

2017-08-24 Thread Beena Emerson
Hello,

On Thu, Aug 24, 2017 at 3:08 PM, Jeevan Ladhe
 wrote:
> Hi Beena,
>
>
> On Tue, Aug 22, 2017 at 5:22 PM, Beena Emerson 
> wrote:
>>
>> On Tue, Aug 22, 2017 at 4:20 PM, Beena Emerson 
>> wrote:
>> > Hi Jeevan,
>> >
>> > On Tue, Aug 22, 2017 at 7:53 AM, Jeevan Ladhe
>> >  wrote:
>> >>
>> >>
>> >> 4.
>> >>  static List *
>> >> -get_qual_for_range(PartitionKey key, PartitionBoundSpec *spec)
>> >> +get_qual_for_range(Relation parent, PartitionBoundSpec *spec,
>> >> +  bool for_default)
>> >>  {
>> >>
>> >> The addition and the way flag ‘for_default’ is being used is very
>> >> confusing.
>> >> At this moment I am not able to think about a alternate solution to the
>> >> way you have used this flag. But will try and see if I can come up with
>> >> an alternate approach.
>> >
>> > Well, we need to skip the get_range_nulltest while collecting the quals
>> > of the other partitions to make one for the default. We could skip this flag
>> > and remove the NullTest from the qual returned for each partition but
>> > I am not sure if that's a good way to go about it.
>> >
>> >
>>
>> Please find attached a patch which removes the for_default flag from
>> the get_qual_for_range function. This patch is just to show an idea
>> and should be applied over my earlier patch. There could be a better
>> way to do this. Let me know your opinion.
>>
>
> +
> + foreach (lc, list_tmp)
> + {
> + Node *n = (Node *) lfirst(lc);
> +
> + if (IsA(n, NullTest))
> + {
> + list_delete(part_qual, n);
> + count++;
> + }
> + }
> +
>
> I think it's unnecessary overhead to have the NullTest prepared first
> and then search for it and remove it from the partition qual. This is very
> ugly.

Yes, I felt the same but just put it out there to see if someone can
improve on this.

> I think the original idea is still better compared to this. Maybe we can
> rename
> the 'for_default' flag to something like 'part_of_default_qual'.
>

Ya. I think that would work.



-- 

Beena Emerson

EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Default Partition for Range

2017-08-24 Thread Jeevan Ladhe
Hi Beena,


On Tue, Aug 22, 2017 at 5:22 PM, Beena Emerson 
wrote:

> On Tue, Aug 22, 2017 at 4:20 PM, Beena Emerson 
> wrote:
> > Hi Jeevan,
> >
> > On Tue, Aug 22, 2017 at 7:53 AM, Jeevan Ladhe
> >  wrote:
> >>
> >>
> >> 4.
> >>  static List *
> >> -get_qual_for_range(PartitionKey key, PartitionBoundSpec *spec)
> >> +get_qual_for_range(Relation parent, PartitionBoundSpec *spec,
> >> +  bool for_default)
> >>  {
> >>
> >> The addition and the way the ‘for_default’ flag is being used is very
> >> confusing.
> >> At the moment I am not able to think of an alternative solution to the
> >> way you have used this flag, but I will try and see if I can come up
> >> with an alternative approach.
> >
> > Well, we need to skip get_range_nulltest while collecting the quals of
> > the other partitions to build the one for the default partition. We
> > could drop this flag and instead remove the NullTest from the qual
> > returned for each partition, but I am not sure that's a good way to go
> > about it.
> >
> >
>
> Please find attached a patch which removes the for_default flag from
> the get_qual_for_range function. This patch is just to show an idea
> and should be applied over my earlier patch. There could be a better
> way to do this. Let me know your opinion.
>
>
+
+ foreach (lc, list_tmp)
+ {
+ Node *n = (Node *) lfirst(lc);
+
+ if (IsA(n, NullTest))
+ {
+ list_delete(part_qual, n);
+ count++;
+ }
+ }
+

I think it's unnecessary overhead to prepare the NullTest first and then
search for it and remove it from the partition qual. This is very ugly.
I think the original idea is still better compared to this. Maybe we can
rename the 'for_default' flag to something like 'part_of_default_qual'.

Also, as discussed offline I will merge this with default list partitioning
patch.

Regards,
Jeevan Ladhe


Re: [HACKERS] coverage analysis improvements

2017-08-24 Thread Michael Paquier
On Wed, Aug 23, 2017 at 3:47 AM, Peter Eisentraut
 wrote:
> On 8/21/17 01:23, Michael Paquier wrote:
>> Patch 0001 fails to apply as of c629324.
>
> Updated patches attached.
>
>> Which versions of lcov and gcov did you use for your tests?
>
> LCOV version 1.13, and gcc-7 and gcc-6

LCOV can be compiled from here (I have, for example, just changed PREFIX
in the main makefile):
https://github.com/linux-test-project/lcov.git
Testing down to 1.11, I am not seeing issues with your patches. I have
used gcc 7.1.1.

Patch 0001 removes the .gcov files, which offer a text representation
of the coverage. Sometimes I use those from a terminal... Not sure
about the others, but that's my status on the matter. It also removes
the "coverage" target. Please note that on some distributions, like,
err... Arch Linux, lcov is not packaged in the core repositories and it
is necessary to use external sources (AUR). It would be nice to keep
the original gcov calls as well, and to keep the split between
"coverage-html" and "coverage". I also think that HTML generation
should be done only if lcov is found, and that text generation should
be done only if gcov is found. It is annoying to see --enable-coverage
fail just because lcov is missing, when lcov is not mandatory for
coverage.
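
A sketch of the split being argued for, assuming the usual gcov/lcov/genhtml
tool names (the target names "coverage" and "coverage-html" come from the
mail; the recipes here are illustrative, not the project's actual makefile
rules):

```make
# Illustrative only: text coverage needs just gcov; HTML needs lcov/genhtml.
coverage:
	gcov -p *.gcda                      # produces the .gcov text reports

coverage-html: lcov.info
	genhtml -o coverage-html lcov.info  # only this path requires lcov

lcov.info:
	lcov --capture --directory . --output-file lcov.info
```

With this shape, --enable-coverage plus "make coverage" needs only gcov, and
a missing lcov breaks only the HTML target.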

Patches 2, 4, 5, 6 and 9 are good independent ideas.
-- 
Michael




Re: [HACKERS] [PATCH v1] Add and report the new "in_hot_standby" GUC pseudo-variable.

2017-08-24 Thread Michael Banck
Hi,

On Wed, May 24, 2017 at 07:16:06AM +, Tsunakawa, Takayuki wrote:
> I confirmed that the attached patch successfully provides:

I was going to take a look at this, but the patch no longer applies
cleanly for me:

Hunk #1 succeeded at 1474 (offset 49 lines).
Hunk #2 succeeded at 1762 (offset 49 lines).
Hunk #3 succeeded at 1773 (offset 49 lines).
patching file doc/src/sgml/protocol.sgml
patching file src/backend/access/transam/xlog.c
Hunk #1 succeeded at 7909 (offset 5 lines).
patching file src/backend/tcop/postgres.c
Hunk #1 succeeded at 3737 (offset 15 lines).
patching file src/backend/utils/misc/check_guc
patching file src/backend/utils/misc/guc.c
Hunk #1 succeeded at 147 with fuzz 1.
Hunk #5 succeeded at 10062 (offset 11 lines).
patching file src/interfaces/libpq/fe-connect.c
Hunk #1 succeeded at 1178 (offset 112 lines).
Hunk #2 succeeded at 2973 (offset 124 lines).
Hunk #3 succeeded at 3003 (offset 124 lines).
Hunk #4 succeeded at 3067 (offset 124 lines).
Hunk #5 FAILED at 3022.
Hunk #6 succeeded at 3375 (offset 144 lines).
1 out of 6 hunks FAILED -- saving rejects to file
src/interfaces/libpq/fe-connect.c.rej
patching file src/interfaces/libpq/fe-exec.c
patching file src/interfaces/libpq/libpq-int.h
Hunk #1 succeeded at 361 (offset 1 line).
Hunk #2 succeeded at 421 (offset -1 lines).

Can you rebase it, please?


Best

Michael

-- 
Michael Banck
Projektleiter / Senior Berater
Tel.: +49 2166 9901-171
Fax:  +49 2166 9901-100
Email: michael.ba...@credativ.de

credativ GmbH, HRB Mönchengladbach 12080
USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer




Re: [HACKERS] Quorum commit for multiple synchronous replication.

2017-08-24 Thread Masahiko Sawada
On Thu, Aug 24, 2017 at 3:11 AM, Josh Berkus  wrote:
> On 08/22/2017 11:04 PM, Masahiko Sawada wrote:
>> WARNING:  what you did is ok, but you might have wanted to do something else
>>
>> First of all, whether or not that can properly be called a warning is
>> highly debatable.  Also, if you do that sort of thing to your spouse
>> and/or children, they call it "nagging".  I don't think users will
>> like it any more than family members do.
>
> Realistically, we'll support the backwards-compatible syntax for 3-5
> years.  Which is fine.
>
> I suggest that we just gradually deprecate the old syntax from the docs,
> and then around Postgres 16 eliminate it.  I posit that that's better
> than changing the meaning of the old syntax out from under people.
>

It seems to me that nobody is explicitly voting for making the quorum
commit behavior the default. Some folks suggest keeping backward
compatibility in PG10 and gradually deprecating the old syntax, and
deprecating it in the docs is all that is possible for PG10.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] proposal: psql command \graw

2017-08-24 Thread Pavel Stehule
2017-08-24 8:53 GMT+02:00 Fabien COELHO :

>
> "column_header" is somehow redundant with "tuples_only". Use the
>>> existing one instead of adding a new one?
>>>
>>
>> It is different: the result with tuples_only is just the tuples, with
>> no column names, no title, no footer. I needed a new special flag to
>> enhance tuples_only to also print column names.
>>
>
> I do not understand. If you keep the special print_raw function, it can
> use tuples_only set to true for output without column names, and false
> for output with column names?


Yes, in this case it would be true.


>
>
> More generally, ISTM that the same effect could be achieved without
>>> adding a new print function, but by setting more options (separator,
>>> ...) and calling an existing print function. If so, I think it would
>>> reduce the code size.
>>>
>>
>> Maybe, maybe not. If you remove PRINT_RAW, you need to enhance
>> PRINT_UNALIGNED to use one-shot parameters, and you have to teach it to
>> print column names in tuples_only mode. The code's length will be the
>> same. The form of this patch is not final.
>>
>
> Hmmm. Ok. It depends on the change implication on the print unaligned
> function.


It is open. I'll wait until the winter commitfest for other ideas, tests,
and comments. It is a topic for PostgreSQL 11, so there will be time for
discussion then.

For now, I'll be happy if other people test it with a larger set of
applications and send me feedback.

Regards

Pavel


>
>
> --
> Fabien.
>


Re: [HACKERS] pgbench: Skipping the creating primary keys after initialization

2017-08-24 Thread Masahiko Sawada
On Tue, Aug 22, 2017 at 5:15 PM, Fabien COELHO  wrote:
>
>>> Why not allow -I as a short option for --custom-initialize?
>>
>>
>> Other options with a similar purpose, such as --foreign-keys, also have
>> only a long option. Since I think the --custom-initialize option is in
>> the same context as those options, I didn't add a short option for it.
>> Because option names are matched by prefix search, we can use the short
>> form --cu for now.
>
>
> Hmmm. I like short options:-)

Okay, I added -I option for custom initialization :)

>
>> I'm inclined to remove the restriction so that we can specify
>> --foreign-keys, --no-vacuum and --custom-initialize at the same time.
>
>
> Ok. I favor that as well.

If the foreign-key creation command is not specified in -I but the
--foreign-keys option is specified (e.g. pgbench -i -I tpd
--foreign-keys), I think we can add 'f' to the end of the
initialization commands.

>
>> I think a list of char would be better here than a single malloced
>> string, to make it easy to remove a particular initialization step.
>> Thoughts?
>
>
> My thought is not to bother with a list of char.
>
> To remove a char from a string, I suggest allowing ' ' to stand for
> nothing and be skipped, so that substituting a space for a letter would
> simply remove an initialization phase.

I think we should not add or remove initialization commands while
parsing pgbench options, so that the result does not depend on option
order. Therefore, if -I, --foreign-keys and --no-vacuum are specified
at the same time, we remove any 'v' commands if no-vacuum is set and
append an 'f' command if foreign-keys is set, after all options have
been parsed. Also, we expect that the initialization step string will
not be long, so a malloced string is less work than a list. OK, I
changed the patch accordingly.

Attached latest v4 patch. Please review it.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


pgbench_custom_initialization_v4.patch
Description: Binary data
