Re: [HACKERS] Hooks to track changed pages for backup purposes

2017-09-12 Thread Andrey Borodin
Hi Tomas! Thank you for looking into that patch.

> On 8 Sep 2017, at 1:53, Tomas Vondra  
> wrote:
> 
> A few more comments:
> 
> * The patch defines wal_switch_hook, but it's never called.
That call was missing; that's a bug, thanks for spotting it.

> * I see there are conditions like this:
> 
>if(xlogreader->blocks[nblock].forknum == MAIN_FORKNUM)
> 
> Why is it enough to restrict the block-tracking code to main fork?
> Aren't we interested in all relation forks?
fsm, vm and the other forks are small enough that we can just take them whole.

> I guess you'll have to explain
> what the implementation of the hooks is supposed to do, and why these
> locations for hook calls are the right ones. It's damn impossible to
> validate the patch without that information.
> 
> Assuming you still plan to use the hook approach ...
Yes, I still think hooking is a good idea, but you are right: I need a
prototype first. I'll mark the patch as Returned with Feedback until the
prototype is implemented.

> 
>>> There
>>> are no arguments fed to this hook, so modules would not be able to
>>> analyze things in this context, except shared memory and process
>>> state?
>> 
>>> 
>>> Those hooks are put in hot code paths, and could impact performance of
>>> WAL insertion itself.
>> I do not think sending a few bytes to a cached array is comparable to the
>> disk write of an XLog record. Checking the func ptr is even cheaper with
>> correct branch prediction.
>> 
> 
> That seems somewhat suspicious, for two reasons. Firstly, I believe we
> only insert the XLOG records into WAL buffer here, so why should there
> be any disk write related? Or do you mean the final commit?
Yes, I mean that in the end we will be waiting for the disk. A hundred empty
pointer checks are negligible in comparison with disk I/O.
> 
> But more importantly, doesn't this kind of information require some
> durability guarantees? I mean, if it gets lost during server crashes or
> restarts, doesn't that mean the incremental backups might miss some
> buffers? I'd guess the hooks will have to do some sort of I/O, to
> achieve that, no?
We need durability only at the level of one segment. If we do not have the
info for a segment, we can just rescan it.
If we send the segment to S3 as one file, we can be sure of its integrity,
and this I/O can be asynchronous.

PTRACK, in its turn, flips bits in the fork's buffers, which are written by
the checkpointer and recovered during recovery, by the usual WAL replay.


> From this POV, the idea to collect this information on the backup system
> (WAL archive) by pre-processing the arriving WAL segments seems like the
> most promising. It moves the work to another system, the backup system
> can make it as durable as the WAL segments, etc.

Well, in some not-so-rare cases users encrypt backups and send them to S3,
and there is no system with CPUs that can handle that WAL parsing. Currently
I'm considering mocking up a prototype for wal-g, which works exactly this way.

Your comments were very valuable, thank you for looking into the patch and 
joining the discussion.

Best regards, Andrey Borodin.




Re: [HACKERS] Setting pd_lower in GIN metapage

2017-09-12 Thread Amit Langote
On 2017/09/13 13:05, Tom Lane wrote:
> Amit Langote  writes:
>> On 2017/09/12 23:27, Amit Kapila wrote:
>>> I think one point which might be missed is that the patch needs to
>>> modify pd_lower for all usages of metapage, not only when it is first
>>> time initialized.
> 
>> Maybe I'm missing something, but isn't the metadata size fixed and hence
>> pd_lower won't change once it's initialized?  Maybe, it's not true for all
>> index types?
> 
> No, the point is that you might be dealing with an index recently
> pg_upgraded from v10 or before, which does not have the correct
> value for pd_lower on that page.  This has to be coped with.

Ah, got it.  Thanks for the explanation.

I updated the patches so that the metapage's pd_lower is set to the
correct value just before *every* point where we are about to insert a
full page image of the metapage into WAL.  That's in addition to doing the
same in various metapage init routines, which the original patch did
already anyway.  I guess this now ensures that wal_consistency_checking
masking of these metapages as standard layout pages always works, even for
pre-v11 indexes that were upgraded.

Also, we now pass the metapage buffer as containing a page of standard
layout to XLogRegisterBuffer(), so that any hole in it is compressed when
actually writing to WAL.
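For context: on a standard-layout page, the bytes between pd_lower and pd_upper are the "hole" that a full-page image may omit. If pd_lower were left at the bare page header, the metadata bytes would fall inside the hole and be discarded, which is why pd_lower must first be set just past the metadata. A simplified standalone illustration (the two-field struct is a stand-in for the real PageHeaderData):

```c
#include <stdio.h>

#define BLCKSZ 8192

/* Reduced stand-in for PageHeaderData: only the two fields we need. */
typedef struct
{
	unsigned short pd_lower;	/* offset to start of free space */
	unsigned short pd_upper;	/* offset to end of free space */
} FakePageHeader;

/* Size of the hole that xlog.c may skip for a standard-layout page. */
static int
hole_size(const FakePageHeader *ph)
{
	if (ph->pd_lower >= sizeof(FakePageHeader) && ph->pd_upper > ph->pd_lower)
		return ph->pd_upper - ph->pd_lower;
	return 0;				/* nonstandard layout: image the whole page */
}

int
main(void)
{
	FakePageHeader ph;

	/* pd_lower left at the header end: metadata would sit in the "hole" */
	ph.pd_lower = sizeof(FakePageHeader);
	ph.pd_upper = BLCKSZ;
	printf("hole with pd_lower at header end: %d\n", hole_size(&ph));

	/* pd_lower set just past a 400-byte metadata area, as the patch does */
	ph.pd_lower = sizeof(FakePageHeader) + 400;
	printf("hole with pd_lower past metadata: %d\n", hole_size(&ph));
	return 0;
}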

Thanks,
Amit
From 607b4ab062652e7ffc0f95338c9265b09be18b56 Mon Sep 17 00:00:00 2001
From: amit 
Date: Fri, 23 Jun 2017 11:20:41 +0900
Subject: [PATCH 1/3] Set pd_lower correctly in the GIN metapage.

Also tell xlog.c to treat the metapage like a standard page, so any
hole in it is compressed.
---
 src/backend/access/gin/ginfast.c   | 22 --
 src/backend/access/gin/gininsert.c |  4 ++--
 src/backend/access/gin/ginutil.c   | 19 ++-
 src/backend/access/gin/ginxlog.c   | 24 +---
 4 files changed, 49 insertions(+), 20 deletions(-)

diff --git a/src/backend/access/gin/ginfast.c b/src/backend/access/gin/ginfast.c
index 59e435465a..d96529cf72 100644
--- a/src/backend/access/gin/ginfast.c
+++ b/src/backend/access/gin/ginfast.c
@@ -399,6 +399,15 @@ ginHeapTupleFastInsert(GinState *ginstate, GinTupleCollector *collector)
/*
 * Write metabuffer, make xlog entry
 */
+
+   /*
+    * Set pd_lower just past the end of the metadata.  This is not essential
+    * but it makes the page look compressible to xlog.c, because we pass the
+    * buffer containing this page to XLogRegisterBuffer() as a page with
+    * standard layout.
+    */
+   ((PageHeader) metapage)->pd_lower =
+           ((char *) metadata + sizeof(GinMetaPageData)) - (char *) metapage;
MarkBufferDirty(metabuffer);
 
if (needWal)
@@ -407,7 +416,7 @@ ginHeapTupleFastInsert(GinState *ginstate, GinTupleCollector *collector)
 
memcpy(, metadata, sizeof(GinMetaPageData));
 
-   XLogRegisterBuffer(0, metabuffer, REGBUF_WILL_INIT);
+   XLogRegisterBuffer(0, metabuffer, REGBUF_WILL_INIT | REGBUF_STANDARD);
XLogRegisterData((char *) , sizeof(ginxlogUpdateMeta));
 
recptr = XLogInsert(RM_GIN_ID, XLOG_GIN_UPDATE_META_PAGE);
@@ -572,6 +581,14 @@ shiftList(Relation index, Buffer metabuffer, BlockNumber newHead,
metadata->nPendingHeapTuples = 0;
}
 
+   /*
+    * Set pd_lower just past the end of the metadata.  This is not
+    * essential but it makes the page look compressible to xlog.c,
+    * because we pass the buffer containing this page to
+    * XLogRegisterBuffer() as page with standard layout.
+    */
+   ((PageHeader) metapage)->pd_lower =
+           ((char *) metadata + sizeof(GinMetaPageData)) - (char *) metapage;
MarkBufferDirty(metabuffer);
 
for (i = 0; i < data.ndeleted; i++)
@@ -586,7 +603,8 @@ shiftList(Relation index, Buffer metabuffer, BlockNumber newHead,
XLogRecPtr  recptr;
 
XLogBeginInsert();
-   XLogRegisterBuffer(0, metabuffer, REGBUF_WILL_INIT);
+   XLogRegisterBuffer(0, metabuffer,
+                      REGBUF_WILL_INIT | REGBUF_STANDARD);
    for (i = 0; i < data.ndeleted; i++)
        XLogRegisterBuffer(i + 1, buffers[i], REGBUF_WILL_INIT);
 
diff --git a/src/backend/access/gin/gininsert.c b/src/backend/access/gin/gininsert.c
index 5378011f50..c9aa4ee147 100644
--- a/src/backend/access/gin/gininsert.c
+++ b/src/backend/access/gin/gininsert.c
@@ -348,7 +348,7 @@ ginbuild(Relation heap, Relation index, IndexInfo *indexInfo)
Pagepage;
 
XLogBeginInsert();
-   XLogRegisterBuffer(0, 

Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-09-12 Thread Vaishnavi Prabakaran
On Wed, Sep 13, 2017 at 3:33 PM, Craig Ringer  wrote:

>
> I really do not like calling it "commit" as that conflates with a database
> commit.
>
> A batch can embed multiple BEGINs and COMMITs. It's entirely possible for
> an earlier part of the batch to succeed and commit, then a later part to
> fail, if that's the case. So that name is IMO wrong.
>

Ok, SendQueue seems ok to me as well. Will change it in next version.



>>> +"a"?
>>>
>>
>> Hmm, Can you explain the question please. I don't understand.
>>
>
> s/of new query/of a new query/
>
>
Thanks for explaining. Will change this too in next version.

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-09-12 Thread Craig Ringer
On 13 September 2017 at 13:06, Vaishnavi Prabakaran <
vaishnaviprabaka...@gmail.com> wrote:

>
>
> On Wed, Aug 23, 2017 at 7:40 PM, Andres Freund  wrote:
>
>>
>>
>>
>> > Am failing to see the benefit in allowing user to set
>> > PQBatchAutoFlush(true|false) property? Is it really needed?
>>
>> I'm inclined not to introduce that for now. If somebody comes up with a
>> convincing usecase and numbers, we can add it later. Libpq API is set in
>> stone, so I'd rather not introduce unnecessary stuff...
>>
>>
> Thanks for reviewing the patch and yes ok.
>
>
>>
>>
>> > +   
>> > +Much like asynchronous query mode, there is no performance
>> disadvantage to
>> > +using batching and pipelining. It increases client application
>> complexity
>> > +and extra caution is required to prevent client/server deadlocks
>> but
>> > +can sometimes offer considerable performance improvements.
>> > +   
>>
>> That's not necessarily true, is it? Unless you count always doing
>> batches of exactly size 1.
>>
>
> Client application complexity is increased in batch mode, because the
> application needs to track the query queue status. Results processing
> can be done at any time, so the application needs to know up to which
> query the results have been consumed.
>
>

Yep. Also, the client/server deadlocks at issue here are a buffer
management issue, and deadlock is probably not exactly the right word. Your
app has to process replies from the server while it's sending queries,
otherwise it can get into a state where it has no room left in its send
buffer, but the server isn't consuming its receive buffer because the
server's send buffer is full. To allow the system to make progress, the
client must read from the client receive buffer.

This isn't an issue when using libpq normally.

PgJDBC has similar issues with its batch mode, but in PgJDBC it's much
worse because there's no non-blocking send available. In libpq you can at
least set your sending socket to non-blocking.



>
> > +   
>> > +Use batches when your application does lots of small
>> > +INSERT, UPDATE and
>> > +DELETE operations that can't easily be
>> transformed into
>> > +operations on sets or into a
>> > +COPY
>> operation.
>> > +   
>>
>> Aren't SELECTs also a major beneficiary of this?
>>
>
Yes, many individual SELECTs that cannot be assembled into a single more
efficient query would definitely also benefit.


> Hmm, though SELECTs also benefit from batch mode, doing multiple selects
> in batch mode will fill up the memory rapidly and might not be as
> beneficial as other operations listed.
>

Depends on the SELECT. With wide results you'll get less benefit, but even
then you can gain if you're on a high latency network. With "n+1" patterns
and similar, you'll see huge gains.


> Maybe note that multiple batches can be "in flight"?
>> I.e. PQbatchSyncQueue() is about error handling, nothing else? Don't
>> have a great idea, but we might want to rename...
>>
>>
> This function not only does error handling, but also sends the "Sync"
> message to the backend. In batch mode, the "Sync" message is not sent with
> every query but only via this function, to mark the end of the implicit
> transaction. Renamed it to PQbatchCommitQueue. Kindly let me know if you
> think of any other better name.
>

I really do not like calling it "commit" as that conflates with a database
commit.

A batch can embed multiple BEGINs and COMMITs. It's entirely possible for
an earlier part of the batch to succeed and commit, then a later part to
fail, if that's the case. So that name is IMO wrong.


>>
>> > +
>> > + 
>> > +  PQbatchSyncQueue
>> > +  
>> > +   PQbatchSyncQueue
>> > +  
>> > + 
>>
>> I wonder why this isn't framed as PQbatchIssue/Send/...()? Syncing seems
>> to mostly make sense from a protocol POV.
>>
>>
> Renamed to PQbatchCommitQueue.
>
>
Per above, strong -1 on that. But SendQueue seems OK, or FlushQueue?


>
>> > + *   Put an idle connection in batch mode. Commands submitted after
>> this
>> > + *   can be pipelined on the connection, there's no requirement to
>> wait for
>> > + *   one to finish before the next is dispatched.
>> > + *
>> > + *   Queuing of new query or syncing during COPY is not allowed.
>>
>> +"a"?
>>
>
> Hmm, Can you explain the question please. I don't understand.
>

s/of new query/of a new query/


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-09-12 Thread Vaishnavi Prabakaran
On Wed, Aug 23, 2017 at 7:40 PM, Andres Freund  wrote:

>
>
>
> > Am failing to see the benefit in allowing user to set
> > PQBatchAutoFlush(true|false) property? Is it really needed?
>
> I'm inclined not to introduce that for now. If somebody comes up with a
> convincing usecase and numbers, we can add it later. Libpq API is set in
> stone, so I'd rather not introduce unnecessary stuff...
>
>
Thanks for reviewing the patch and yes ok.


>
>
> > +   
> > +Much like asynchronous query mode, there is no performance
> disadvantage to
> > +using batching and pipelining. It increases client application
> complexity
> > +and extra caution is required to prevent client/server deadlocks but
> > +can sometimes offer considerable performance improvements.
> > +   
>
> That's not necessarily true, is it? Unless you count always doing
> batches of exactly size 1.
>

Client application complexity is increased in batch mode, because the
application needs to track the query queue status. Results processing
can be done at any time, so the application needs to know up to which
query the results have been consumed.


> +   
> > +Use batches when your application does lots of small
> > +INSERT, UPDATE and
> > +DELETE operations that can't easily be
> transformed into
> > +operations on sets or into a
> > +COPY
> operation.
> > +   
>
> Aren't SELECTs also a major beneficiary of this?
>


Hmm, though SELECTs also benefit from batch mode, doing multiple selects in
batch mode will fill up the memory rapidly and might not be as beneficial
as other operations listed.



> > +   
> > +Batching is less useful when information from one operation is
> required by the
> > +client before it knows enough to send the next operation.
>
> s/less/not/
>
>
Corrected.


>
> > +   
> > +
> > + The batch API was introduced in PostgreSQL 10.0, but clients using
> PostgresSQL 10.0 version of libpq can
> > + use batches on server versions 8.4 and newer. Batching works on
> any server
> > + that supports the v3 extended query protocol.
> > +
> > +   
>
> Where's the 8.4 coming from?
>
>

I guess it is 7.4, where "PQsendQueryParams" was introduced, and not 8.4.
Corrected.


> +   
> > +
> > + It is best to use batch mode with libpq
> in
> > + non-blocking mode.
> If used in
> > + blocking mode it is possible for a client/server deadlock to
> occur. The
> > + client will block trying to send queries to the server, but the
> server will
> > + block trying to send results from queries it has already processed
> to the
> > + client. This only occurs when the client sends enough queries to
> fill its
> > + output buffer and the server's receive buffer before switching to
> > + processing input from the server, but it's hard to predict exactly
> when
> > + that'll happen so it's best to always use non-blocking mode.
> > +
> > +   
>
> Mention that nonblocking only actually helps if send/recv is done as
> required, and can essentially require unbound memory?  We probably
> should either document or implement some smarts about when to signal
> read/write readyness. Otherwise we e.g. might be receiving tons of
> result data without having sent the next query - or the other way round.
>
>

Added a cautionary note to the documentation. Again, this is one of the
reasons why SELECT queries are not as beneficial in batch mode.



> Maybe note that multiple batches can be "in flight"?
> I.e. PQbatchSyncQueue() is about error handling, nothing else? Don't
> have a great idea, but we might want to rename...
>
>
This function not only does error handling, but also sends the "Sync"
message to the backend. In batch mode, the "Sync" message is not sent with
every query but only via this function, to mark the end of the implicit
transaction. Renamed it to PQbatchCommitQueue. Kindly let me know if you
think of any other better name.



> > +
> > + 
> > +  The client must not assume that work is committed when it
> > +  sends a COMMIT, only when
> the
> > +  corresponding result is received to confirm the commit is
> complete.
> > +  Because errors arrive asynchronously the application needs to be
> able to
> > +  restart from the last received committed
> change and
> > +  resend work done after that point if something goes wrong.
> > + 
> > +
>
> This seems fairly independent of batching.
>
>
Yes, and the reason why it is explicitly specified for batch mode is that if
more than one explicit transaction is used in a single batch, failure of one
transaction will lead to skipping the subsequent transactions until the end
of the current batch is reached. This behavior is specific to batch mode, so
I think a precautionary note is needed here.



> > +   
> > +
> > +   
> > +Interleaving result processing and query dispatch
> > +
> > +
> > + To avoid deadlocks on large batches the client should be
> 

Re: [HACKERS] pg_rewind proposed scope and interface changes

2017-09-12 Thread Michael Paquier
On Tue, Sep 12, 2017 at 11:52 PM, Chris Travers
 wrote:
> Additionally the wal, xact, timestamp and logical directories must be
> processed in some way.

What does the term "logical directories" refer to?

>   * if --wal=sync the directories are processed the way they are today
>   * if --wal=clear then the contents of the directories are cleared and
> replication is assumed to be used to bring the system up after.  Note this
> will need to come with warning about the need for replication slots.

Hm. I am not sure how --wal=clear is helpful. Keeping around WAL
segments from the point of the last checkpoint before WAL forked up to
the point where WAL has forked is helpful, because you don't need to
copy those WAL segments again, whether they come from an archive or from
streaming. Copying a set of WAL segments during the rewind of the new
timeline is helpful as well because you don't need to do the copy
again. One configuration where this helps is when an archive local to
the target server already has the segments of the new timeline
available.

> Base, global, pg_tablespace
>
> With
> pg_wal, pg_xact, pg_commit_ts, pg_logical added if wal strategy is set to
> sync.

Skipping some directories in a way similar to what a base backup does
would be nicer I think. We already have a list of those in
basebackup.c in the shape of excludeDirContents and excludeFiles. I
think that it would be a good idea to export those into a header that
pg_rewind could include, and refer to in order to exclude them when
fetching a set of files. At the end of the day, a rewind is a kind of
base backup in itself, and this patch would already serve a lot of
people well.

Having on top of that a way to exclude a wanted set of files and the
log directory (for example: should we look at log_directory and
exclude it from the fetched paths if it is not an absolute path?),
which is smart enough to take care of not removing paths critical for
a rewind like anything in base/, then you are good to go with a
full-blown tool that I think would serve the purposes you are looking
for.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-09-12 Thread Michael Paquier
On Wed, Sep 13, 2017 at 1:13 PM, Kyotaro HORIGUCHI
 wrote:
> This patch creates a new memory context "Vacuum" under
> PortalContext in vacuum.c, but AFAICS the current context there
> is PortalHeapMemory, which has the same expected lifetime as
> the new context (that is, a child of PortalContext and dropped in
> PortalDrop). On the other hand, PortalMemory's lifetime is not
> PortalStart to PortalDrop but the backend lifetime (initialized in
> InitPostgres).

Which patch are you looking at? This introduces no new memory context,
be it in 0001 or 0002 in their latest versions. I don't recall seeing
that pattern during the successive reviews either.
-- 
Michael




Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-09-12 Thread Kyotaro HORIGUCHI
Hello, I began to look at this. (But it seems almost ready for committer.)

At Wed, 13 Sep 2017 11:47:11 +0900, Michael Paquier  
wrote in 
> On Wed, Sep 13, 2017 at 12:31 AM, Bossart, Nathan  wrote:
> > Sorry for the spam.  I am re-sending these patches with modified names so 
> > that
> > the apply order is obvious to the new automated testing framework (and to
> > everybody else).
> 
> - * relid, if not InvalidOid, indicate the relation to process; otherwise,
> - * the RangeVar is used.  (The latter must always be passed, because it's
> - * used for error messages.)
> [...]
> +typedef struct VacuumRelation
> +{
> +   NodeTag  type;
> +   RangeVar*relation;  /* single table to process */
> +   List*va_cols;   /* list of column names, or NIL for all */
> +   Oid  oid;   /* corresponding OID (filled in by [auto]vacuum.c) */
> +} VacuumRelation;
> We lose a bit of information here. I think that it would be good to
> mention in the declaration of VacuumRelation that the RangeVar is used
> for error processing, and needs to be filled. I have complained about
> that upthread already, perhaps this has slipped away when rebasing.
> 
> +   int i = attnameAttNum(rel, col, false);
> +
> +   if (i != InvalidAttrNumber)
> +   continue;
> Nit: allocating "i" makes little sense here. You are not using it for
> any other checks.
> 
>  /*
> - * Build a list of Oids for each relation to be processed
> + * Determine the OID for each relation to be processed
>   *
>   * The list is built in vac_context so that it will survive across our
>   * per-relation transactions.
>   */
> -static List *
> -get_rel_oids(Oid relid, const RangeVar *vacrel)
> +static void
> +get_rel_oids(List **vacrels)
> Yeah, that's not completely correct either. This would be more like
> "Fill in the list of VacuumRelation entries with their corresponding
> OIDs, adding extra entries for partitioned tables".
> 
> Those are minor points. The patch seems to be in good shape, and
> passes all my tests, including some pgbench'ing to make sure that
> nothing goes weird. So I'll be happy to finally switch both patches to
> "ready for committer" once those minor points are addressed.

May I ask one question?

This patch creates a new memory context "Vacuum" under
PortalContext in vacuum.c, but AFAICS the current context there
is PortalHeapMemory, which has the same expected lifetime as
the new context (that is, a child of PortalContext and dropped in
PortalDrop). On the other hand, PortalMemory's lifetime is not
PortalStart to PortalDrop but the backend lifetime (initialized in
InitPostgres).

>  /*
>   * Create special memory context for cross-transaction storage.
>   *
>   * Since it is a child of PortalContext, it will go away eventually even
>   * if we suffer an error; there's no need for special abort cleanup logic.
>   */
>  vac_context = AllocSetContextCreate(PortalContext,
>"Vacuum",
>ALLOCSET_DEFAULT_SIZES);

So this seems to work opposite to the expectation. Am I
missing something?

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: [HACKERS] [PATCH] Call RelationDropStorage() for broader range of object drops.

2017-09-12 Thread Michael Paquier
On Wed, Sep 13, 2017 at 2:40 AM, Hadi Moshayedi  wrote:
> Motivation for this patch is that some FDWs (notably, cstore_fdw) try
> utilizing PostgreSQL internal storage. PostgreSQL assigns relfilenode's to
> foreign tables, but doesn't clean up storage for foreign tables when
> dropping tables. Therefore, in cstore_fdw we have to do some tricks to
> handle dropping objects that lead to dropping of cstore table properly.

Foreign tables do not have physical storage assigned by default. At
least heap_create() tells so, create_storage being set to false for a
foreign table. So there is nothing to clean up normally. Or is
cstore_fdw using heap_create directly with its own relfilenode set,
creating physical storage?

> So I am suggesting to change the check at heap_drop_with_catalog() at
> src/backend/catalog/heap.c:
>
> -if (rel->rd_rel->relkind != RELKIND_VIEW &&
> -rel->rd_rel->relkind != RELKIND_COMPOSITE_TYPE &&
> -rel->rd_rel->relkind != RELKIND_FOREIGN_TABLE &&
> -rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
> +if (OidIsValid(rel->rd_node.relNode))
>  {
>  RelationDropStorage(rel);
>  }
>
> Any feedback on this?

I agree that there is an inconsistency here if a module calls
heap_create() with an enforced relfilenode.
-- 
Michael




Re: [HACKERS] Race between SELECT and ALTER TABLE NO INHERIT

2017-09-12 Thread Amit Langote
On 2017/09/13 12:05, Simon Riggs wrote:
> On 26 June 2017 at 10:16, Amit Langote  wrote:
> 
>> BTW, in the partitioned table case, the parent is always locked first
>> using an AccessExclusiveLock.  There are other considerations in that case
>> such as needing to recreate the partition descriptor upon termination of
>> inheritance (both the DETACH PARTITION and also DROP TABLE child cases).
> 
> Is this requirement documented or in comments anywhere?

Yes.  See the last sentence in the description of PARTITION OF clause in
CREATE TABLE:

https://www.postgresql.org/docs/devel/static/sql-createtable.html#sql-createtable-partition

And, the 4th point in the list of differences between declarative
partitioning and inheritance:

https://www.postgresql.org/docs/devel/static/ddl-partitioning.html#ddl-partitioning-implementation-inheritance

Thanks,
Amit





Re: [HACKERS] why not parallel seq scan for slow functions

2017-09-12 Thread Amit Kapila
On Tue, Sep 12, 2017 at 5:47 PM, Amit Khandekar  wrote:
> On 5 September 2017 at 14:04, Amit Kapila  wrote:
>
> I started with a quick review ... a couple of comments below :
>
> - * If this is a baserel, consider gathering any partial paths we may have
> - * created for it.  (If we tried to gather inheritance children, we could
> + * If this is a baserel and not the only rel, consider gathering any
> + * partial paths we may have created for it.  (If we tried to gather
>
>   /* Create GatherPaths for any useful partial paths for rel */
> -  generate_gather_paths(root, rel);
> +  if (lev < levels_needed)
> + generate_gather_paths(root, rel, NULL);
>
> I think at the above two places, and may be in other place also, it's
> better to mention the reason why we should generate the gather path
> only if it's not the only rel.
>

I think the comment you are looking for is present where we call
generate_gather_paths in grouping_planner. Instead of adding the same or
a similar comment in multiple places, how about if we just say something
like "See grouping_planner, where we generate gather paths" at all the
other places?

> --
>
> -   if (rel->reloptkind == RELOPT_BASEREL)
> -   generate_gather_paths(root, rel);
> +   if (rel->reloptkind == RELOPT_BASEREL &&
> root->simple_rel_array_size > 2)
> +   generate_gather_paths(root, rel, NULL);
>
> Above, in case it's a partitioned table, root->simple_rel_array_size
> includes the child rels. So even if it's a simple select without a
> join rel, simple_rel_array_size would be > 2, and so gather path would
> be generated here for the root table, and again in grouping_planner().
>

Yeah, that could be a problem.  I think we should ensure that there is
no append rel list by checking root->append_rel_list.  Can you think
of a better way to handle it?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] DROP SUBSCRIPTION hangs if sub is disabled in the same transaction

2017-09-12 Thread Masahiko Sawada
On Wed, Sep 13, 2017 at 12:48 AM, Arseny Sher  wrote:
> Masahiko Sawada  writes:
>
>> FWIW, perhaps we can change the replication origin management so that
>> DROP SUBSCRIPTION doesn't drop the replication origin and the apply
>> worker itself removes it when exit. When an apply worker exits it
>> removes the replication origin if the corresponding subscription had
>> been removed.
>

After some thought, I think we can change it like the following:
* If the replication origin is not acquired, DROP SUBSCRIPTION can drop it.
* If the replication origin is acquired by someone, DROP SUBSCRIPTION hands
over the job of dropping it to the apply worker.
* The apply worker drops the replication origin on exit if it has been
asked to drop it.

> I don't think this is reliable -- what if worker suddenly dies without
> accomplishing the job?

The apply worker will be launched by the launcher later. If DROP
SUBSCRIPTION is issued before the apply worker launches again, DROP
SUBSCRIPTION itself can remove the replication origin.

Attached is a very rough patch for reference. It's very ugly, but it deals
with this case.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 2ef414e..9ed773e 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -940,7 +940,7 @@ DropSubscription(DropSubscriptionStmt *stmt, bool isTopLevel)
 	snprintf(originname, sizeof(originname), "pg_%u", subid);
 	originid = replorigin_by_name(originname, true);
 	if (originid != InvalidRepOriginId)
-		replorigin_drop(originid, false);
+		replorigin_drop(originid, true, true, true);
 
 	/*
 	 * If there is no slot associated with the subscription, we can finish
diff --git a/src/backend/replication/logical/origin.c b/src/backend/replication/logical/origin.c
index edc6efb..05423a7 100644
--- a/src/backend/replication/logical/origin.c
+++ b/src/backend/replication/logical/origin.c
@@ -129,6 +129,8 @@ typedef struct ReplicationState
 	 */
 	ConditionVariable origin_cv;
 
+	bool	drop_by_worker;
+
 	/*
 	 * Lock protecting remote_lsn and local_lsn.
 	 */
@@ -329,7 +331,7 @@ replorigin_create(char *roname)
  * Needs to be called in a transaction.
  */
 void
-replorigin_drop(RepOriginId roident, bool nowait)
+replorigin_drop(RepOriginId roident, bool nowait, bool need_lock, bool takeover)
 {
 	HeapTuple	tuple;
 	Relation	rel;
@@ -342,7 +344,8 @@ replorigin_drop(RepOriginId roident, bool nowait)
 restart:
 	tuple = NULL;
 	/* cleanup the slot state info */
-	LWLockAcquire(ReplicationOriginLock, LW_EXCLUSIVE);
+	if (need_lock)
+		LWLockAcquire(ReplicationOriginLock, LW_EXCLUSIVE);
 
 	for (i = 0; i < max_replication_slots; i++)
 	{
@@ -355,6 +358,22 @@ restart:
 			{
 ConditionVariable *cv;
 
+if (takeover)
+{
+	ereport(WARNING,
+			(errcode(ERRCODE_OBJECT_IN_USE),
+			 errmsg("could not drop replication origin with OID %d, in use by PID %d, takeover",
+	state->roident,
+	state->acquired_by)));
+	state->drop_by_worker = true;
+	if (need_lock)
+		LWLockRelease(ReplicationOriginLock);
+
+	/* now release lock again */
+	heap_close(rel, ExclusiveLock);
+	return;
+}
+
 if (nowait)
 	ereport(ERROR,
 			(errcode(ERRCODE_OBJECT_IN_USE),
@@ -363,7 +382,8 @@ restart:
 	state->acquired_by)));
cv = &state->origin_cv;
 
-LWLockRelease(ReplicationOriginLock);
+if (need_lock)
+	LWLockRelease(ReplicationOriginLock);
 ConditionVariablePrepareToSleep(cv);
 ConditionVariableSleep(cv, WAIT_EVENT_REPLICATION_ORIGIN_DROP);
 ConditionVariableCancelSleep();
@@ -384,10 +404,12 @@ restart:
 			state->roident = InvalidRepOriginId;
 			state->remote_lsn = InvalidXLogRecPtr;
 			state->local_lsn = InvalidXLogRecPtr;
+			state->drop_by_worker = false;
 			break;
 		}
 	}
-	LWLockRelease(ReplicationOriginLock);
+	if (need_lock)
+		LWLockRelease(ReplicationOriginLock);
 
 	tuple = SearchSysCache1(REPLORIGIDENT, ObjectIdGetDatum(roident));
 	if (!HeapTupleIsValid(tuple))
@@ -785,6 +807,7 @@ replorigin_redo(XLogReaderState *record)
 		state->roident = InvalidRepOriginId;
 		state->remote_lsn = InvalidXLogRecPtr;
 		state->local_lsn = InvalidXLogRecPtr;
+		state->drop_by_worker = false;
 		break;
 	}
 }
@@ -987,6 +1010,15 @@ ReplicationOriginExitCleanup(int code, Datum arg)
 		cv = &session_replication_state->origin_cv;
 
 		session_replication_state->acquired_by = 0;
+
+		if (session_replication_state->drop_by_worker)
+		{
+			replorigin_session_origin = InvalidRepOriginId;
+			StartTransactionCommand();
+			replorigin_drop(session_replication_state->roident, false, false, false);
+			CommitTransactionCommand();
+		}
+
 		session_replication_state = NULL;
 	}
 
@@ -1075,6 +1107,7 @@ replorigin_session_setup(RepOriginId node)
 		

Re: [HACKERS] Setting pd_lower in GIN metapage

2017-09-12 Thread Tom Lane
Amit Langote  writes:
> On 2017/09/12 23:27, Amit Kapila wrote:
>> I think one point which might be missed is that the patch needs to
>> modify pd_lower for all usages of metapage, not only when it is first
>> time initialized.

> Maybe I'm missing something, but isn't the metadata size fixed and hence
> pd_lower won't change once it's initialized?  Maybe, it's not true for all
> index types?

No, the point is that you might be dealing with an index recently
pg_upgraded from v10 or before, which does not have the correct
value for pd_lower on that page.  This has to be coped with.
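To illustrate the point: any code reading the metapage has to treat pd_lower
as possibly unset. A rough model of the check (Python, with hypothetical
names; a 24-byte page header and 8192-byte pages are assumed, not taken from
the patches):

```python
# Toy model: pd_lower on a pg_upgraded metapage may still be the bare page
# header offset, so it is trustworthy only if some prior write set it past
# the metadata.
SIZE_OF_PAGE_HEADER_DATA = 24   # assumed SizeOfPageHeaderData

def pd_lower_is_usable(pd_lower, meta_size):
    return pd_lower >= SIZE_OF_PAGE_HEADER_DATA + meta_size

def mask_range(pd_lower, meta_size, page_size=8192):
    """Return the (start, end) byte range that may be masked as a hole."""
    if pd_lower_is_usable(pd_lower, meta_size):
        return (pd_lower, page_size)    # hole starts at pd_lower
    return (page_size, page_size)       # don't trust pd_lower: no hole
```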

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plpgsql - additional extra checks

2017-09-12 Thread Pavel Stehule
2017-09-13 1:42 GMT+02:00 Daniel Gustafsson :

> > On 08 Apr 2017, at 15:46, David Steele  wrote:
> >
> >> On 1/13/17 6:55 AM, Marko Tiikkaja wrote:
> >>> On Fri, Jan 13, 2017 at 2:46 AM, Jim Nasby wrote:
> >>>
> >>>On 1/11/17 5:54 AM, Pavel Stehule wrote:
> >>>
> >>>+too_many_rows
> >>>+
> >>>+ 
> >>>+  When result is assigned to a variable by
> >>>INTO clause,
> >>>+  checks if query returns more than one row. In this case
> >>>the assignment
> >>>+  is not deterministic usually - and it can be signal some
> >>>issues in design.
> >>>
> >>>
> >>>Shouldn't this also apply to
> >>>
> >>>var := blah FROM some_table WHERE ...;
> >>>
> >>>?
> >>>
> >>>AIUI that's one of the beefs the plpgsql2 project has.
> >>>
> >>>
> >>> No, not at all.  That syntax is undocumented and only works because
> >>> PL/PgSQL is a hack internally.  We don't use it, and frankly I don't
> >>> think anyone should.
> >
> > This submission has been moved to CF 2017-07.
>
> This patch was automatically marked as “Waiting for author” since it needs
> to
> be updated with the macro changes in 2cd70845240087da205695baedab64
> 12342d1dbe
> to compile.  Changing to use TupleDescAttr() makes it compile again.
> Can
> you submit an updated version with that fix Pavel?
>

I am sending the fixed patch.

Regards

Pavel

>
> Stephen, you signed up to review this patch in the previous Commitfest, do
> you
> still intend to work on this?
>
> cheers ./daniel
diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
index 6dc438a152..7de0b8005a 100644
--- a/doc/src/sgml/plpgsql.sgml
+++ b/doc/src/sgml/plpgsql.sgml
@@ -4862,7 +4862,7 @@ a_output := a_output || $$ if v_$$ || referrer_keys.kind || $$ like '$$
 
   
   
-   Additional Compile-time Checks
+   Additional Compile-time and Run-time Checks
 

 To aid the user in finding instances of simple but common problems before
@@ -4874,6 +4874,11 @@ a_output := a_output || $$ if v_$$ || referrer_keys.kind || $$ like '$$
 so you are advised to test in a separate development environment.

 
+   
+Setting plpgsql.extra_warnings to all is a
+good idea in developer or test environments.
+   
+
  
   These additional checks are enabled through the configuration variables
   plpgsql.extra_warnings for warnings and
@@ -4890,6 +4895,30 @@ a_output := a_output || $$ if v_$$ || referrer_keys.kind || $$ like '$$
  
 

+
+   
+strict_multi_assignment
+
+ 
+  Some PL/PgSQL commands allow assigning values to
+  more than one variable. The number of target variables does not have to
+  equal the number of source values. Missing values are replaced by NULL,
+  and extra values are ignored. Often this situation signals an error in
+  the code.
+ 
+
+   
+
+   
+too_many_rows
+
+ 
+  When a result is assigned to a variable by the INTO clause,
+  this checks whether the query returns more than one row. In that case
+  the assignment is usually not deterministic and can signal design issues.
+ 
+
+   
   
 
   The following example shows the effect of plpgsql.extra_warnings
@@ -4909,6 +4938,34 @@ LINE 3: f1 int;
 ^
 CREATE FUNCTION
 
+
+  Another example shows the effect of plpgsql.extra_warnings
+  set to strict_multi_assignment:
+
+SET plpgsql.extra_warnings TO 'strict_multi_assignment';
+
+CREATE OR REPLACE FUNCTION public.foo()
+ RETURNS void
+ LANGUAGE plpgsql
+AS $$
+DECLARE
+  x int;
+  y int;
+BEGIN
+  SELECT 1 INTO x, y;
+  SELECT 1, 2 INTO x, y;
+  SELECT 1, 2, 3 INTO x, y;
+END;
+$$
+
+SELECT foo();
+WARNING:  Number of evaluated attributies (1) does not match expected attributies (2)
+WARNING:  Number of evaluated attributies (3) does not match expected attributies (2)
+ foo 
+-
+ 
+(1 row)
+
  
  
  
diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
index 9716697259..c14fdc0233 100644
--- a/src/pl/plpgsql/src/pl_exec.c
+++ b/src/pl/plpgsql/src/pl_exec.c
@@ -3623,6 +3623,24 @@ exec_stmt_execsql(PLpgSQL_execstate *estate,
 	long		tcount;
 	int			rc;
 	PLpgSQL_expr *expr = stmt->sqlstmt;
+	bool		too_many_rows_check;
+	int			too_many_rows_level;
+
+	if (plpgsql_extra_errors & PLPGSQL_XCHECK_TOOMANYROWS)
+	{
+		too_many_rows_check = true;
+		too_many_rows_level = ERROR;
+	}
+	else if (plpgsql_extra_warnings & PLPGSQL_XCHECK_TOOMANYROWS)
+	{
+		too_many_rows_check = true;
+		too_many_rows_level = WARNING;
+	}
+	else
+	{
+		too_many_rows_check = false;
+		too_many_rows_level = NOTICE;
+	}
 
 	/*
 	 * On the first call for this statement generate the plan, and detect
@@ -3672,7 +3690,7 @@ exec_stmt_execsql(PLpgSQL_execstate *estate,
 	 */
 	if (stmt->into)
 	{
-		if (stmt->strict || stmt->mod_stmt)
+		if (stmt->strict || stmt->mod_stmt || 

Re: [HACKERS] psql: new help related to variables are not too readable

2017-09-12 Thread Pavel Stehule
2017-09-09 1:30 GMT+02:00 Alvaro Herrera :

> Tomas Vondra wrote:
>
> > > Finally, as vertical scrolling is mandatory, I would be fine with
> > > skipping lines with entries for readability, but it is just a matter of
> > > taste and I expect there should be half a dozen different opinions on
> > > the matter of formatting.
> >
> > FWIW, +1 to extra lines from me - I find it way more readable, as it
> > clearly separates the items.
>
> +1
>

I'll assign this patch to the next commitfest.


>
> --
> Álvaro Herrera                https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-09-12 Thread Bruce Momjian
On Tue, Sep 12, 2017 at 07:54:15PM -0400, Stephen Frost wrote:
> Andreas,
> 
> * Andreas Joseph Krogh (andr...@visena.com) wrote:
> > I have to ask; Why not run pg_upgrade on standby, after verifying that it's 
> > in 
> > sync with primary and promoting it to primary if necessary and then making 
> > it 
> > standby again after pg_upgrade is finished?
> 
> I don't think that we could be guaranteed that the catalog tables would
> be the same on the replica as on the primary if they were actually
> created by pg_upgrade.

FYI, the other problem is that standby can't go into write mode or it
would diverge from the primary.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] WAL logging problem in 9.4.3?

2017-09-12 Thread Thomas Munro
On Wed, Sep 13, 2017 at 1:04 PM, Kyotaro HORIGUCHI
 wrote:
> The CF status of this patch turned into "Waiting on Author" by
> automated CI checking. However, I still don't get any error even
> on the current master (69835bc) after make distclean. Also I
> don't see any difference between the "problematic" patch and my
> working branch other than shifted line numbers due to patching.
> (So I haven't posted a new one.)
>
> I looked on the location heapam.c:2502 where the CI complains at
> in my working branch and I found a different code with the
> complaint.
>
> https://travis-ci.org/postgresql-cfbot/postgresql/builds/27450
>
> 1363 heapam.c:2502:18: error: ‘HEAP_INSERT_SKIP_WAL’ undeclared (first use in 
> this function)
> 1364   if (!(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation))
>
> heapam.c:2502@work branch
> 2502:   /* XLOG stuff */
> 2503:   if (BufferNeedsWAL(relation, buffer))
>
> So I conclude that the CI machinery failed to apply the patch
> correctly.

Hi Horiguchi-san,

Hmm.  Here is that line in heapam.c in unpatched master:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/heapam.c;h=d20f0381f3bc23f99c505ef8609d63240ac5d44b;hb=HEAD#l2485

It says:

2485 if (!(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation))

After applying fix-wal-level-minimal-michael-horiguchi-3.patch from
this message:

https://www.postgresql.org/message-id/20170912.131441.20602611.horiguchi.kyotaro%40lab.ntt.co.jp

... that line is unchanged, although it has moved to line number 2502.
It doesn't compile for me, because your patch removed the definition
of HEAP_INSERT_SKIP_WAL but hasn't removed that reference to it.

I'm not sure what happened.  Is it possible that your patch was not
created by diffing against master?

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Race between SELECT and ALTER TABLE NO INHERIT

2017-09-12 Thread Simon Riggs
On 26 June 2017 at 10:16, Amit Langote  wrote:

> BTW, in the partitioned table case, the parent is always locked first
> using an AccessExclusiveLock.  There are other considerations in that case
> such as needing to recreate the partition descriptor upon termination of
> inheritance (both the DETACH PARTITION and also DROP TABLE child cases).

Is this requirement documented or in comments anywhere?

I can't see anything about that, which is a fairly major usage point.

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Setting pd_lower in GIN metapage

2017-09-12 Thread Amit Langote
Thanks for the review.

On 2017/09/12 23:27, Amit Kapila wrote:
> On Tue, Sep 12, 2017 at 3:51 PM, Amit Langote wrote:
>> I updated the patches for GIN, BRIN, and SP-GiST to include the following
>> changes:
>>
>> 1. Pass REGBUF_STANDARD flag when registering the metapage buffer
>>
> 
> I have looked into brin patch and it seems you have not considered all
> usages of meta page.  The structure BrinRevmap also contains a
> reference to meta page buffer and when that is modified (ex. in
> revmap_physical_extend), then also I think you need to consider using
> REGBUF_STANDARD flag.

Fixed.

>> Did I miss something from the discussion?
>>
> 
> I think one point which might be missed is that the patch needs to
> modify pd_lower for all usages of metapage, not only when it is first
> time initialized.

Maybe I'm missing something, but isn't the metadata size fixed and hence
pd_lower won't change once it's initialized?  Maybe, it's not true for all
index types?

Thanks,
Amit
From c73183871632b368e2662ff5a35bfb6b3eaaade1 Mon Sep 17 00:00:00 2001
From: amit 
Date: Fri, 23 Jun 2017 11:20:41 +0900
Subject: [PATCH 1/3] Set pd_lower correctly in the GIN metapage.

Also tell xlog.c to treat the metapage like a standard page, so any
hole in it is compressed.
---
 src/backend/access/gin/gininsert.c |  4 ++--
 src/backend/access/gin/ginutil.c   |  9 +
 src/backend/access/gin/ginxlog.c   | 24 +---
 3 files changed, 20 insertions(+), 17 deletions(-)

diff --git a/src/backend/access/gin/gininsert.c 
b/src/backend/access/gin/gininsert.c
index 5378011f50..c9aa4ee147 100644
--- a/src/backend/access/gin/gininsert.c
+++ b/src/backend/access/gin/gininsert.c
@@ -348,7 +348,7 @@ ginbuild(Relation heap, Relation index, IndexInfo 
*indexInfo)
Pagepage;
 
XLogBeginInsert();
-   XLogRegisterBuffer(0, MetaBuffer, REGBUF_WILL_INIT);
+   XLogRegisterBuffer(0, MetaBuffer, REGBUF_WILL_INIT | 
REGBUF_STANDARD);
XLogRegisterBuffer(1, RootBuffer, REGBUF_WILL_INIT);
 
recptr = XLogInsert(RM_GIN_ID, XLOG_GIN_CREATE_INDEX);
@@ -447,7 +447,7 @@ ginbuildempty(Relation index)
START_CRIT_SECTION();
GinInitMetabuffer(MetaBuffer);
MarkBufferDirty(MetaBuffer);
-   log_newpage_buffer(MetaBuffer, false);
+   log_newpage_buffer(MetaBuffer, true);
GinInitBuffer(RootBuffer, GIN_LEAF);
MarkBufferDirty(RootBuffer);
log_newpage_buffer(RootBuffer, false);
diff --git a/src/backend/access/gin/ginutil.c b/src/backend/access/gin/ginutil.c
index 136ea27718..e926649dd2 100644
--- a/src/backend/access/gin/ginutil.c
+++ b/src/backend/access/gin/ginutil.c
@@ -374,6 +374,15 @@ GinInitMetabuffer(Buffer b)
metadata->nDataPages = 0;
metadata->nEntries = 0;
metadata->ginVersion = GIN_CURRENT_VERSION;
+
+   /*
+* Set pd_lower just past the end of the metadata.  This is not 
essential
+* but it makes the page look compressible to xlog.c, as long as the
+* buffer containing the page is passed to XLogRegisterBuffer() as a
+* REGBUF_STANDARD page.
+*/
+   ((PageHeader) page)->pd_lower =
+   ((char *) metadata + sizeof(GinMetaPageData)) - (char 
*) page;
 }
 
 /*
diff --git a/src/backend/access/gin/ginxlog.c b/src/backend/access/gin/ginxlog.c
index 7ba04e324f..f5c11b2d9a 100644
--- a/src/backend/access/gin/ginxlog.c
+++ b/src/backend/access/gin/ginxlog.c
@@ -514,7 +514,7 @@ ginRedoUpdateMetapage(XLogReaderState *record)
Assert(BufferGetBlockNumber(metabuffer) == GIN_METAPAGE_BLKNO);
metapage = BufferGetPage(metabuffer);
 
-   GinInitPage(metapage, GIN_META, BufferGetPageSize(metabuffer));
+   GinInitMetabuffer(metabuffer);
	memcpy(GinPageGetMeta(metapage), &data->metadata, sizeof(GinMetaPageData));
PageSetLSN(metapage, lsn);
MarkBufferDirty(metabuffer);
@@ -656,7 +656,7 @@ ginRedoDeleteListPages(XLogReaderState *record)
Assert(BufferGetBlockNumber(metabuffer) == GIN_METAPAGE_BLKNO);
metapage = BufferGetPage(metabuffer);
 
-   GinInitPage(metapage, GIN_META, BufferGetPageSize(metabuffer));
+   GinInitMetabuffer(metabuffer);
 
	memcpy(GinPageGetMeta(metapage), &data->metadata, sizeof(GinMetaPageData));
PageSetLSN(metapage, lsn);
@@ -768,6 +768,7 @@ void
 gin_mask(char *pagedata, BlockNumber blkno)
 {
Pagepage = (Page) pagedata;
+   PageHeader  pagehdr = (PageHeader) page;
GinPageOpaque opaque;
 
mask_page_lsn(page);
@@ -776,18 +777,11 @@ gin_mask(char *pagedata, BlockNumber blkno)
mask_page_hint_bits(page);
 
/*
-* GIN metapage doesn't use pd_lower/pd_upper. Other page types do. 
Hence,
-* we need to apply masking for those pages.
+* For GIN_DELETED page, the page is initialized to empty. Hence, mask
+* the page content.

Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-09-12 Thread Michael Paquier
On Wed, Sep 13, 2017 at 12:31 AM, Bossart, Nathan  wrote:
> Sorry for the spam.  I am re-sending these patches with modified names so that
> the apply order is obvious to the new automated testing framework (and to
> everybody else).

- * relid, if not InvalidOid, indicate the relation to process; otherwise,
- * the RangeVar is used.  (The latter must always be passed, because it's
- * used for error messages.)
[...]
+typedef struct VacuumRelation
+{
+   NodeTag  type;
+   RangeVar*relation;  /* single table to process */
+   List*va_cols;   /* list of column names, or NIL for all */
+   Oid  oid;   /* corresponding OID (filled in by [auto]vacuum.c) */
+} VacuumRelation;
We lose a bit of information here. I think that it would be good to
mention in the declaration of VacuumRelation that the RangeVar is used
for error processing, and needs to be filled. I have complained about
that upthread already, perhaps this has slipped away when rebasing.

+   int i = attnameAttNum(rel, col, false);
+
+   if (i != InvalidAttrNumber)
+   continue;
Nit: allocating "i" makes little sense here. You are not using it for
any other checks.

 /*
- * Build a list of Oids for each relation to be processed
+ * Determine the OID for each relation to be processed
  *
  * The list is built in vac_context so that it will survive across our
  * per-relation transactions.
  */
-static List *
-get_rel_oids(Oid relid, const RangeVar *vacrel)
+static void
+get_rel_oids(List **vacrels)
Yeah, that's not completely correct either. This would be more like
"Fill in the list of VacuumRelation entries with their corresponding
OIDs, adding extra entries for partitioned tables".

Those are minor points. The patch seems to be in good shape, and
passes all my tests, including some pgbench'ing to make sure that
nothing goes weird. So I'll be happy to finally switch both patches to
"ready for committer" once those minor points are addressed.
-- 
Michael




Re: [HACKERS] Restricting maximum keep segments by repslots

2017-09-12 Thread Kyotaro HORIGUCHI
At Thu, 07 Sep 2017 21:59:56 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
 wrote in 
<20170907.215956.110216588.horiguchi.kyot...@lab.ntt.co.jp>
> Hello,
> 
> At Thu, 07 Sep 2017 14:12:12 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
>  wrote in 
> <20170907.141212.227032666.horiguchi.kyot...@lab.ntt.co.jp>
> > > I would like a flag in pg_replication_slots, and possibly also a
> > > numerical column that indicates how far away from the critical point
> > > each slot is.  That would be great for a monitoring system.
> > 
> > Great! I'll do that right now.
> 
> Done.

The CF status of this patch turned into "Waiting on Author".
This is because the second patch is posted separately from the
first patch. I repost them together after rebasing to the current
master.
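To make the KeepLogSeg change in the first patch easier to review, here is a
rough model of the clamping it applies (assumes 16MB segments and works in
segment numbers; `clamp_slot_keep` is a made-up name, not from the patch):

```python
# Toy model: a slot may hold back WAL only up to max_slot_wal_keep_size;
# if it falls further behind, its claim is capped ("emergency vent").
def clamp_slot_keep(cur_segno, slot_segno, max_slot_wal_keep_size_mb,
                    seg_size_mb=16):
    """Return the oldest segment number the slot is allowed to hold back."""
    limit_segs = max_slot_wal_keep_size_mb // seg_size_mb
    if max_slot_wal_keep_size_mb > 0 and slot_segno + limit_segs < cur_segno:
        return cur_segno - limit_segs   # slot exceeded the cap: ignore it
    return slot_segno                   # slot is within the cap
```

A zero setting disables the cap, matching the GUC's documented default.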

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
*** a/src/backend/access/transam/xlog.c
--- b/src/backend/access/transam/xlog.c
***
*** 105,110  int			wal_level = WAL_LEVEL_MINIMAL;
--- 105,111 
  int			CommitDelay = 0;	/* precommit delay in microseconds */
  int			CommitSiblings = 5; /* # concurrent xacts needed to sleep */
  int			wal_retrieve_retry_interval = 5000;
+ int			max_slot_wal_keep_size_mb = 0;
  
  #ifdef WAL_DEBUG
  bool		XLOG_DEBUG = false;
***
*** 9365,9373  KeepLogSeg(XLogRecPtr recptr, XLogSegNo *logSegNo)
--- 9366,9397 
  	if (max_replication_slots > 0 && keep != InvalidXLogRecPtr)
  	{
  		XLogSegNo	slotSegNo;
+ 		int			slotlimitsegs = ConvertToXSegs(max_slot_wal_keep_size_mb);
  
  		XLByteToSeg(keep, slotSegNo);
  
+ 		/*
+ 		 * ignore slots if too many wal segments are kept.
+ 		 * max_slot_wal_keep_size is just accumulated on wal_keep_segments.
+ 		 */
+ 		if (max_slot_wal_keep_size_mb > 0 && slotSegNo + slotlimitsegs < segno)
+ 		{
+ 			segno = segno - slotlimitsegs; /* must be positive */
+ 
+ 			/*
+ 			 * warn only if the checkpoint flushes the required segment.
+ 			 * we assume here that *logSegNo is calculated keep location.
+ 			 */
+ 			if (slotSegNo < *logSegNo)
+ ereport(WARNING,
+ 	(errmsg ("restart LSN of replication slots is ignored by checkpoint"),
+ 	 errdetail("Some replication slots have lost required WAL segments to continue by up to %ld segments.",
+ 	   (segno < *logSegNo ? segno : *logSegNo) - slotSegNo)));
+ 
+ 			/* emergency vent */
+ 			slotSegNo = segno;
+ 		}
+ 
  		if (slotSegNo <= 0)
  			segno = 1;
  		else if (slotSegNo < segno)
*** a/src/backend/utils/misc/guc.c
--- b/src/backend/utils/misc/guc.c
***
*** 2371,2376  static struct config_int ConfigureNamesInt[] =
--- 2371,2387 
  	},
  
  	{
+ 		{"max_slot_wal_keep_size", PGC_SIGHUP, REPLICATION_SENDING,
+ 			gettext_noop("Sets the maximum size of extra WALs kept by replication slots."),
+ 		 NULL,
+ 		 GUC_UNIT_MB
+ 		},
+ 		&max_slot_wal_keep_size_mb,
+ 		0, 0, INT_MAX,
+ 		NULL, NULL, NULL
+ 	},
+ 
+ 	{
  		{"wal_sender_timeout", PGC_SIGHUP, REPLICATION_SENDING,
  			gettext_noop("Sets the maximum time to wait for WAL replication."),
  			NULL,
*** a/src/backend/utils/misc/postgresql.conf.sample
--- b/src/backend/utils/misc/postgresql.conf.sample
***
*** 235,240 
--- 235,241 
  #max_wal_senders = 10		# max number of walsender processes
  # (change requires restart)
  #wal_keep_segments = 0		# in logfile segments, 16MB each; 0 disables
+ #max_slot_wal_keep_size = 0	# in megabytes; 0 disables
  #wal_sender_timeout = 60s	# in milliseconds; 0 disables
  
  #max_replication_slots = 10	# max number of replication slots
*** a/src/include/access/xlog.h
--- b/src/include/access/xlog.h
***
*** 97,102  extern bool reachedConsistency;
--- 97,103 
  extern int	min_wal_size_mb;
  extern int	max_wal_size_mb;
  extern int	wal_keep_segments;
+ extern int	max_slot_wal_keep_size_mb;
  extern int	XLOGbuffers;
  extern int	XLogArchiveTimeout;
  extern int	wal_retrieve_retry_interval;
*** a/src/backend/access/transam/xlog.c
--- b/src/backend/access/transam/xlog.c
***
*** 9336,9341  CreateRestartPoint(int flags)
--- 9336,9420 
  }
  
  /*
+  * Check if the record on the given lsn will be preserved at the next
+  * checkpoint.
+  *
+  * Returns true if it will be preserved. If distance is given, the distance
+  * from origin to the beginning of the first segment kept at the next
+  * checkpoint. It means margin when this function returns true and gap of lost
+  * records when false.
+  *
+  * This function should return the consistent result with KeepLogSeg.
+  */
+ bool
+ GetMarginToSlotSegmentLimit(XLogRecPtr restartLSN, uint64 *distance)
+ {
+ 	XLogRecPtr currpos;
+ 	XLogRecPtr tailpos;
+ 	uint64 currSeg;
+ 	uint64 restByteInSeg;
+ 	uint64 restartSeg;
+ 	uint64 tailSeg;
+ 	uint64 keepSegs;
+ 
+ 	currpos = GetXLogWriteRecPtr();
+ 
+ 	LWLockAcquire(ControlFileLock, LW_SHARED);
+ 	tailpos = ControlFile->checkPointCopy.redo;
+ 	

Re: [HACKERS] dropping partitioned tables without CASCADE

2017-09-12 Thread Ashutosh Bapat
Thanks Amit for taking care of this.

On Wed, Sep 13, 2017 at 6:31 AM, Amit Langote
 wrote:
> On 2017/09/06 19:14, Amit Langote wrote:
>> On 2017/09/06 18:46, Rushabh Lathia wrote:
>>> Okay, I have marked this as ready for committer.
>>
>> Thanks Ashutosh and Rushabh for rebasing and improving the patch.  Looks
>> good to me too.
>
> Patch needed to be rebased after the default partitions patch went in, so
> done.  Per build status on http://commitfest.cputube.org :)
>
> Thanks,
> Amit



-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] Some subscriptions fail (while some succeed) with pglogical

2017-09-12 Thread xiaolongc
Never mind.  The problem was actually a minor one -- I need to truncate the
table before re-subscribing (otherwise the table copy would fail when syncing
data with master).









Re: [HACKERS] Race between SELECT and ALTER TABLE NO INHERIT

2017-09-12 Thread Kyotaro HORIGUCHI
At Mon, 28 Aug 2017 18:28:07 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
 wrote in 
<20170828.182807.98097766.horiguchi.kyot...@lab.ntt.co.jp>
> I'll add this to CF2017-09.

This patch got a deadly attack from commit 30833ba. I changed
the signature of expand_single_inheritance_child, in addition to
make_inh_translation_list, to notify callers that the specified child is
no longer a child of the parent.

This passes the regular regression test and fixes the problem.
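For readers of the patch, the recursion with the seen-rels guard can be
sketched like this (a toy in-memory model, not the actual systable scan; the
dict maps a parent OID to its child OIDs):

```python
# Toy model of is_descendent_of: walk the inheritance graph depth-first,
# tracking visited relations so an (in theory impossible) cycle cannot
# cause infinite recursion -- mirroring find_all_inheritors.
def is_descendent_of(children_of, parent_id, child_id, seen=None):
    if seen is None:
        seen = set()
    for rel in children_of.get(parent_id, ()):
        already_seen = rel in seen
        seen.add(rel)
        if rel == child_id:
            return True
        if not already_seen and is_descendent_of(children_of, rel,
                                                 child_id, seen):
            return True
    return False
```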

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
*** a/src/backend/catalog/pg_inherits.c
--- b/src/backend/catalog/pg_inherits.c
***
*** 42,47  typedef struct SeenRelsEntry
--- 42,49 
  	ListCell   *numparents_cell;	/* corresponding list cell */
  } SeenRelsEntry;
  
+ static bool is_descendent_of_internal(Oid parentId, Oid childId,
+ 	  HTAB *seen_rels);
  /*
   * find_inheritance_children
   *
***
*** 400,402  typeInheritsFrom(Oid subclassTypeId, Oid superclassTypeId)
--- 402,472 
  
  	return result;
  }
+ 
+ /*
+  * Check if the child is really a descendent of the parent
+  */
+ bool
+ is_descendent_of(Oid parentId, Oid childId)
+ {
+ 	HTAB	   *seen_rels;
+ 	HASHCTL		ctl;
+ 	bool		ischild = false;
+ 
+ 	memset(&ctl, 0, sizeof(ctl));
+ 	ctl.keysize = sizeof(Oid);
+ 	ctl.entrysize = sizeof(Oid);
+ 	ctl.hcxt = CurrentMemoryContext;
+ 
+ 	seen_rels = hash_create("is_descendent_of temporary table",
+ 			32, /* start small and extend */
+ 			,
+ 			HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
+ 
+ 	ischild = is_descendent_of_internal(parentId, childId, seen_rels);
+ 
+ 	hash_destroy(seen_rels);
+ 
+ 	return ischild;
+ }
+ 
+ static bool
+ is_descendent_of_internal(Oid parentId, Oid childId, HTAB *seen_rels)
+ {
+ 	Relation	inhrel;
+ 	SysScanDesc scan;
+ 	ScanKeyData key[1];
+ 	bool		ischild = false;
+ 	HeapTuple	inheritsTuple;
+ 
+ 	inhrel = heap_open(InheritsRelationId, AccessShareLock);
+ 	ScanKeyInit(&key[0], Anum_pg_inherits_inhparent,
+ BTEqualStrategyNumber, F_OIDEQ,	ObjectIdGetDatum(parentId));
+ 	scan = systable_beginscan(inhrel, InheritsParentIndexId, true,
+ 			  NULL, 1, key);
+ 
+ 	while ((inheritsTuple = systable_getnext(scan)) != NULL)
+ 	{
+ 		bool found;
+ 		Oid inhrelid = ((Form_pg_inherits) GETSTRUCT(inheritsTuple))->inhrelid;
+ 
+ 		hash_search(seen_rels, &inhrelid, HASH_ENTER, &found);
+ 
+ 		/*
+ 		 * Recursively check into children. Although there can't theoretically
+ 		 * be any cycles in the inheritance graph, check the cycles following
+ 		 * find_all_inheritors.
+ 		 */
+ 		if (inhrelid == childId ||
+ 			(!found && is_descendent_of_internal(inhrelid, childId, seen_rels)))
+ 		{
+ 			ischild = true;
+ 			break;
+ 		}
+ 	}
+ 
+ 	systable_endscan(scan);
+ 	heap_close(inhrel, AccessShareLock);
+ 
+ 	return ischild;
+ }
*** a/src/backend/optimizer/prep/prepunion.c
--- b/src/backend/optimizer/prep/prepunion.c
***
*** 108,123  static void expand_partitioned_rtentry(PlannerInfo *root,
  		   LOCKMODE lockmode,
  		   bool *has_child, List **appinfos,
  		   List **partitioned_child_rels);
! static void expand_single_inheritance_child(PlannerInfo *root,
  RangeTblEntry *parentrte,
  Index parentRTindex, Relation parentrel,
  PlanRowMark *parentrc, Relation childrel,
  bool *has_child, List **appinfos,
  List **partitioned_child_rels);
! static void make_inh_translation_list(Relation oldrelation,
  		  Relation newrelation,
! 		  Index newvarno,
! 		  List **translated_vars);
  static Bitmapset *translate_col_privs(const Bitmapset *parent_privs,
  	List *translated_vars);
  static Node *adjust_appendrel_attrs_mutator(Node *node,
--- 108,122 
  		   LOCKMODE lockmode,
  		   bool *has_child, List **appinfos,
  		   List **partitioned_child_rels);
! static bool expand_single_inheritance_child(PlannerInfo *root,
  RangeTblEntry *parentrte,
  Index parentRTindex, Relation parentrel,
  PlanRowMark *parentrc, Relation childrel,
  bool *has_child, List **appinfos,
  List **partitioned_child_rels);
! static List *make_inh_translation_list(Relation oldrelation,
  		  Relation newrelation,
! 		  Index newvarno);
  static Bitmapset *translate_col_privs(const Bitmapset *parent_privs,
  	List *translated_vars);
  static Node *adjust_appendrel_attrs_mutator(Node *node,
***
*** 1476,1481  expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti)
--- 1475,1482 
  		 * in which they appear in the PartitionDesc.  But first, expand the
  		 * parent itself.
  		 */
+ 
+ 		/* ignore the return value since this doesn't exclude the parent */
  		expand_single_inheritance_child(root, rte, rti, oldrelation, oldrc,
  		oldrelation,
  		_child, ,
***
*** 1497,1502  expand_inherited_rtentry(PlannerInfo *root, RangeTblEntry *rte, Index rti)
--- 1498,1504 

Re: [HACKERS] no test coverage for ALTER FOREIGN DATA WRAPPER name HANDLER ...

2017-09-12 Thread Amit Langote
On 2017/09/12 20:17, Ashutosh Bapat wrote:
> On Tue, Sep 12, 2017 at 2:27 PM, Amit Langote
>  wrote:
>> Thanks Ashutosh for taking a look at this.
>>
>> On 2017/09/05 21:16, Ashutosh Bapat wrote:
>>> The patch needs a rebase.
>>
>> Attached rebased patch.
> 
> Thanks for rebased patch.

Thanks for the review.

> We could annotate each ERROR with an explanation as to why that's an
> error, but then this file doesn't do that for other commands, so may
> be the patch is just fine.

Agreed.  Note that this patch is just about adding the tests, not
modifying foreigncmds.c to change error handling around HANDLER functions.

> Also, I am wondering whether we should create the new handler function
> in foreign.c similar to postgresql_fdw_validator(). The prologue has a
> caution
> 
> 606  * Caution: this function is deprecated, and is now meant only for testing
> 607  * purposes, because the list of options it knows about doesn't 
> necessarily
> 608  * square with those known to whichever libpq instance you might be using.
> 609  * Inquire of libpq itself, instead.
> 
> So, may be we don't want to add it there. But adding the handler
> function in create_function_1 doesn't seem good. If that's the correct
> place, then at least it should be moved before " -- Things that
> shouldn't work:"; it doesn't belong to functions that don't work.

In the attached updated patch, I created separate .source files in
src/test/regress/input and output directories called fdw_handler.source
and put the test_fdw_handler function definition there.  When I had
originally thought of it back when I wrote the patch, it seemed to be an
overkill, because we're just normally defining a single C function there
to be used in the newly added foreign_data tests.  In any case, we need to
go the .source file way, because that's the only way to refer to paths to
.so library when defining C language functions.

Thanks,
Amit
From 510987531bfdf22df0bc8eef27f232e580d415b1 Mon Sep 17 00:00:00 2001
From: amit 
Date: Wed, 10 May 2017 10:37:42 +0900
Subject: [PATCH] Add some FDW HANDLER DDL tests

---
 src/test/regress/expected/foreign_data.out | 28 ++--
 src/test/regress/input/fdw_handler.source  |  5 +
 src/test/regress/output/fdw_handler.source |  5 +
 src/test/regress/parallel_schedule |  2 +-
 src/test/regress/regress.c |  7 +++
 src/test/regress/serial_schedule   |  1 +
 src/test/regress/sql/.gitignore|  1 +
 src/test/regress/sql/foreign_data.sql  | 13 +
 8 files changed, 55 insertions(+), 7 deletions(-)
 create mode 100644 src/test/regress/input/fdw_handler.source
 create mode 100644 src/test/regress/output/fdw_handler.source

diff --git a/src/test/regress/expected/foreign_data.out b/src/test/regress/expected/foreign_data.out
index c6e558b07f..331f7a911f 100644
--- a/src/test/regress/expected/foreign_data.out
+++ b/src/test/regress/expected/foreign_data.out
@@ -89,6 +89,14 @@ CREATE FOREIGN DATA WRAPPER foo VALIDATOR postgresql_fdw_validator;
  postgresql | regress_foreign_data_user | -   | postgresql_fdw_validator | 
  | | 
 (3 rows)
 
+-- HANDLER related checks
+CREATE FUNCTION invalid_fdw_handler() RETURNS int LANGUAGE SQL AS 'SELECT 1;';
+CREATE FOREIGN DATA WRAPPER test_fdw HANDLER invalid_fdw_handler;  -- ERROR
+ERROR:  function invalid_fdw_handler must return type fdw_handler
+CREATE FOREIGN DATA WRAPPER test_fdw HANDLER test_fdw_handler HANDLER invalid_fdw_handler;  -- ERROR
+ERROR:  conflicting or redundant options
+CREATE FOREIGN DATA WRAPPER test_fdw HANDLER test_fdw_handler;
+DROP FOREIGN DATA WRAPPER test_fdw;
 -- ALTER FOREIGN DATA WRAPPER
 ALTER FOREIGN DATA WRAPPER foo; -- ERROR
 ERROR:  syntax error at or near ";"
@@ -188,18 +196,26 @@ ALTER FOREIGN DATA WRAPPER foo RENAME TO foo1;
 (3 rows)
 
 ALTER FOREIGN DATA WRAPPER foo1 RENAME TO foo;
+-- HANDLER related checks
+ALTER FOREIGN DATA WRAPPER foo HANDLER invalid_fdw_handler;  -- ERROR
+ERROR:  function invalid_fdw_handler must return type fdw_handler
+ALTER FOREIGN DATA WRAPPER foo HANDLER test_fdw_handler HANDLER anything;  -- ERROR
+ERROR:  conflicting or redundant options
+ALTER FOREIGN DATA WRAPPER foo HANDLER test_fdw_handler;
+WARNING:  changing the foreign-data wrapper handler can change behavior of existing foreign tables
+DROP FUNCTION invalid_fdw_handler();
 -- DROP FOREIGN DATA WRAPPER
 DROP FOREIGN DATA WRAPPER nonexistent;  -- ERROR
 ERROR:  foreign-data wrapper "nonexistent" does not exist
 DROP FOREIGN DATA WRAPPER IF EXISTS nonexistent;
 NOTICE:  foreign-data wrapper "nonexistent" does not exist, skipping
 \dew+
 List of foreign-data wrappers
 Name | Owner | Handler | Validator | Access privileges | FDW options |

Re: [HACKERS] Patches that don't apply or don't compile: 2017-09-12

2017-09-12 Thread Kyotaro HORIGUCHI
Hello. Aside from the discussion on the policy of using the
automated CI, it seems to be having trouble applying patches.

https://travis-ci.org/postgresql-cfbot/postgresql/builds/27450
>1363  heapam.c:2502:18: error: ‘HEAP_INSERT_SKIP_WAL’ undeclared (first use in 
>this function)
>1364  if (!(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation))

These lines show that the patch was applied only halfway.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] generated columns

2017-09-12 Thread Simon Riggs
On 31 August 2017 at 05:16, Peter Eisentraut
 wrote:
> Here is another attempt to implement generated columns.  This is a
> well-known SQL-standard feature, also available for instance in DB2,
> MySQL, Oracle.  A quick example:
>
>   CREATE TABLE t1 (
> ...,
> height_cm numeric,
> height_in numeric GENERATED ALWAYS AS (height_cm * 2.54)
>   );

Cool

> - pg_dump produces a warning about a dependency loop when dumping these.
>  Will need to be fixed at some point, but it doesn't prevent anything
> from working right now.
>
> Open design issues:
>
> - COPY behavior: Currently, generated columns are automatically omitted
> if there is no column list, and prohibited if specified explicitly.
> When stored generated columns are implemented, they could be copied out.
>  Some user options might be possible here.

If the values are generated immutably there would be no value in
including them in a dump. If you did dump them then they couldn't be
reloaded without error, so again, no point in dumping them.

COPY (SELECT...) already allows you options to include or exclude any
columns you wish, so I don't see the need for special handling here.

IMHO, COPY TO would exclude generated columns of either kind, ensuring
that the reload would just work.

> - Catalog storage: I store the generation expression in pg_attrdef, like
> a default.  For the most part, this works well.  It is not clear,
> however, what pg_attribute.atthasdef should say.  Half the code thinks
> that atthasdef means "there is something in pg_attrdef", the other half
> thinks "column has a DEFAULT expression".  Currently, I'm going with the
> former interpretation, because that is wired in quite deeply and things
> start to crash if you violate it, but then code that wants to know
> whether a column has a traditional DEFAULT expression needs to check
> atthasdef && !attgenerated or something like that.
>
> Missing/future functionality:
>
> - STORED variant

For me, this option would be the main feature. Presumably if STORED
then we wouldn't need the functions to be immutable, making it easier
to have columns like last_update_timestamp or last_update_username
etc..

I think an option to decide whether the default is STORED or VIRTUAL
would be useful.

> - various ALTER TABLE variants

Adding a column with GENERATED STORED would always be a full table rewrite.
Hmm, I wonder if it's worth having a mixed mode: stored for new rows,
only virtual for existing rows; that way we could add GENERATED
columns easily.

> - index support (and related constraint support)

Presumably you can't index a VIRTUAL column. Or at least I don't think
it's worth spending time trying to make it work.

> These can be added later once the basics are nailed down.

I imagine that if a column is generated then it is not possible to
have column-level INSERT | UPDATE | DELETE privileges on it. The
generation happens automatically as part of the write action if stored,
or not until SELECT if virtual. It should be possible to have
column-level SELECT privileges.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Supporting huge pages on Windows

2017-09-12 Thread Tsunakawa, Takayuki
Hi Thomas, Magnus

From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Thomas Munro
> Since it only conflicts with c7b8998e because of pgindent whitespace
> movement, I applied it with "patch -p1 --ignore-whitespace" and created
> a new patch.  See attached.

Thanks, Thomas.  I've added your name to the CF entry so that you will also 
be credited in the release notes, because my patch is originally based on 
your initial attempt.  Please remove your name if you mind.  BTW, your 
auto-reviewer looks very convenient.  Thank you again for your great work.

Magnus, I would be grateful if you could review and commit the patch while 
your memory is relatively fresh.

I've been in a situation that has kept me from doing development recently, but 
I think I can gradually rejoin community activity soon.

Regards
Takayuki Tsunakawa





Re: [HACKERS] Log LDAP "diagnostic messages"?

2017-09-12 Thread Thomas Munro
On Tue, Sep 12, 2017 at 11:23 PM, Ashutosh Bapat
 wrote:
> On Wed, Aug 16, 2017 at 11:13 AM, Ashutosh Bapat
>  wrote:
>> On Wed, Aug 16, 2017 at 8:44 AM, Alvaro Herrera
>>  wrote:
>>> Christoph Berg wrote:
 "Diagnostic message" doesn't really mean anything, and printing
 "DETAIL: Diagnostic message: " seems redundant to me. Maybe
 drop that prefix? It should be clear from the context that this is a
 message from the LDAP layer.
>>>
>>> I think making it visible that the message comes from LDAP (rather than
>>> Postgres or anything else) is valuable.  How about this?
>>>
>>> LOG:  could not start LDAP TLS session: Protocol error
>>> DETAIL:  LDAP diagnostics: unsupported extended operation.
>>>
>> +1, pretty neat.

Here is a new version adopting Alvaro's wording.  I'll set this back
to "Needs review" status.

-- 
Thomas Munro
http://www.enterprisedb.com


ldap-diagnostic-message-v4.patch
Description: Binary data



Re: [HACKERS] [PATCH v1] Add and report the new "in_hot_standby" GUC pseudo-variable.

2017-09-12 Thread Thomas Munro
On Wed, Sep 13, 2017 at 3:48 AM, Elvis Pranskevichus  wrote:
> I incorporated those bits into your patch and rebased it onto master.
> Please see attached.
>
> FWIW, I think that mixing the standby status and the default
> transaction writability is suboptimal.  They are related, yes, but not
> the same thing.  It is possible to have a master cluster in the
> read-only mode, and with this patch it would be impossible to
> distinguish from a hot-standby replica without also polling
> pg_is_in_recovery(), which defeats the purpose of avoiding extra
> database roundtrips.

Hi Elvis,

FYI the recovery test 001_stream_rep.pl fails with this patch applied.
You can see that if you configure with --enable-tap-tests, build and
then cd into src/test/recovery and "make check".

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] WAL logging problem in 9.4.3?

2017-09-12 Thread Kyotaro HORIGUCHI
Hello, (does this seem to be a top post?)

The CF status of this patch was turned into "Waiting on Author" by
the automated CI checking. However, I still don't get any error even
on the current master (69835bc) after make distclean. Also, the
"problematic" patch and my working branch have no differences other
than line shifts from patching. (So I haven't posted a new one.)

I looked at the location heapam.c:2502, which the CI complains
about, in my working branch, and found code different from what
the complaint shows.

https://travis-ci.org/postgresql-cfbot/postgresql/builds/27450

1363 heapam.c:2502:18: error: ‘HEAP_INSERT_SKIP_WAL’ undeclared (first use in 
this function)
1364   if (!(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation))

heapam.c:2502@work branch
2502:   /* XLOG stuff */
2503:   if (BufferNeedsWAL(relation, buffer))

So I conclude that the CI machinery failed to apply the patch
correctly.


At Thu, 13 Apr 2017 15:29:35 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
 wrote in 
<20170413.152935.100104316.horiguchi.kyot...@lab.ntt.co.jp>
> > > > I'll post new patch in this way soon.
> > > 
> > > Here it is.
> > 
> > It contained trailing spaces and was missing the test script.  This is
> > the correct patch.
> > 
> > > - Relation has new members no_pending_sync and pending_sync that
> > >   works as instant cache of an entry in pendingSync hash.
> > > 
> > > - Commit-time synchronizing is restored as Michael's patch.
> > > 
> > > - If relfilenode is replaced, pending_sync for the old node is
> > >   removed. Anyway this is ignored on abort and meaningless on
> > >   commit.
> > > 
> > > - TAP test is renamed to 012 since some new files have been added.
> > > 
> > > Accessing the pending sync hash occurred on every call of
> > > HeapNeedsWAL() (per insertion/update/freeze of a tuple) if any of
> > > the accessed relations had a pending sync.  Almost all of them are
> > > eliminated as a result.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: [HACKERS] pg_stat_wal_write statistics view

2017-09-12 Thread Haribabu Kommi
On Tue, Sep 12, 2017 at 3:14 PM, Kuntal Ghosh 
wrote:

> On Tue, Sep 12, 2017 at 9:06 AM, Haribabu Kommi
>  wrote:
> >
> >
> > On Tue, Sep 12, 2017 at 2:04 AM, Kuntal Ghosh <
> kuntalghosh.2...@gmail.com>
> > wrote:
> >
> Thanks for the patch.
> + * Check whether the current process is a normal backend or not.
> + * This function checks for the background processes that does
> + * some WAL write activity only and other background processes
> + * are not considered. It considers all the background workers
> + * as WAL write activity workers.
> + *
> + * Returns false - when the current process is a normal backend
> + *true - when the current process a background process/worker
> + */
> +static bool
> +am_background_process()
> +{
> +   /* check whether current process is a background process/worker? */
> +   if (!AmBackgroundWriterProcess() ||
> +   !AmCheckpointerProcess() ||
> +   !AmStartupProcess() ||
> +   !IsBackgroundWorker ||
> +   !am_walsender ||
> +   !am_autovacuum_worker)
> +   {
> +   return false;
> +   }
> +
> +   return true;
> +}
> I think you've to do AND operation here instead of OR. Isn't it?
> Another point is that, the function description could start with
> 'Check whether the current process is a background process/worker.'
>

Yes, it should be AND; I messed up that function while addressing
a review comment.

> > There is an overhead with IO time calculation. Is the view good
> > enough without IO columns?
> I'm not sure how IO columns are useful for tuning the WAL write/fsync
> performance from an user's perspective. But, it's definitely useful
> for developing/improving stuffs in XLogWrite.
>

I ran the latest performance tests with and without IO times: there is an
overhead involved with the IO time calculation, and I didn't observe any
performance overhead with the normal stats. Maybe we can enable the IO
stats only in development environments to find out the IO timings?


> >
> > And also during my tests, I didn't observe any other background
> > processes performing the xlogwrite operation, the values are always
> > zero. Is it fine to merge them with backend columns?
> >
> Apart from wal writer process, I don't see any reason why you should
> track other background processes separately from normal backends.
> However, I may be missing some important point.


I added the other background stats to find out how much WAL writing is
carried out by the other background processes. I am now able to collect
stats for those processes after the pgbench test, so I feel the separate
background stats may be useful.

Attached are the latest patch, performance test results, and stats details,
both with separate background stats and with them combined into the backend
columns, including the IO stats.


Regards,
Hari Babu
Fujitsu Australia
stats with separate background process stats info:

 writes:                 256004
 walwriter_writes:       14223300
 backend_writes:         439408129
 dirty_writes:           0
 walwriter_dirty_writes: 0
 backend_dirty_writes:   65933
 write_blocks:           3018749
 walwriter_write_blocks: 287733552
 backend_write_blocks:   1756612506
 write_time:             0
 walwriter_write_time:   0
 backend_write_time:     0
 sync_time:              0
 walwriter_sync_time:    0
 backend_sync_time:      0
 stats_reset:            2017-09-12 19:21:03.103784+10
(1 row)


stats with background info with IO time:

 writes:                 458362
 walwriter_writes:       27245324
 backend_writes:         881576768
 dirty_writes:           0
 walwriter_dirty_writes: 0
 backend_dirty_writes:   65933
 write_blocks:           3551641
 walwriter_write_blocks: 304509489
 backend_write_blocks:   2767649450
 write_time:             0
 walwriter_write_time:   0
 backend_write_time:     0
 sync_time:              3366091
 walwriter_sync_time:    173043798
 backend_sync_time:      5855747060
 stats_reset:            2017-09-12 19:21:03.103784+10
(1 row)


stats info with combined background process info:

 walwriter_writes | backend_writes | walwriter_dirty_writes | 

Re: [HACKERS] dropping partitioned tables without CASCADE

2017-09-12 Thread Amit Langote
On 2017/09/06 19:14, Amit Langote wrote:
> On 2017/09/06 18:46, Rushabh Lathia wrote:
>> Okay, I have marked this as ready for committer.
> 
> Thanks Ashutosh and Rushabh for rebasing and improving the patch.  Looks
> good to me too.

Patch needed to be rebased after the default partitions patch went in, so
done.  Per build status on http://commitfest.cputube.org :)

Thanks,
Amit
From 0ac21ff604b5dccf818f9d69c945ff845d1771bf Mon Sep 17 00:00:00 2001
From: amit 
Date: Wed, 13 Sep 2017 09:56:34 +0900
Subject: [PATCH] Some enhancements for \d+ output of partitioned tables

---
 src/bin/psql/describe.c| 32 --
 src/test/regress/expected/create_table.out | 13 +++-
 src/test/regress/expected/foreign_data.out |  3 +++
 src/test/regress/expected/insert.out   | 17 
 src/test/regress/sql/create_table.sql  |  2 +-
 src/test/regress/sql/insert.sql|  4 
 6 files changed, 59 insertions(+), 12 deletions(-)

diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index d22ec68431..855e6870e9 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -2831,7 +2831,7 @@ describeOneTableDetails(const char *schemaname,
/* print child tables (with additional info if partitions) */
if (pset.sversion >= 10)
printfPQExpBuffer(,
- "SELECT 
c.oid::pg_catalog.regclass, pg_catalog.pg_get_expr(c.relpartbound, c.oid)"
+ "SELECT 
c.oid::pg_catalog.regclass, pg_get_expr(c.relpartbound, c.oid), c.relkind"
  " FROM 
pg_catalog.pg_class c, pg_catalog.pg_inherits i"
  " WHERE 
c.oid=i.inhrelid AND i.inhparent = '%s'"
  " ORDER BY 
c.oid::pg_catalog.regclass::pg_catalog.text;", oid);
@@ -2854,7 +2854,18 @@ describeOneTableDetails(const char *schemaname,
else
tuples = PQntuples(result);
 
-   if (!verbose)
+   /*
+    * For a partitioned table with no partitions, always print the number
+    * of partitions as zero, even when verbose output is expected.
+    * Otherwise, we will not print "Partitions" section for a partitioned
+    * table without any partitions.
+    */
+   if (tableinfo.relkind == RELKIND_PARTITIONED_TABLE && tuples == 0)
+   {
+       printfPQExpBuffer(, _("Number of partitions: %d"), tuples);
+       printTableAddFooter(, buf.data);
+   }
+   else if (!verbose)
{
/* print the number of child tables, if any */
if (tuples > 0)
@@ -2886,12 +2897,21 @@ describeOneTableDetails(const char *schemaname,
}
else
{
+   char   *partitioned_note;
+
+   if (*(PQgetvalue(result, i, 2)) == 
RELKIND_PARTITIONED_TABLE)
+   partitioned_note = " is 
partitioned";
+   else
+   partitioned_note = "";
+
if (i == 0)
-   printfPQExpBuffer(, "%s: %s 
%s",
-   
  ct, PQgetvalue(result, i, 0), PQgetvalue(result, i, 1));
+   printfPQExpBuffer(, "%s: %s 
%s%s",
+   
  ct, PQgetvalue(result, i, 0), PQgetvalue(result, i, 1),
+   
  partitioned_note);
else
-   printfPQExpBuffer(, "%*s  
%s %s",
-   
  ctw, "", PQgetvalue(result, i, 0), PQgetvalue(result, i, 1));
+   printfPQExpBuffer(, "%*s  
%s %s%s",
+   
  ctw, "", PQgetvalue(result, i, 0), PQgetvalue(result, i, 1),
+   
  partitioned_note);
}
if (i < tuples - 1)
appendPQExpBufferChar(, ',');
diff --git a/src/test/regress/expected/create_table.out 

Re: [HACKERS] Arrays of domains

2017-09-12 Thread Tom Lane
I wrote:
> Attached is a patch series that allows us to create arrays of domain
> types.

Here's a rebased-up-to-HEAD version of this patch set.  The only
actual change is removal of a no-longer-needed hunk in pl_exec.c.

regards, tom lane

diff --git a/src/backend/optimizer/prep/preptlist.c b/src/backend/optimizer/prep/preptlist.c
index 9d75e86..d7db32e 100644
*** a/src/backend/optimizer/prep/preptlist.c
--- b/src/backend/optimizer/prep/preptlist.c
*** expand_targetlist(List *tlist, int comma
*** 306,314 
  		new_expr = coerce_to_domain(new_expr,
  	InvalidOid, -1,
  	atttype,
  	COERCE_IMPLICIT_CAST,
  	-1,
- 	false,
  	false);
  	}
  	else
--- 306,314 
  		new_expr = coerce_to_domain(new_expr,
  	InvalidOid, -1,
  	atttype,
+ 	COERCION_IMPLICIT,
  	COERCE_IMPLICIT_CAST,
  	-1,
  	false);
  	}
  	else
diff --git a/src/backend/parser/parse_coerce.c b/src/backend/parser/parse_coerce.c
index e79ad26..5a241bd 100644
*** a/src/backend/parser/parse_coerce.c
--- b/src/backend/parser/parse_coerce.c
***
*** 34,48 
  
  static Node *coerce_type_typmod(Node *node,
     Oid targetTypeId, int32 targetTypMod,
!    CoercionForm cformat, int location,
!    bool isExplicit, bool hideInputCoercion);
  static void hide_coercion_node(Node *node);
  static Node *build_coercion_expression(Node *node,
  		  CoercionPathType pathtype,
  		  Oid funcId,
  		  Oid targetTypeId, int32 targetTypMod,
! 		  CoercionForm cformat, int location,
! 		  bool isExplicit);
  static Node *coerce_record_to_complex(ParseState *pstate, Node *node,
  		 Oid targetTypeId,
  		 CoercionContext ccontext,
--- 34,49 
  
  static Node *coerce_type_typmod(Node *node,
     Oid targetTypeId, int32 targetTypMod,
!    CoercionContext ccontext, CoercionForm cformat,
!    int location,
!    bool hideInputCoercion);
  static void hide_coercion_node(Node *node);
  static Node *build_coercion_expression(Node *node,
  		  CoercionPathType pathtype,
  		  Oid funcId,
  		  Oid targetTypeId, int32 targetTypMod,
! 		  CoercionContext ccontext, CoercionForm cformat,
! 		  int location);
  static Node *coerce_record_to_complex(ParseState *pstate, Node *node,
  		 Oid targetTypeId,
  		 CoercionContext ccontext,
*** coerce_to_target_type(ParseState *pstate
*** 110,117 
  	 */
  	result = coerce_type_typmod(result,
  targettype, targettypmod,
! cformat, location,
! (cformat != COERCE_IMPLICIT_CAST),
  (result != expr && !IsA(result, Const)));
  
  	if (expr != origexpr)
--- 111,117 
  	 */
  	result = coerce_type_typmod(result,
  targettype, targettypmod,
! ccontext, cformat, location,
  (result != expr && !IsA(result, Const)));
  
  	if (expr != origexpr)
*** coerce_type(ParseState *pstate, Node *no
*** 355,361 
  			result = coerce_to_domain(result,
  	  baseTypeId, baseTypeMod,
  	  targetTypeId,
! 	  cformat, location, false, false);
  
  		ReleaseSysCache(baseType);
  
--- 355,362 
  			result = coerce_to_domain(result,
  	  baseTypeId, baseTypeMod,
  	  targetTypeId,
! 	  ccontext, cformat, location,
! 	  false);
  
  		ReleaseSysCache(baseType);
  
*** coerce_type(ParseState *pstate, Node *no
*** 417,436 
  
  			result = build_coercion_expression(node, pathtype, funcId,
  			   baseTypeId, baseTypeMod,
! 			   cformat, location,
! 			   (cformat != COERCE_IMPLICIT_CAST));
  
  			/*
  			 * If domain, coerce to the domain type and relabel with domain
! 			 * type ID.  We can skip the internal length-coercion step if the
! 			 * selected coercion function was a type-and-length coercion.
  			 */
  			if (targetTypeId != baseTypeId)
  result = coerce_to_domain(result, baseTypeId, baseTypeMod,
  		  targetTypeId,
! 		  cformat, location, true,
! 		  exprIsLengthCoercion(result,
! 			   NULL));
  		}
  		else
  		{
--- 418,434 
  
  			result = build_coercion_expression(node, pathtype, funcId,
  			   baseTypeId, baseTypeMod,
! 			   ccontext, cformat, location);
  
  			/*
  			 * If domain, coerce to the domain type and relabel with domain
! 			 * type ID, hiding the previous coercion node.
  			 */
  			if (targetTypeId != baseTypeId)
  result = coerce_to_domain(result, baseTypeId, baseTypeMod,
  		  targetTypeId,
! 		  ccontext, cformat, location,
! 		  true);
  		}
  		else
  		{
*** coerce_type(ParseState *pstate, Node *no
*** 444,450 
  			 * then we won't need a RelabelType node.
  			 */
  			result = coerce_to_domain(node, InvalidOid, -1, targetTypeId,
! 

Re: [HACKERS] Patches that don't apply or don't compile: 2017-09-12

2017-09-12 Thread Kyotaro HORIGUCHI
At Wed, 13 Sep 2017 08:13:08 +0900, Michael Paquier  
wrote in 

Re: [HACKERS] More flexible LDAP auth search filters?

2017-09-12 Thread Michael Paquier
On Wed, Sep 13, 2017 at 8:04 AM, Thomas Munro
 wrote:
> I wonder if there is a reasonable way to indicate or determine whether
> you have slapd installed so that check-world could run this test...

Module::Install's requires_external_bin is one:
http://search.cpan.org/~ether/Module-Install-1.18/lib/Module/Install.pod#requires_external_bin
But the bar to add a new module dependency is high.

Another trick that you could use is to attempt to run it and see if it
is present by checking for exit code 127, and otherwise skip the test.
That sounds fragile though, particularly for Windows.
-- 
Michael




[HACKERS] Some subscriptions fail (while some succeed) with pglogical

2017-09-12 Thread xiaolongc
Hey guys, I'm setting up logical replication with pglogical, and found that
some of my subscriptions are working well but others are not.  They are
set up in the same way though, and both master and replica are running pg9.5.

Below is subscription status on the replica:

\# select subscription_name, status, replication_sets from
pglogical.show_subscription_status();

 parking_sub   | down| {parking_schema}
 public_sub| replicating | {public_schema}
 stripe_sub| replicating | {stripe_schema}
 zip_sub| down| {zip_schema}

Checked logs on the replica:

2017-09-13 00:00:14 UTC [2850-1] LOG:  starting apply for subscription
zip_sub
2017-09-13 00:00:14 UTC [2850-2] ERROR:  subscriber zip_sub initialization
failed during nonrecoverable step (d), please try the setup again
2017-09-13 00:00:14 UTC [2850-3] LOG:  apply worker [2850] at slot 3
generation 70 crashed
2017-09-13 00:00:14 UTC [1573-136] LOG:  worker process: pglogical apply
16384:743118875 (PID 2850) exited with exit code 1


I checked my replication set on the master and it looks valid.  All tables
in the replication set have a Primary Key constraint.  

What else do you think could have gone wrong? Thanks in advance!




--
Sent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html




Re: [HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-09-12 Thread Andreas Joseph Krogh
På onsdag 13. september 2017 kl. 01:54:15, skrev Stephen Frost <
sfr...@snowman.net >:
Andreas,

 * Andreas Joseph Krogh (andr...@visena.com) wrote:
 > I have to ask; Why not run pg_upgrade on standby, after verifying that it's
 > in sync with primary and promoting it to primary if necessary and then
 > making it standby again after pg_upgrade is finished?

 I don't think that we could be guaranteed that the catalog tables would
 be the same on the replica as on the primary if they were actually
 created by pg_upgrade.

 The catalog tables *must* be identical between the primary and the
 replica because they are updated subsequently through WAL replay, not
 through SQL commands (which is how pg_upgrade creates them in the first
 place).

 Perhaps we could have some mode for pg_upgrade where it handles the
 update to replicas (with the additional checks that I outlined and using
 the methodology discussed for rsync --hard-links), but that would still
 require solving the communicate-over-the-network problem between the
 primary and the replicas, which is the hard part.  Whether it's an
 independent utility or something built into pg_upgrade isn't really that
 big of a distinction, though it doesn't seem to me like there'd be much
 code reuse there.

 Thanks!

 Stephen
 
Thanks.
 
--
 Andreas Joseph Krogh
 




Re: [HACKERS] Small patch for pg_basebackup argument parsing

2017-09-12 Thread Daniel Gustafsson
> On 05 Jul 2017, at 08:32, Michael Paquier  wrote:
> 
> On Wed, Jul 5, 2017 at 2:57 PM, Ryan Murphy  wrote:
>> I tried to apply your patch to test it (though reading Robert's last comment 
>> it seems we wish to have it adjusted before committing)... but in any case I 
>> was not able to apply your patch to the tip of the master branch (my git 
>> apply failed).  I'm setting this to Waiting On Author for a new patch, and I 
>> also agree with Robert that the test can be simpler and can go in the other 
>> order.  If you don't have time to make another patch, let me know, I may be 
>> able to make one.
> 
> git is unhappy even if forcibly applied with patch -p1. You should
> check for whitespaces at the same time:
> $ git diff --check
> src/bin/pg_basebackup/pg_receivewal.c:483: indent with spaces.
> +char   *strtol_endptr = NULL
> And there are more than this one.

Like Michael said above, this patch no longer applies and has some whitespace
issues.  The conflicts in applying are quite trivial though, so it should be
easy to create a new version.  Do you plan to work on this during this
Commitfest so we can move this patch forward?

cheers ./daniel



Re: [HACKERS] Replication status in logical replication

2017-09-12 Thread Daniel Gustafsson
> On 30 May 2017, at 19:55, Peter Eisentraut  
> wrote:
> 
> On 5/29/17 22:56, Noah Misch wrote:
>> On Fri, May 19, 2017 at 11:33:48AM +0900, Masahiko Sawada wrote:
>>> On Wed, Apr 12, 2017 at 5:31 AM, Simon Riggs  wrote:
 Looks like a bug that we should fix in PG10, with backpatch to 9.4 (or
 as far as it goes).
 
 Objections to commit?
 
>>> 
>>> Seems we still have this issue. Any update or comment on this? Barring
>>> any objections, I'll add this to the open item so it doesn't get
>>> missed.
>> 
>> [Action required within three days.  This is a generic notification.]
>> 
>> The above-described topic is currently a PostgreSQL 10 open item.  Peter,
>> since you committed the patch believed to have created it, you own this open
>> item.  If some other commit is more relevant or if this does not belong as a
>> v10 open item, please let us know.
> 
> I would ask Simon to go ahead with this patch if he feels comfortable
> with it.
> 
> I'm disclaiming this open item, since it's an existing bug from previous
> releases (and I have other open items to focus on).

I’m not entirely sure why this was flagged as “Waiting for Author” by the
automatic run; the patch applies and builds for me, so I am resetting it back
to “Needs review”.

Simon: do you think you will have time to look at this patch in this CF?

cheers ./daniel



Re: [HACKERS] Faster methods for getting SPI results

2017-09-12 Thread Daniel Gustafsson
> On 12 Sep 2017, at 23:00, Tom Lane  wrote:
> 
> Chapman Flack  writes:
>> On 09/12/2017 03:41 PM, Tom Lane wrote:
>>> So the conclusion at the end of the last commitfest was that this patch
>>> should be marked Returned With Feedback, and no new work appears to have
>>> been done on it since then.  Why is it in this fest at all?  There
>>> certainly doesn't seem to be any reason to review it again.
> 
>> I'm not sure how to read the history of the CF entry. Could it
>> have rolled over to 2017-09 by default if its status was simply
>> never changed to Returned with Feedback as intended in the last
>> one? The history doesn't seem to show anything since 2017-03-19.
> 
> Maybe, or whoever was closing out the last CF didn't notice Andres'
> recommendation to mark it RWF.

It doesn’t seem to have been moved to this CF but was actually created here in
the first place.  Reading this thread, it seems there is clear consensus on
the status, so I'm changing it to RWF.

cheers ./daniel



Re: [HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-09-12 Thread Stephen Frost
Andreas,

* Andreas Joseph Krogh (andr...@visena.com) wrote:
> I have to ask: why not run pg_upgrade on the standby, after verifying that
> it's in sync with the primary, promoting it to primary if necessary, and then
> making it a standby again after pg_upgrade is finished?

I don't think that we could be guaranteed that the catalog tables would
be the same on the replica as on the primary if they were actually
created by pg_upgrade.

The catalog tables *must* be identical between the primary and the
replica because they are updated subsequently through WAL replay, not
through SQL commands (which is how pg_upgrade creates them in the first
place).

Perhaps we could have some mode for pg_upgrade where it handles the
update to replicas (with the additional checks that I outlined and using
the methodology discussed for rsync --hard-links), but that would still
require solving the communicate-over-the-network problem between the
primary and the replicas, which is the hard part.  Whether it's an
independent utility or something built into pg_upgrade isn't really that
big of a distinction, though it doesn't seem to me like there'd be much
code reuse there.

Thanks!

Stephen




Re: [HACKERS] Automatic testing of patches in commit fest

2017-09-12 Thread Tom Lane
Thomas Munro  writes:
> On Wed, Sep 13, 2017 at 2:34 AM, Alvaro Herrera  
> wrote:
>> Tom Lane wrote:
>>> Can you clarify what went wrong for you on that one?  I went to rebase it,
>>> but I end up with the identical patch except for a few line-numbering
>>> variations.

> It seems to be a legitimate complaint.  The rejected hunk is trying to
> replace this line:
> !   return exec_simple_check_node((Node *) ((ArrayCoerceExpr
> *) node)->arg);

> But you removed exec_simple_check_node in
> 00418c61244138bd8ac2de58076a1d0dd4f539f3, so this 02 patch needs to be
> rebased.

Hm.  My bad I guess --- apparently, the copy I had of this patch was
already rebased over that, but I'd not reposted it.

regards, tom lane




Re: [HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-09-12 Thread Andreas Joseph Krogh
On Wednesday 13 September 2017 at 01:38:40, Stephen Frost <sfr...@snowman.net> wrote:
Bruce, all,
 [snip]

 Further, really, I think we should provide a utility to do all of the
 above instead of using rsync- and that utility should do some additional
 things, such as:

 - Check that the control file on the primary and replica show that they
   reached the same point prior to the pg_upgrade.  If they didn't, then
   things could go badly as there's unplayed WAL that the primary got
   through and the replica didn't.

 - Not copy over unlogged data, or any other information that shouldn't
   be copied across.

 - Allow the directory structures to be more different between the
   primary and the replica than rsync allows (wouldn't have to have a
   common subdirectory on the replica).

 - Perhaps other validation checks or similar.

 Unfortunately, this is a bit annoying as it necessarily involves running
 things on both the primary and the replica from the same tool, without
 access to PG, meaning we'd have to work through something else (such as
 SSH, like rsync does, but then what would we do for Windows...?).

 > > 3. What if the directory-layout isn't the same on primary and standby, ie.
 > > tablespaces are located differently?
 >
 > The way we reconfigured the location of tablespaces in PG 9.0 is that
 > each major version of Postgres places its tablespace in a subdirectory
 > of the tablespace directory, so there is tbldir/9.5 and tbldir/9.6.  If
 > your tbldir is different on the primary and standby, rsync will still
 > work.  Everything _under_ the standby dir must be laid out the same, but
 > the directories above it can be different.

 That's correct, the directory to use for the tablespace actually *is*
 the tablespace directory (unlike the base directories, it doesn't need
 to be a directory above the tablespace directory, the documentation
 could probably be clearer on this point).

 As for all of the people raising concerns about whether this process is
 correct or valid: I contend that the method used above, if done
 properly, isn't materially different from what pg_upgrade itself does.
 If we can't consider this safe then I'm not sure how we can consider
 pg_upgrade safe.  (Yes, I know there are some who don't, and that's a
 fair position to take also, but I consider the process above, when
 implemented correctly, to be essentially the same.)

 All that said, I honestly hadn't expected this method to end up in the
 documentation.  Not because I don't trust it or because I wanted to
 hoard the process, but because it takes a great deal of care and there's
 really additional validation that should be done (as discussed above),
 and those are things that I feel reasonably confident I'd remember to do
 when using such a procedure, but which I wouldn't expect someone new to
 PG to realize they should do.

 Thanks!

 Stephen
 
 
Thanks for the explanation.
 
I have to ask: why not run pg_upgrade on the standby, after verifying that it's
in sync with the primary, promoting it to primary if necessary, and then making
it a standby again after pg_upgrade is finished?
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [HACKERS] plpgsql - additional extra checks

2017-09-12 Thread Daniel Gustafsson
> On 08 Apr 2017, at 15:46, David Steele  wrote:
> 
>> On 1/13/17 6:55 AM, Marko Tiikkaja wrote:
>>> On Fri, Jan 13, 2017 at 2:46 AM, Jim Nasby wrote:
>>> 
>>>On 1/11/17 5:54 AM, Pavel Stehule wrote:
>>> 
>>>+too_many_rows
>>>+
>>>+ 
>>>+  When result is assigned to a variable by
>>>INTO clause,
>>>+  checks if query returns more than one row. In this case
>>>the assignment
>>>+  is not deterministic usually - and it can be signal some
>>>issues in design.
>>> 
>>> 
>>>Shouldn't this also apply to
>>> 
>>>var := blah FROM some_table WHERE ...;
>>> 
>>>?
>>> 
>>>AIUI that's one of the beefs the plpgsql2 project has.
>>> 
>>> 
>>> No, not at all.  That syntax is undocumented and only works because
>>> PL/PgSQL is a hack internally.  We don't use it, and frankly I don't
>>> think anyone should.
> 
> This submission has been moved to CF 2017-07.

This patch was automatically marked as “Waiting for author” since it needs to
be updated with the macro changes in 2cd70845240087da205695baedab6412342d1dbe
to compile.  Changing it to use TupleDescAttr() makes it compile again.  Can
you submit an updated version with that fix, Pavel?

Stephen, you signed up to review this patch in the previous Commitfest, do you
still intend to work on this?

cheers ./daniel



Re: [HACKERS] psql - add special variable to reflect the last query status

2017-09-12 Thread Tom Lane
Fabien COELHO  writes:
> See v9 attached.

I've pushed this with some editorialization.

> I put back SetResultVariables function which is called twice, for SQL 
> queries and the new descriptions. It worked out of the box with DECLARE 
> which is just another SQL statement, so maybe I did not understood the 
> cursor issue you were signaling...

No, I was concerned about ExecQueryUsingCursor(), which is used when
FETCH_COUNT is enabled.  It's sort of a pain because you have to
accumulate the row count across multiple PGresults.  If you don't,
then FETCH_COUNT mode isn't transparent, which it's supposed to be.

I did some performance testing of my own, based on this possibly-silly
test case:

perl -e 'for($i=0;$i<999;$i++) {print "set enable_seqscan=0;\n";}' | psql -q

The idea was to run a trivial query and minimize all other psql overhead,
particularly results-printing.  With this, "perf" told me that
SetResultVariables and its called functions accounted for 1.5% of total
CPU (including the server processes).  That's kind of high, but it's
probably tolerable considering that any real application would involve
both far more server work per query and far more psql work (at least for
SELECTs).

One thing we could think about if this seems too high is to drop
ROW_COUNT.  I'm unconvinced that it has a real use-case, and it seems
to be taking more than its share of the work in non-error cases, because
it turns out that PQcmdTuples() is not an amazingly cheap function.

regards, tom lane




Re: [HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-09-12 Thread Stephen Frost
Bruce, all,

* Bruce Momjian (br...@momjian.us) wrote:
> On Wed, Sep 13, 2017 at 12:40:32AM +0200, Andreas Joseph Krogh wrote:
> > On Tuesday 12 September 2017 at 23:52:02, Bruce Momjian <br...@momjian.us> wrote:
> > 
> > On Tue, Sep 12, 2017 at 08:59:05PM +0200, Andreas Joseph Krogh wrote:
> > >     Improvements?
> > >
> > > Thanks, that certainly improves things.
> > > But; I still find the rsync-command in f) confusing;
> > > 1. Why --size-only? From rsync manual: "skip files that match in 
> > size",
> > is this
> > > safe??
> > 
> > 
> > > 2. Why is old_pgdata in the rsync-command, why is it needed to sync 
> > it?
> > 
> > If the file exists under the same name, it doesn't need to be checked at
> > all --- it is the same.  We don't want to check the file modification
> > time because it will probably be different because of replay delay or
> > clock drift.  We could use checksums, but there is no need since there 
> > is
> > no way the file contents could be different.
> > 
> >  
> >  
> > So you're saying that if the file exists (has the same name) on the standby 
> > (in
> > old_pgdata), and has the same size, then you're safe that it contains the 
> > same
> > data, hence --size-only?
> > Does this apply when not using --link mode for pg_upgrade?
> 
> Well, it is really true in every case.  For link mode, we have to use an
> rsync command that lists both the old and new clusters on the command
> line (since we need rsync to see those hard links to reproduce them). If
> we don't use --size-only, we are going to checksum check the _old_ data
> cluster.  The new cluster will be empty so we will copy all of that (no
> need for a checksum there since there are no files).  I think you need
> size-only even without link since that old cluster is going to be listed
> for rsync.

The above is correct- the old and new are required to get rsync to build
the same hard-link tree on the replica as exists on the primary, post
pg_upgrade.  Also, if --link isn't used with pg_upgrade then you'd want
--size-only with the existing command, or you'd probably end up copying
both the old and new clusters, and that'd be a lot of additional work.

Other points of clarification here:

Rsync, by default, does *not* use checksums.

The data files on the replica and the data files on the primary do *not*
match bit-for-bit, --checksum will never work (or, rather, it'll always
end up copying everything except in extremely rare circumstances that
would be pure luck).  What matters, however, is that the differences
aren't interesting to PG, any more than they are when it comes to doing
WAL replay.

If --link is *not* used with pg_upgrade, then there's not much point in
using this rsync as it shouldn't be particularly different from just
doing the typical:

rsync --archive new_pgdata remote_dir

post pg_upgrade, though of course that would incur a large amount of
data transfer across the network.

I wouldn't suggest trying to copy the old data dir on the remote to the
new data dir and then doing an rsync- that way lies madness as you would
be copying over catalog files from the old data dir and those could end
up having the same size as the same catalog files post-upgrade on the
primary and then you end up with some odd mix between the two.  That's
bad.  You'd have to identify the catalog files independently and be sure
to exclude them from the copy and that isn't something I would encourage
anyone to try and do.  The rsync --hard-link method with pg_upgrade
--link will get this correct, to be clear.

> Now, what you could do, if you are _not_ using link mode, is to rsync
> only the new cluster, but the instructions we give work the same for
> link and non-link mode and produce the same results in the same time
> even if we had a non-link-mode example, so it seems we might as well
> just give one set of instructions.

For my 2c, at least, I would have specifically pointed out that this
method is really only for when you're using --link mode with pg_upgrade.
If you're not using --link then there's other ways to do this which
would be more efficient than an rsync and which could be done after the
primary is back online (such as doing a backup/restore to rebuild the
replica, or similar).

> > > There are many ways to do/configure things it seems, resulting in many
> > ifs and
> > > buts which makes section 10 rather confusing. I really think a 
> > complete
> > > example, with absolute paths, would be clarifying.
> > 
> > You mean a full rsync command, e.g.:
> > 
> >   rsync --archive --delete --hard-links --size-only \
> >       /opt/PostgreSQL/9.5 /opt/PostgreSQL/9.6 standby:/opt/PostgreSQL
> > 
> > Does that help?
> > 
> >  
> >  
> > It seems some non-obvious assumptions (to me at least) are made here.
> > This example seems only valid when using pg_upgrade --link, correct? If so 
> > it
> > would be clearer to the reader if explicitly 

Re: [HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-09-12 Thread Andreas Joseph Krogh
On Wednesday 13 September 2017 at 01:00:20, Bruce Momjian wrote:
On Wed, Sep 13, 2017 at 12:40:32AM +0200, Andreas Joseph Krogh wrote:
 > On Tuesday 12 September 2017 at 23:52:02, Bruce Momjian <
 > br...@momjian.us> wrote:
 >
 >     On Tue, Sep 12, 2017 at 08:59:05PM +0200, Andreas Joseph Krogh wrote:
 >     >     Improvements?
 >     >
 >     > Thanks, that certainly improves things.
 >     > But; I still find the rsync-command in f) confusing;
 >     > 1. Why --size-only? From rsync manual: "skip files that match in 
size",
 >     is this
 >     > safe??
 >
 >
 >     > 2. Why is old_pgdata in the rsync-command, why is it needed to sync 
it?
 >
 >     If the file exists under the same name, it doesn't need to be checked at
 >     all --- it is the same.  We don't want to check the file modification
 >     time because it will probably be different because of replay delay or
 >     clock drift.  We could use checksums, but there is no need since there 
is
 >     no way the file contents could be different.
 >
 >  
 >  
 > So you're saying that if the file exists (has the same name) on the standby 
(in
 > old_pgdata), and has the same size, then you're safe that it contains the 
same
 > data, hence --size-only?
 > Does this apply when not using --link mode for pg_upgrade?

 Well, it is really true in every case.  For link mode, we have to use an
 rsync command that lists both the old and new clusters on the command
 line (since we need rsync to see those hard links to reproduce them). If
 we don't use --size-only, we are going to checksum check the _old_ data
 cluster.  The new cluster will be empty so we will copy all of that (no
 need for a checksum there since there are no files).  I think you need
 size-only even without link since that old cluster is going to be listed
 for rsync.

 Now, what you could do, if you are _not_ using link mode, is to rsync
 only the new cluster, but the instructions we give work the same for
 link and non-link mode and produce the same results in the same time
 even if we had a non-link-mode example, so it seems we might as well
 just give one set of instructions.

 >     > There are many ways to do/configure things it seems, resulting in many
 >     ifs and
 >     > buts which makes section 10 rather confusing. I really think a 
complete
 >     > example, with absolute paths, would be clarifying.
 >
 >     You mean a full rsync command, e.g.:
 >
 >       rsync --archive --delete --hard-links --size-only \
 >           /opt/PostgreSQL/9.5 /opt/PostgreSQL/9.6 standby:/opt/PostgreSQL
 >
 >     Does that help?
 >
 >  
 >  
 > It seems some non-obvious assumptions (to me at least) are made here.
 > This example seems only valid when using pg_upgrade --link, correct? If so 
it
 > would be clearer to the reader if explicitly stated.

 Well, as I stated above, --hard-links is only going to recreate hard
 links on the standby that exist on the primary, and if you didn't use
 pg_upgrade's --link mode, there will be none, so it is harmless if
 pg_upgrade --link mode was not used.

 > 1. Why do you have to rsync both /opt/PostgreSQL/9.5 AND 
/opt/PostgreSQL/9.6,
 > wouldn't /opt/PostgreSQL/9.6 suffice? Or does this assume "pg_upgrade 
--link"
 > AND "rsync --hard-links" and therefore it somewhat needs to transfer less 
data?

 As I stated above, rsync has to see _both_ hard links on the primary to
 recreate them on the standby.  I thought the doc patch was clear on
 that, but obviously not.  :-(  Suggestions?  (Yes, I admit that using
 rsync in this way is super-crafty, and I would _love_ to take credit for
 the idea, but I think the award goes to Stephen Frost.)

 > 2. What would the rsync command look like if pg_upgrade wasn't issued with
 > --link?

 It would look like:

   rsync --archive /opt/PostgreSQL/9.6 standby:/opt/PostgreSQL/9.6

 but effectively there isn't anything _in_ standby:/opt/PostgreSQL/9.6,
 so you are really just using rsync as cp, and frankly I have found 'cp'
 is faster than rsync when nothing exists on the other side so it really
 becomes "just copy the cluster when the server is down", but I don't
 think people even need instructions for that.

 Maybe we should recommend rsync only for pg_upgrade --link mode?

 > 3. What if the directory-layout isn't the same on primary and standby, ie.
 > tablespaces are located differently?

 The way we reconfigured the location of tablespaces in PG 9.0 is that
 each major version of Postgres places its tablespace in a subdirectory
 of the tablespace directory, so there is tbldir/9.5 and tbldir/9.6.  If
 your tbldir is different on the primary and standby, rsync will still
 work.  Everything _under_ the standby dir must be laid out the same, but
 the directories above it can be different.
 
 
(I know this isn't exactly -hackers food, but it seems natural to end this 
thread here)
 
Ok, thanks.
It is clearer what happens now that you've explained that there's a clever 

Re: [HACKERS] PATCH: psql show index with type info

2017-09-12 Thread Daniel Gustafsson
> On 18 Apr 2017, at 05:13, Amos Bird  wrote:
> 
> Ah, thanks for the suggestions. I'll revise this patch soon :)

Have you had a chance to revise the patch to address the review comments such
that the patch can move forward during this Commitfest?

cheers ./daniel


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Other formats in pset like markdown, rst, mediawiki

2017-09-12 Thread Daniel Gustafsson
> On 08 May 2017, at 12:02, Fabien COELHO  
> wrote:
> 
> Hello Jan,
> 
> Please give a number to submitted patches. I think that this was v3.
> 
> The patch does NOT fix various issues I pointed out in my previous review:
> 
> - tabs introduced in "doc/src/sgml/ref/psql-ref.sgml"
> - too long help line in "src/bin/psql/help.c"
> - spurious space after a comma in "src/fe_utils/print.c"
>   and possibly elsewhere.
> 
> On Sun, 23 Apr 2017, Jan Michálek wrote:
> 
 Markdown include characters/sequences which are interpreted as markers:
 _Italic_, **Bold**, *** => horizontal rules, > block quote... `inline
 code`... If they are interpreted within a table cell then probably they
 should be escaped somehow.
>> 
>> I have treated "_*|<>"
> 
> Probably not enough, see below. Note the escaping chars should also be 
> escaped.
> 
>>> I'm able to sanitize characters, but complex sequences will be a problem. I
>>> will look into this, but I don't know if I'm able to do this.
> 
> I do not know whether only those are necessary. Have you checked? Guessing is 
> probably not the right approach.
> 
> 
> Concerning MARKDOWN, and according to the following source about github 
> markdown implementation:
> 
>   https://enterprise.github.com/downloads/en/markdown-cheatsheet.pdf
> 
> The following characters may need to be backslash escaped, although it does 
> not cover HTML stuff.
> 
>   \   backslash
>   `   backtick
>   *   asterisk
>   _   underscore
>   {}  curly braces
>   []  square brackets
>   ()  parentheses
>   #   hash mark
>   +   plus sign
>   -   minus sign (hyphen)
>   .   dot
>   !   exclamation
> 
> Another source https://genius.com/3057216 suggests (* # / ( ) [ ] < >),
> which should protect HTML.
> 
> However, the escaping seems to be the backslash character, NOT using html 
> encoding  as done in your version.
> 
> Where did you find the precise escaping rules for markdown? I do not think 
> that it should be invented...
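As a hedged illustration only (the exact character set would still have to be settled against a Markdown specification rather than invented), backslash-escaping the specials listed in that cheat sheet could be sketched in shell like this; this is not the patch's actual implementation:

```shell
# Hypothetical sketch: backslash-escape the Markdown specials from the
# cheat sheet above before emitting a table cell.
escape_md() {
  printf '%s' "$1" | sed 's/[][\\`*_{}()#+.!-]/\\&/g'
}
escape_md 'cell_with_*stars*_and_[brackets]'
```

The bracket expression lists `]` first and `-` last so both are taken literally, and the replacement prefixes each match with a backslash.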
> 
> 
> I have looked at RST, according to this reference:
> 
>   
> http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#grid-tables
> 
> The good news is that you do not need to handle a special | case because you 
> would only produce clean tables.
> 
> I've tested UTF-8 with plane 1 (你好!) and plane 2 (!) and the alignment seems
> to have worked well, incredible!
> 
>>> My main interest in this was in RST. I'm using Markdown only in GitHub
>>> issues and my knowledge of MD is poor.
> 
> Then maybe only do RST?!
> 
> It looks much simpler anyway, and if you do MARKDOWN the support needs to be 
> clean.
> 
> About the code:
> 
> I'm still at odds with the code which needs to test for markdown to call for 
> different functions in multiple places. If you keep md and in order to avoid 
> that, I would suggest to extend the pg_wcs* functions with a list of 
> caracters which may have different sizes with additionnal args, say:
> 
>  pg_wcssize(// same args, plus:
> char * escaped_chars, // will require escaping
> int escape_len, // how many chars added when escaping
> int nllen // len of newline if substituted
> );
> 
> So that pg_wcssize(..., "\r", 1, 1) would behave as before (\n and \t are 
> rather special cases), and the various constants could be held in the format 
> description so the whole thing would be parametric.
> 
> Same approach with format.
> 
> That would allow to simplify the code significantly and to share it between 
> MARKDOWN and others. Also, the multiple "else if" list would be simplified by 
> using strchr or the escaped_chars string.

This patch was moved into the current Commitfest marked “Waiting for author”
with the above review.  Have you had a chance to work on it addressing the
review comments such that we can expect a new version within this CF?

cheers ./daniel



Re: [HACKERS] Patches that don't apply or don't compile: 2017-09-12

2017-09-12 Thread Michael Paquier
On Wed, Sep 13, 2017 at 7:39 AM, Daniel Gustafsson  wrote:
>> On 12 Sep 2017, at 23:54, Tomas Vondra  wrote:
>> With all due respect, it's hard not to see this as a disruption of the
>> current CF. I agree automating the patch processing is a worthwhile
>> goal, but we're not there yet and it seems somewhat premature.
>>
>> Let me explain why I think so:
>>
>> (1) You just changed the status of 10-15% open patches. I'd expect
>> things like this to be consulted with the CF manager, yet I don't see
>> any comments from Daniel. Considering he's been at the Oslo PUG meetup
>> today, I doubt he was watching hackers very closely.
>
> Correct, I’ve been travelling and running a meetup today so had missed this on
> -hackers.

FWIW, I tend to think that the status of a patch ought to be changed
either after a direct look at the patch itself or by the author,
depending on how the discussion goes, not by automatic processing. Or
at least allow more delay for people to object, as some patches can be
applied but do not apply automatically because of naming issues.
There are also people sending test patches that make Postgres fail on
purpose; for example, see the replication slot issue where a past
segment could not be retained because the beginning of a record was
not tracked correctly on the receiver side. Such patches can make the
recovery tests fail, but we want them to fail so as to easily
reproduce the wanted failure.

>> (2) You gave everyone about 4 hours to object, ending 3PM UTC, which
>> excludes about one whole hemisphere where it's either too early or too
>> late for people to respond. I'd say waiting for >24 hours would be more
>> appropriate.
>
> Agreed.

Definitely. Any batch update has to involve at least the CFM's
authorization -- in this case, Daniel's.
-- 
Michael




Re: [HACKERS] [WIP] RE: DECLARE STATEMENT setting up a connection in ECPG

2017-09-12 Thread Daniel Gustafsson
> On 30 Mar 2017, at 09:11, Ideriha, Takeshi  
> wrote:
> 
> Thank you for prompt check!
>  
> >As per above test steps, it doesn't produce the results and doesn't
> >generate the error also. I feel this needs to be fixed.
>  
> >As we are at the end of commitfest, it is better you can move it
> >to next one commitfest and provide an updated patch to solve the
> >above problem.
>  
> I totally agree.
> I moved it to the next CF as Waiting on Author.

This patch was moved to the current commitfest (and to the previous one from
the 201701 CF).  Have you had the chance to address the review comments such
that there is an update expected within this CF?

cheers ./daniel



Re: [HACKERS] More flexible LDAP auth search filters?

2017-09-12 Thread Thomas Munro
On Wed, Sep 13, 2017 at 1:55 AM, Peter Eisentraut
 wrote:
> On 9/11/17 23:58, Thomas Munro wrote:
>> Sounds good.  Here it is with $username.  It's nice not to have to
>> escape any characters in URLs.  I suppose more keywords could be added
>> in follow-up patches if someone thinks that would be useful
>> ($hostname, $dbname, ...?).  I got sick of that buffer sizing code and
>> changed it to use StringInfo.  Here also are your test patches tweaked
>> slightly: 0002 just adds FreeBSD support as per previous fixup and
>> 0003 changes to $username.
>
> Committed the feature patch.

Thanks!

> Any further thoughts on the test suite?  Otherwise I'll commit it as we
> have it, for manual use.

I wonder if there is a reasonable way to indicate or determine whether
you have slapd installed so that check-world could run this test...
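For readers following along, a pg_hba.conf line using the committed $username substitution in a search filter might look like the following; the host name, base DN, and attribute choices are invented for illustration:

```
host all all 0.0.0.0/0 ldap ldapserver=ldap.example.net ldapbasedn="dc=example,dc=net" ldapsearchfilter="(|(uid=$username)(mail=$username))"
```

With a filter like this, the search+bind phase matches the presented PostgreSQL user name against either the uid or the mail attribute of the LDAP entry.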

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [JDBC] [HACKERS] Channel binding support for SCRAM-SHA-256

2017-09-12 Thread Michael Paquier
On Tue, Sep 12, 2017 at 11:38 PM, Peter Eisentraut
 wrote:
> It seems we should start by sorting out the mechanism by which the
> client can control what authentication mechanisms it accepts.  In your
> patch set you introduce a connection parameter saslname.  I think we
> should expand that to non-SASL mechanisms and have it be some kind of
> whitelist or blacklist.  It might be reasonable for a client to require
> "gssapi" or "cert" for example or do an exclusion like "!password !md5
> !ldap".
>
> Thoughts?

That looks like a sensible approach to begin with: there have been
complaints that a client can be tricked into using MD5 by a rogue
server even if it was willing to use SCRAM. So what about a parameter
called pgauthfilter, which takes a comma-separated list of keywords?
As you say, using an exclamation point to negate an authentication
method is fine with me. For SCRAM, we could just use "scram-sha-256"
as the keyword.

Once channel binding is involved, though, this needs to be extended
and given careful thought:
* "scram-sha-256" means that the version without channel binding is
accepted. "!scram-sha-256" means that SCRAM without channel binding is
refused.
* "scram-sha-256-plus" means that all channel bindings are accepted.
"!scram-sha-256-plus" means that no channel binding is accepted.
Beyond that, there is filtering per channel binding name. Do we want a
separate parameter, or just filter with longer names like
"scram-sha-256-plus-tls-unique" and
"scram-sha-256-plus-tls-server-end-point"? The last one gets
particularly long, which does not help users avoid typos :)
-- 
Michael




Re: [HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-09-12 Thread Bruce Momjian
On Wed, Sep 13, 2017 at 12:40:32AM +0200, Andreas Joseph Krogh wrote:
> On Tuesday 12 September 2017 at 23:52:02, Bruce Momjian <
> br...@momjian.us> wrote:
> 
> On Tue, Sep 12, 2017 at 08:59:05PM +0200, Andreas Joseph Krogh wrote:
> >     Improvements?
> >
> > Thanks, that certainly improves things.
> > But; I still find the rsync-command in f) confusing;
> > 1. Why --size-only? From rsync manual: "skip files that match in size",
> is this
> > safe??
> 
> 
> > 2. Why is old_pgdata in the rsync-command, why is it needed to sync it?
> 
> If the file exists under the same name, it doesn't need to be checked at
> all --- it is the same.  We don't want to check the file modification
> time because it will probably be different because of replay delay or
> clock drift.  We could use checksums, but there is no need since there is
> no way the file contents could be different.
> 
>  
>  
> So you're saying that if the file exists (has the same name) on the standby 
> (in
> old_pgdata), and has the same size, then you're safe that it contains the same
> data, hence --size-only?
> Does this apply when not using --link mode for pg_upgrade?

Well, it is really true in every case.  For link mode, we have to use an
rsync command that lists both the old and new clusters on the command
line (since we need rsync to see those hard links to reproduce them). If
we don't use --size-only, we are going to checksum check the _old_ data
cluster.  The new cluster will be empty so we will copy all of that (no
need for a checksum there since there are no files).  I think you need
size-only even without link since that old cluster is going to be listed
for rsync.

Now, what you could do, if you are _not_ using link mode, is to rsync
only the new cluster, but the instructions we give work the same for
link and non-link mode and produce the same results in the same time
even if we had a non-link-mode example, so it seems we might as well
just give one set of instructions.

> > There are many ways to do/configure things it seems, resulting in many
> ifs and
> > buts which makes section 10 rather confusing. I really think a complete
> > example, with absolute paths, would be clarifying.
> 
> You mean a full rsync command, e.g.:
> 
>   rsync --archive --delete --hard-links --size-only \
>       /opt/PostgreSQL/9.5 /opt/PostgreSQL/9.6 standby:/opt/PostgreSQL
> 
> Does that help?
> 
>  
>  
> It seems some non-obvious assumptions (to me at least) are made here.
> This example seems only valid when using pg_upgrade --link, correct? If so it
> would be clearer to the reader if explicitly stated.

Well, as I stated above, --hard-links is only going to recreate hard
links on the standby that exist on the primary, and if you didn't use
pg_upgrade's --link mode, there will be none, so it is harmless if
pg_upgrade --link mode was not used.

> 1. Why do you have to rsync both /opt/PostgreSQL/9.5 AND /opt/PostgreSQL/9.6,
> wouldn't /opt/PostgreSQL/9.6 suffice? Or does this assume "pg_upgrade --link"
> AND "rsync --hard-links" and therefore it somewhat needs to transfer less 
> data?

As I stated above, rsync has to see _both_ hard links on the primary to
recreate them on the standby.  I thought the doc patch was clear on
that, but obviously not.  :-(  Suggestions?  (Yes, I admit that using
rsync in this way is super-crafty, and I would _love_ to take credit for
the idea, but I think the award goes to Stephen Frost.)

> 2. What would the rsync command look like if pg_upgrade wasn't issued with
> --link?

It would look like:

  rsync --archive /opt/PostgreSQL/9.6 standby:/opt/PostgreSQL/9.6

but effectively there isn't anything _in_ standby:/opt/PostgreSQL/9.6,
so you are really just using rsync as cp, and frankly I have found 'cp'
is faster than rsync when nothing exists on the other side so it really
becomes "just copy the cluster when the server is down", but I don't
think people even need instructions for that.

Maybe we should recommend rsync only for pg_upgrade --link mode?

> 3. What if the directory-layout isn't the same on primary and standby, ie.
> tablespaces are located differently?

The way we reconfigured the location of tablespaces in PG 9.0 is that
each major version of Postgres places its tablespace in a subdirectory
of the tablespace directory, so there is tbldir/9.5 and tbldir/9.6.  If
your tbldir is different on the primary and standby, rsync will  still
work.  Everything _under_ the standby dir must be laid out the same, but
the directories above it can be different.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: [HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-09-12 Thread Andreas Joseph Krogh
På tirsdag 12. september 2017 kl. 23:52:02, skrev Bruce Momjian <
br...@momjian.us >:
On Tue, Sep 12, 2017 at 08:59:05PM +0200, Andreas Joseph Krogh wrote:
 >     Improvements?
 >
 > Thanks, that certainly improves things.
 > But; I still find the rsync-command in f) confusing;
 > 1. Why --size-only? From rsync manual: "skip files that match in size", is 
this
 > safe??


 > 2. Why is old_pgdata in the rsync-command, why is it needed to sync it?

 If the file exists under the same name, it doesn't need to be checked at
 all --- it is the same.  We don't want to check the file modification
 time because it will probably be different because of replay delay or
 clock drift.  We could use checksums, but there is no need since there is
 no way the file contents could be different.
 
 
So you're saying that if the file exists (has the same name) on the standby 
(in old_pgdata), and has the same size, then you're safe that it contains the 
same data, hence --size-only?
Does this apply when not using --link mode for pg_upgrade?
 
 
> There are many ways to do/configure things it seems, resulting in many ifs 
and
 > buts which makes section 10 rather confusing. I really think a complete
 > example, with absolute paths, would be clarifying.

 You mean a full rsync command, e.g.:

   rsync --archive --delete --hard-links --size-only \
       /opt/PostgreSQL/9.5 /opt/PostgreSQL/9.6 standby:/opt/PostgreSQL

 Does that help?
 
 
It seems some non-obvious assumptions (to me at least) are made here.
This example seems only valid when using pg_upgrade --link, correct? If so it 
would be clearer to the reader if explicitly stated.
 
1. Why do you have to rsync both /opt/PostgreSQL/9.5 AND /opt/PostgreSQL/9.6, 
wouldn't /opt/PostgreSQL/9.6 suffice? Or does this assume "pg_upgrade --link" 
AND "rsync --hard-links" and therefore it somewhat needs to transfer less data?
2. What would the rsync command look like if pg_upgrade wasn't issued with 
--link?
3. What if the directory-layout isn't the same on primary and standby, ie. 
tablespaces are located differently?
 
Thanks.
 
--
 Andreas Joseph Krogh
 




Re: [HACKERS] Patches that don't apply or don't compile: 2017-09-12

2017-09-12 Thread Daniel Gustafsson
> On 12 Sep 2017, at 23:54, Tomas Vondra  wrote:
> 
> On 09/12/2017 04:14 PM, Aleksander Alekseev wrote:
>> Hello, hackers!
>> 
>> Thanks to the work of Thomas Munro now we have a CI for the patches on
>> the commitfest [1]. Naturally there is still room for improvement, but
>> in any case it's much, much better than nothing.
>> 
>> After a short discussion [2] we agreed (or at least no one objected)
>> to determine the patches that don't apply / don't compile / don't
>> pass regression tests and have "Needs Review" status, change the
>> status of these patches to "Waiting on Author" and write a report
>> (this one) with a CC to the authors. As all we know, we are short on
>> reviewers and this action will save them a lot of time. Here [3] you
>> can find a script I've been using to find such patches.
>> 
>> I rechecked the list manually and did my best to exclude the patches 
>> that were updated recently or that depend on other patches. However 
>> there is still a chance that your patch got to the list by mistake.
>> In this case please let me know.
> 
> With all due respect, it's hard not to see this as a disruption of the
> current CF. I agree automating the patch processing is a worthwhile
> goal, but we're not there yet and it seems somewhat premature.
> 
> Let me explain why I think so:
> 
> (1) You just changed the status of 10-15% open patches. I'd expect
> things like this to be consulted with the CF manager, yet I don't see
> any comments from Daniel. Considering he's been at the Oslo PUG meetup
> today, I doubt he was watching hackers very closely.

Correct, I’ve been travelling and running a meetup today so had missed this on
-hackers.

> (2) You gave everyone about 4 hours to object, ending 3PM UTC, which
> excludes about one whole hemisphere where it's either too early or too
> late for people to respond. I'd say waiting for >24 hours would be more
> appropriate.

Agreed.

> I object to changing the patch status merely based on the script output.
> It's a nice goal, but we need to do the legwork first, otherwise it'll
> be just annoying and disrupting.

I too fear that automating the state change will move patches away from “Needs
review” in too many cases unless there is a manual inspection step.  Colliding on
Oids in pg_proc comes to mind as a case where the patch won’t build, but the
reviewer can trivially fix that locally and keep reviewing.

> I suggest we inspect the reported patches manually, investigate whether
> the failures are legitimate or not, and eliminate as many false
> positives as possible. Once we are happy with the accuracy, we can
> enable it again.

This seems to summarize the sentiment in the other thread; this is absolutely a
step in the right direction, we just need to tweak it with human knowledge
before it can be made fully automatic to avoid false positives.  The last thing
we want is for the community to consider CF status changes/updates to be crying
wolf, there are few enough reviewers as there is.

cheers ./daniel



Re: [HACKERS] Renaming PG_GETARG functions (was Re: PG_GETARG_GISTENTRY?)

2017-09-12 Thread Mark Dilger

> On Sep 12, 2017, at 1:07 PM, Tom Lane  wrote:
> 
> [ changing subject line to possibly draw more attention ]
> 
> Mark Dilger  writes:
>>> On Apr 5, 2017, at 9:23 AM, Tom Lane  wrote:
>>> In short, if you are supposed to write
>>> FOO  *val = PG_GETARG_FOO(n);
>>> then the macro designer blew it, because the name implies that it
>>> returns FOO, not pointer to FOO.  This should be
>>> FOO  *val = PG_GETARG_FOO_P(n);
> 
>> I have written a patch to fix these macro definitions across src/ and
>> contrib/.
> 

Thanks, Tom, for reviewing my patch.






Re: [HACKERS] Automatic testing of patches in commit fest

2017-09-12 Thread Thomas Munro
On Tue, Sep 12, 2017 at 12:45 AM, Tomas Vondra
 wrote:
> That won't work until (2) is reliable enough. There are patches (for
> example my "multivariate MCV lists and histograms") which fails to apply
> only because the tool picks the wrong patch. Possibly because it does
> not recognize compressed patches, or something.

FWIW I told it how to handle your .patch.gz files and Alexander
Lakhin's .tar.bz2 files.  Your patch still didn't apply anyway due to
bitrot, but I see you've just posted a new one so it should hopefully
turn green soon.  (It can take a while because it rotates through the
submissions at a rate of one submission every 5 minutes after a new
commit to master is detected, since I don't want to get in trouble for
generating excessive load against the Commitfest, Github or (mainly)
Travis CI.  That's probably too cautious and over time we can revise
it.)

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Automatic testing of patches in commit fest

2017-09-12 Thread Thomas Munro
On Wed, Sep 13, 2017 at 2:34 AM, Alvaro Herrera  wrote:
> Tom Lane wrote:
>> Aleksander Alekseev  writes:
>> > I've ended up with this script [1]. It just generates a list of patches
>> > that are in "Needs Review" status but don't apply or don't compile. Here
>> > is the current list:
>>
>> > === Apply Failed: 29 ===
>> > https://commitfest.postgresql.org/14/1235/ (Support arrays over domain 
>> > types)
>>
>> Can you clarify what went wrong for you on that one?  I went to rebase it,
>> but I end up with the identical patch except for a few line-numbering
>> variations.
>
> I think "git apply" refuses to apply a patch if it doesn't apply
> exactly.  So you could use "git apply -3" (which merges) or just plain
> old "patch" and the patch would work fine.
>
> If the criteria is that strict, I think we should relax it a bit to
> avoid punting patches for pointless reasons.  IOW I think we should at
> least try "git apply -3".

The cfbot is not using git apply, it's using plain old GNU patch
invoked with "patch -p1".  From http://commitfest.cputube.org/ if you
click the "apply|failing" badge you can see the log from the failed
apply attempt.  It says:

== Fetched patches from message ID 3881.1502471872%40sss.pgh.pa.us
== Applying on top of commit 2d4a614e1ec34a746aca43d6a02aa3344dcf5fd4
== Applying patch 01-rationalize-coercion-APIs.patch...
== Applying patch 02-reimplement-ArrayCoerceExpr.patch...
1 out of 1 hunk FAILED -- saving rejects to file
src/pl/plpgsql/src/pl_exec.c.rej

It seems to be a legitimate complaint.  The rejected hunk is trying to
replace this line:

!   return exec_simple_check_node((Node *) ((ArrayCoerceExpr *) node)->arg);

But you removed exec_simple_check_node in
00418c61244138bd8ac2de58076a1d0dd4f539f3, so this 02 patch needs to be
rebased.
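The difference in strictness is easy to reproduce in a scratch repository (illustrative only; the file and patch names are made up): git apply rejects a hunk whose context has drifted, while GNU patch applies the same hunk with fuzz.

```shell
# Illustrative only: git apply is stricter than GNU patch about context.
set -eu
repo=$(mktemp -d); cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name you

printf 'aaa\nbbb\nccc\nddd\neee\n' > f
git add f && git commit -qm base

printf 'aaa\nbbb\nCCC\nddd\neee\n' > f   # the change we want to ship
git diff > change.diff
git checkout -q -- f

printf 'AAA\nbbb\nccc\nddd\neee\n' > f   # unrelated drift in a context line
git commit -qam drift

git apply change.diff || echo 'git apply: rejected'  # strict match fails
patch -p1 < change.diff                              # succeeds, with fuzz
grep -q CCC f && echo 'patch applied the hunk'
```

git apply -3 should also succeed here, by falling back to a three-way merge against the blobs recorded in the diff's index line, which is what Alvaro suggests trying.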

> Also, at this point this should surely be just an experiment.

+1

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Patches that don't apply or don't compile: 2017-09-12

2017-09-12 Thread Tomas Vondra
On 09/12/2017 04:14 PM, Aleksander Alekseev wrote:
> Hello, hackers!
> 
> Thanks to the work of Thomas Munro now we have a CI for the patches on
> the commitfest [1]. Naturally there is still room for improvement, but
> in any case it's much, much better than nothing.
> 
> After a short discussion [2] we agreed (or at least no one objected)
> to determine the patches that don't apply / don't compile / don't
> pass regression tests and have "Needs Review" status, change the
> status of these patches to "Waiting on Author" and write a report
> (this one) with a CC to the authors. As all we know, we are short on
> reviewers and this action will save them a lot of time. Here [3] you
> can find a script I've been using to find such patches.
> 
> I rechecked the list manually and did my best to exclude the patches 
> that were updated recently or that depend on other patches. However 
> there is still a chance that your patch got to the list by mistake.
> In this case please let me know.

With all due respect, it's hard not to see this as a disruption of the
current CF. I agree automating the patch processing is a worthwhile
goal, but we're not there yet and it seems somewhat premature.

Let me explain why I think so:

(1) You just changed the status of 10-15% open patches. I'd expect
things like this to be consulted with the CF manager, yet I don't see
any comments from Daniel. Considering he's been at the Oslo PUG meetup
today, I doubt he was watching hackers very closely.

(2) You gave everyone about 4 hours to object, ending 3PM UTC, which
excludes about one whole hemisphere where it's either too early or too
late for people to respond. I'd say waiting for >24 hours would be more
appropriate.

(3) The claim that "no one objected" is somewhat misleading, I guess. I
myself objected to automating this yesterday, and AFAICS Thomas Munro
shares the opinion that we're not ready for automating it.

(4) You say you rechecked the list manually - can you elaborate what you
checked? Per reports from others, some patches seem to "not apply"
merely because "git apply" is quite strict. Have you actually tried
applying / compiling the patches yourself?

(5) I doubt "does not apply" is actually sufficient to move the patch to
"waiting on author". For example my statistics patch was failing to
apply merely due to 821fb8cdbf lightly touching the SGML docs, changing
"type" to "kind" on a few places. Does that mean the patch can't get any
reviews until the author fixes it? Hardly. But after switching it to
"waiting on author" that's exactly what's going to happen, as people are
mostly ignoring patches in that state.

(6) It's generally a good idea to send a message the patch thread
whenever the status is changed, otherwise the patch authors may not
notice the change for a long time. I don't see any such messages,
certainly not in "my" patch thread.

I object to changing the patch status merely based on the script output.
It's a nice goal, but we need to do the legwork first, otherwise it'll
be just annoying and disrupting.

I suggest we inspect the reported patches manually, investigate whether
the failures are legitimate or not, and eliminate as many false
positives as possible. Once we are happy with the accuracy, we can
enable it again.


kind regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-09-12 Thread Bruce Momjian
On Tue, Sep 12, 2017 at 08:59:05PM +0200, Andreas Joseph Krogh wrote:
> Improvements?
> 
> Thanks, that certainly improves things.
> But; I still find the rsync-command in f) confusing;
> 1. Why --size-only? From rsync manual: "skip files that match in size", is 
> this
> safe??


> 2. Why is old_pgdata in the rsync-command, why is it needed to sync it?

If the file exists under the same name, it doesn't need to be checked at
all --- it is the same.  We don't want to check the file modification
time because it will probably be different because of replay delay or
clock drift.  We could use checksums, but there is no need since there is
no way the file contents could be different.

> There are many ways to do/configure things it seems, resulting in many ifs and
> buts which makes section 10 rather confusing. I really think a complete
> example, with absolute paths, would be clarifying.

You mean a full rsync command, e.g.:

  rsync --archive --delete --hard-links --size-only \
  /opt/PostgreSQL/9.5 /opt/PostgreSQL/9.6 standby:/opt/PostgreSQL

Does that help?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [HACKERS] Faster methods for getting SPI results

2017-09-12 Thread Chapman Flack
On 09/12/17 17:00, Tom Lane wrote:

> I did not see any reason given in the thread why we should need that.
> If you want to accumulate tuples ten at a time before you do something
> with them, you can do that now, by calling ExecutorRun with count=10.

Ah, that sounds easy enough. I'll withdraw the more-complicated suggestion.

-Chap




Re: [HACKERS] generated columns

2017-09-12 Thread Serge Rielau

> On Sep 12, 2017, at 12:35 PM, Jaime Casanova  
> wrote:
> 
> also is interesting that in triggers, both before and after, the
> column has a null. that seems reasonable in a before trigger but not
> in an after trigger
Why is a NULL reasonable for before triggers?
If I create a table with a column with default and I omit that column on INSERT
Is the column value also NULL in the before trigger? (I hope not)

BTW, the original idea behind generated columns was to materialize them.
Reason being to avoid expensive computations of frequently used expressions 
(and to support indexing in the absence of indexes with expressions)
 
You may find the following amusing:
https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/expression_generated_columns?lang=en
 


Cheers
Serge Rielau
salesforce.com




Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2017-09-12 Thread Tomas Vondra
Attached is an updated version of the patch, dealing with fallout of
821fb8cdbf700a8aadbe12d5b46ca4e61be5a8a8 which touched the SGML
documentation for CREATE STATISTICS.

regards

On 09/07/2017 10:07 PM, Tomas Vondra wrote:
> Hi,
> 
> Attached is an updated version of the patch, fixing the issues reported
> by Adrien Nayrat, and also a bunch of issues pointed out by valgrind.
> 
> regards
> 

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


0001-multivariate-MCV-lists.patch.gz
Description: application/gzip


0002-multivariate-histograms.patch.gz
Description: application/gzip



Re: [HACKERS] Faster methods for getting SPI results

2017-09-12 Thread Tom Lane
Chapman Flack  writes:
> On 09/12/2017 03:41 PM, Tom Lane wrote:
>> So the conclusion at the end of the last commitfest was that this patch
>> should be marked Returned With Feedback, and no new work appears to have
>> been done on it since then.  Why is it in this fest at all?  There
>> certainly doesn't seem to be any reason to review it again.

> I'm not sure how to read the history of the CF entry. Could it
> have rolled over to 2017-09 by default if its status was simply
> never changed to Returned with Feedback as intended in the last
> one? The history doesn't seem to show anything since 2017-03-19.

Maybe, or whoever was closing out the last CF didn't notice Andres'
recommendation to mark it RWF.

> I would still advocate for a fast-callback/slow-callback distinction,
> as in
> https://www.postgresql.org/message-id/59813946.40508%40anastigmatix.net
> if that does not seem overcomplicated to more experienced hands.

I did not see any reason given in the thread why we should need that.
If you want to accumulate tuples ten at a time before you do something
with them, you can do that now, by calling ExecutorRun with count=10.
(plpgsql does something much like that IIRC.)  The only reason not to
just use count=1 is that ExecutorRun and ExecutePlan have accumulated
assorted startup/shutdown cruft on the assumption that their runtime
didn't particularly matter.  It still doesn't look that awful, but
it might be noticeable.

regards, tom lane




Re: [HACKERS] Patches that don't apply or don't compile: 2017-09-12

2017-09-12 Thread Thomas Munro
On Wed, Sep 13, 2017 at 2:55 AM, Andreas Karlsson  wrote:
> On 09/12/2017 04:14 PM, Aleksander Alekseev wrote:
>>
>> Title: Foreign Key Arrays
>> Author: Mark Rofail
>> URL:https://commitfest.postgresql.org/14/1252/
>
>
> I am currently reviewing this one and it applies, compiles, and passes the
> test suite. It could be the compilation warnings which makes the system
> think it failed, but I could not find the log of the failed build.

I guess you didn't run "make check-world", because it crashes in the
contrib regression tests:

https://travis-ci.org/postgresql-cfbot/postgresql/builds/274732512

Sorry that the build logs are a bit hard to find at the moment.
Starting from http://commitfest.cputube.org/ if you click the
"build|failing" badge you'll land at
https://travis-ci.org/postgresql-cfbot/postgresql/branches and then
you have to locate the right branch, in this case commitfest/14/1252,
and then click the latest build link which (in this case) looks like
"# 4603 failed".  Eventually I'll have time to figure out how to make
the "build|failing" badge take you there directly...   Eventually I'll
also teach it how to dump a backtrace out of gdb when the tests leave a
smouldering core.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Create replication slot in pg_basebackup if requested and not yet present

2017-09-12 Thread Michael Banck
Hi,

Am Dienstag, den 12.09.2017, 08:53 -0400 schrieb Peter Eisentraut:
> On 9/11/17 03:11, Michael Banck wrote:
> > > Is there a race condition here?  The slot is created after the checkpoint
> > > is completed.  But you have to start streaming from the LSN where the
> > > checkpoint started, so shouldn't the slot be created before the checkpoint
> > > is started?
> > 
> > So my patch only moves the slot creation slightly further forward,
> > AFAICT.
> > 
> > AIUI, wal streaming always begins at last checkpoint and from my tests
> > the restart_lsn of the created replication slot is also before that
> > checkpoint's lsn. However, I hope somebody more familiar with the
> > WAL/replication slot code could comment on that.  What I dropped in the
> > refactoring is the RESERVE_WAL that used to be there when the temporary
> > slot gets created, I have readded that now.
> 
> Maybe there is an argument to be made here about whether this is correct
> or not, but why bother and risk the fragility?  Why not create the
> replication slot first thing.  I would put it after the server version
> checks and before we write recovery.conf.

The replication slots are created via the replication protocol through
the second background connection that is used for WAL streaming in
StartLogStreamer().

By their nature temporary replication slots must be created by that WAL
streamer using them and cannot be created by the main connection which
initiates the snapshot (or else you get a "replication slot
"pg_basebackup_XXX" is active for PID XXX" error in the WAL streamer).
So ISTM we cannot rip out CreateReplicationSlot() (or the manual
CREATE_REPLICATION_SLOT that is currently in master) from
StartLogStreamer() at least for temporary slots.

We could split up the logic here and create the optional physical
replication slot in the main connection and the temporary one in the WAL
streamer connection, but this would keep any fragility around for
(likely more frequently used) temporary replication slots. It would make
the patch much smaller though if I revert touching temporary slots at
all.

The alternative would be to call StartLogStreamer() earlier, but it
requires xlogstart as argument so we cannot move its call before the
checkpoint is taken and xlogstart is determined, the earliest I managed
was when starttli is determined, which is just few instructions earlier
than now.

Thoughts?


Michael 

-- 
Michael Banck
Projektleiter / Senior Berater
Tel.: +49 2166 9901-171
Fax:  +49 2166 9901-100
Email: michael.ba...@credativ.de

credativ GmbH, HRB Mönchengladbach 12080
USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer




Re: [HACKERS] Faster methods for getting SPI results

2017-09-12 Thread Chapman Flack
On 09/12/2017 03:41 PM, Tom Lane wrote:
> So the conclusion at the end of the last commitfest was that this patch
> should be marked Returned With Feedback, and no new work appears to have
> been done on it since then.  Why is it in this fest at all?  There
> certainly doesn't seem to be any reason to review it again.

I'm not sure how to read the history of the CF entry. Could it
have rolled over to 2017-09 by default if its status was simply
never changed to Returned with Feedback as intended in the last
one? The history doesn't seem to show anything since 2017-03-19.

I would still advocate for a fast-callback/slow-callback distinction,
as in
https://www.postgresql.org/message-id/59813946.40508%40anastigmatix.net
if that does not seem overcomplicated to more experienced hands.

-Chap




[HACKERS] Renaming PG_GETARG functions (was Re: PG_GETARG_GISTENTRY?)

2017-09-12 Thread Tom Lane
[ changing subject line to possibly draw more attention ]

Mark Dilger  writes:
>> On Apr 5, 2017, at 9:23 AM, Tom Lane  wrote:
>> In short, if you are supposed to write
>>  FOO  *val = PG_GETARG_FOO(n);
>> then the macro designer blew it, because the name implies that it
>> returns FOO, not pointer to FOO.  This should be
>>  FOO  *val = PG_GETARG_FOO_P(n);

> I have written a patch to fix these macro definitions across src/ and
> contrib/.

So to summarize, this patch proposes to rename some DatumGetFoo,
PG_GETARG_FOO, and PG_RETURN_FOO macros for these datatypes:

NDBOX (contrib/cube)
HSTORE
LTREE and other contrib/ltree types

PG_GETARG_ANY_ARRAY (and there are some related macros it maybe should
have touched, like PG_RETURN_EXPANDED_ARRAY)

JSONB

RANGE

The contrib types don't seem like much of a problem, but I wonder
whether anyone feels that rationalizing the names for array, JSON,
or range-type macros will break too much code.

One option if we do feel that way is that we could provide the
old names as alternatives, thus not breaking external modules.
But that seems like it's sabotaging the basic goal of improving
consistency of naming.

If there are not objections, I plan to push forward with this.

regards, tom lane




Re: [HACKERS] domain type smashing is expensive

2017-09-12 Thread Robert Haas
On Tue, Sep 12, 2017 at 3:16 PM, Tom Lane  wrote:
> I'd say that what you're proposing is the exact opposite of attacking
> the problem at the root.

I agree.  But if we're going to install a cache here, on a
cycle-for-cycle basis, it's going to be hard to beat "caching" the
knowledge that OIDs under 10000 are not domains.  I don't find that to
be an optimal solution, but I don't find dumping a bunch of caching
code in there that involves writing more code to buy less performance
to be superior, either.  If we're going to install a point fix, I
think there's much to be said for installing one that works well.

If we want to revisit this more strategically, I think we should throw
the whole idea of having the executor compute slot descriptors from
the tlist out the window.  Every executor node is walking over a
linked list (uggh) of nodes and running not one but two fairly complex
functions (exprType, exprTypmod) on each one.  Then, each type OID has
to be looked up by TupleDescInitEntry to get
attlen/byval/align/storage/collation.  Now, suppose we instead had an
array of structures associated with each plan node, with each element
containing the type OID, typmod, and the attlen/byval/align/storage/
collation info.  Then we wouldn't need
syscache lookups to initialize the individual executor nodes or to
build the RowDescription message, because we'd already have all the
relevant bits in hand.  Plus iterating over an array would probably be
faster than iterating over a list.

The downside is that we'd still need the tlists for other reasons, so
plans would get bigger.  But I don't think that's a huge problem if it
makes them run faster.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] DROP SUBSCRIPTION hangs if sub is disabled in the same transaction

2017-09-12 Thread Tom Lane
Peter Eisentraut  writes:
> I think we are whacking things around a in circle now.  First we moved
> the worker killing to the end of the transaction to make subscription
> DDL transaction-capable.  Then we changed replication origin dropping to
> wait until the worker detaches.  If you do both of these things at once,
> you get this circular dependency.

> We can break this in any number of ways:

> - (your patch) Kill workers right away after ALTER SUBSCRIPTION DISABLE,
> thus breaking the appearance of transactional DDL somewhat.

> - Revert to the old behavior that the replication origin dropping fails
> if it is in use.  Then you would get an error instead of hanging.  But
> that was previously also reported as a bug.

> - Disallow DROP SUBSCRIPTION in a transaction under certain
> circumstances, for example if a transaction has previously manipulated
> the same subscription.

> - Have DROP SUBSCRIPTION attempt to kill workers if the subscription is
> disabled (and possibly, was changed in the same transaction), which
> would address this scenario very narrowly.

ISTM the second of those (refuse to drop an in-use subscription) is
by far the least surprising behavior.  However, I wonder if there aren't
race conditions involved here.  What if we haven't yet committed a
DROP SUBSCRIPTION, and some new worker starts up after we look for
workers?

If there aren't variants of that that will break all four options,
it's not very obvious why not.

regards, tom lane




Re: [HACKERS] Faster methods for getting SPI results

2017-09-12 Thread Tom Lane
So the conclusion at the end of the last commitfest was that this patch
should be marked Returned With Feedback, and no new work appears to have
been done on it since then.  Why is it in this fest at all?  There
certainly doesn't seem to be any reason to review it again.

regards, tom lane




Re: [HACKERS] Re: DROP SUBSCRIPTION hangs if sub is disabled in the same transaction

2017-09-12 Thread Peter Eisentraut
On 9/11/17 14:26, Peter Eisentraut wrote:
> On 9/10/17 12:14, Noah Misch wrote:
>> This PostgreSQL 10 open item is past due for your status update.  Kindly send
>> a status update within 24 hours, and include a date for your subsequent
>> status update.  Refer to the policy on open item ownership:
>> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
> 
> I'm looking into this now and will report tomorrow.

I need some community feedback on the possible solutions.  I will wait
until Thursday.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] DROP SUBSCRIPTION hangs if sub is disabled in the same transaction

2017-09-12 Thread Peter Eisentraut
On 9/4/17 10:41, Arseny Sher wrote:
> node 2:
> create table t (i int);
> create subscription s CONNECTION 'port=5432' publication p;
> begin;
> alter subscription s disable ;
> alter subscription s set (slot_name = none);
> drop subscription s;
> end;
> 
> It hangs in replorigin_drop because we wait until ReplicationState is
> released. This should happen on exit of worker, but worker will not exit
> until transaction commit because he doesn't see that the sub was
> disabled.

I think we are whacking things around in a circle now.  First we moved
the worker killing to the end of the transaction to make subscription
DDL transaction-capable.  Then we changed replication origin dropping to
wait until the worker detaches.  If you do both of these things at once,
you get this circular dependency.

We can break this in any number of ways:

- (your patch) Kill workers right away after ALTER SUBSCRIPTION DISABLE,
thus breaking the appearance of transactional DDL somewhat.

- Revert to the old behavior that the replication origin dropping fails
if it is in use.  Then you would get an error instead of hanging.  But
that was previously also reported as a bug.

- Disallow DROP SUBSCRIPTION in a transaction under certain
circumstances, for example if a transaction has previously manipulated
the same subscription.

- Have DROP SUBSCRIPTION attempt to kill workers if the subscription is
disabled (and possibly, was changed in the same transaction), which
would address this scenario very narrowly.

Maybe there are more ideas.  But I think we have to pick a poison.

Thoughts?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] generated columns

2017-09-12 Thread Jaime Casanova
On 10 September 2017 at 00:08, Jaime Casanova
 wrote:
>
> During my own tests, though, i found some problems:
>

a few more tests:

create table t1 (
 id serial,
 height_cm int,
 height_in int generated always as (height_cm * 10)
) ;


"""
postgres=# alter table t1 alter height_cm type numeric;
ERROR:  unexpected object depending on column: table t1 column height_in
"""
Should I drop the column and recreate it after the fact? This seems
more annoying than the same problem with views (drop view & recreate),
especially after you implement STORED.


"""
postgres=# alter table t1 alter height_in type numeric;
ERROR:  found unexpected dependency type 'a'
"""
uh!?


It is also interesting that in triggers, both before and after, the
column has a null. That seems reasonable in a before trigger but not
in an after trigger.
"""
create function f_trg1() returns trigger as $$
  begin
 raise notice '%', new.height_in;
 return new;
  end
$$ language plpgsql;

create trigger trg1 before insert on t1
for each row execute procedure f_trg1();

postgres=# insert into t1 values(default, 100);
NOTICE:  <NULL>
INSERT 0 1

create trigger trg2 after insert on t1
for each row execute procedure f_trg1();

postgres=# insert into t1 values(default, 100);
NOTICE:  <NULL>
NOTICE:  <NULL>
INSERT 0 1
"""

the default value shouldn't be dropped.
"""
postgres=# alter table t1 alter height_in drop default;
ALTER TABLE
postgres=# \d t1
                      Table "public.t1"
  Column   |  Type   | Collation | Nullable |            Default
-----------+---------+-----------+----------+--------------------------------
 id        | integer |           | not null | nextval('t1_id_seq'::regclass)
 height_cm | integer |           |          |
 height_in | integer |           |          | generated always as ()
"""
-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] psql - add special variable to reflect the last query status

2017-09-12 Thread Fabien COELHO



Well, if we provided a different SQLSTATE for each qualitatively
different type of libpq error, that might well be useful enough to
justify some risk of application breakage.  But replacing a constant
string that we've had for ~15 years with a different constant string
isn't doing anything about the lack-of-information problem you're
complaining about.


True.  Well, the original point here was whether psql ought to be doing
something to mask libpq's (mis) behavior.  I'm inclined to think not:
if it doesn't get a SQLSTATE from the PGresult, it should just set the
sqlstate variables to empty strings.


See v9 attached.

--
Fabien.

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index a74caf8..b994fcd 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -3518,6 +3518,16 @@ bar
   
 
   
+   ERROR
+   
+
+ Whether the last query failed, as a boolean.
+ See also SQLSTATE.
+
+   
+  
+
+  
 FETCH_COUNT
 
 
@@ -3654,6 +3664,18 @@ bar
   
 
   
+   LAST_ERROR_SQLSTATE
+   LAST_ERROR_MESSAGE
+   
+
+ The error code and associated error message of the last
+ error, or "0" and empty strings if no error occurred
+ since the beginning of the script.
+
+   
+  
+
+  
   
ON_ERROR_ROLLBACK

@@ -3722,6 +3744,25 @@ bar
   
 
   
+   ROW_COUNT
+   
+
+ How many rows were returned or affected by the last query.
+
+   
+  
+
+  
+   SQLSTATE
+   
+
+ The error code associated with the last query, or
+ 0 if no error occurred.
+
+   
+  
+
+  
 QUIET
 
 
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index b997058..cc7e3aa 100644
--- a/src/bin/psql/common.c
+++ b/src/bin/psql/common.c
@@ -493,7 +493,6 @@ ResetCancelConn(void)
 #endif
 }
 
-
 /*
  * AcceptResult
  *
@@ -971,6 +970,45 @@ loop_exit:
 	return success;
 }
 
+/*
+ * Set special variables
+ * - ERROR: true/false, whether an error occurred
+ * - SQLSTATE: code of error, or "0", or ""
+ * - LAST_ERROR_SQLSTATE: same for last error
+ * - LAST_ERROR_MESSAGE: message of last error
+ * - ROW_COUNT: how many rows were returned or affected, or "0"
+ */
+static void
+SetResultVariables(PGresult *results, bool success)
+{
+	if (success)
+	{
+		char   *ntuples = PQcmdTuples(results);
+		SetVariable(pset.vars, "ERROR", "false");
+		SetVariable(pset.vars, "SQLSTATE", "0");
+		SetVariable(pset.vars, "ROW_COUNT", *ntuples ? ntuples : "0");
+	}
+	else
+	{
+		char 		   *code = PQresultErrorField(results, PG_DIAG_SQLSTATE);
+		char 		   *mesg = PQresultErrorField(results, PG_DIAG_MESSAGE_PRIMARY);
+
+		SetVariable(pset.vars, "ERROR", "true");
+
+		/*
+		 * if there is no code, use an empty string?
+		 * libpq may return such thing on internal errors
+		 * (lost connection, EOM).
+		 */
+		if (code == NULL)
+			code = "" ;
+
+		SetVariable(pset.vars, "SQLSTATE", code);
+		SetVariable(pset.vars, "LAST_ERROR_SQLSTATE", code);
+		SetVariable(pset.vars, "LAST_ERROR_MESSAGE", mesg ? mesg : "");
+		SetVariable(pset.vars, "ROW_COUNT", "0");
+	}
+}
 
 /*
  * ProcessResult: utility function for use by SendQuery() only
@@ -1107,6 +1145,8 @@ ProcessResult(PGresult **results)
 		first_cycle = false;
 	}
 
+	SetResultVariables(*results, success);
+
 	/* may need this to recover from conn loss during COPY */
 	if (!first_cycle && !CheckConnection())
 		return false;
@@ -1214,7 +1254,6 @@ PrintQueryResults(PGresult *results)
 	return success;
 }
 
-
 /*
  * SendQuery: send the query string to the backend
  * (and print out results)
@@ -1523,7 +1562,11 @@ DescribeQuery(const char *query, double *elapsed_msec)
 	 * good thing because libpq provides no easy way to do that.)
 	 */
 	results = PQprepare(pset.db, "", query, 0, NULL);
-	if (PQresultStatus(results) != PGRES_COMMAND_OK)
+	OK = PQresultStatus(results) == PGRES_COMMAND_OK;
+
+	SetResultVariables(results, OK);
+
+	if (!OK)
 	{
 		psql_error("%s", PQerrorMessage(pset.db));
 		ClearOrSaveResult(results);
diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
index 4d1c0ec..ae951f5 100644
--- a/src/bin/psql/help.c
+++ b/src/bin/psql/help.c
@@ -337,7 +337,7 @@ helpVariables(unsigned short int pager)
 	 * Windows builds currently print one more line than non-Windows builds.
 	 * Using the larger number is fine.
 	 */
-	output = PageOutput(147, pager ? &(pset.popt.topt) : NULL);
+	output = PageOutput(155, pager ? &(pset.popt.topt) : NULL);
 
 	fprintf(output, _("List of specially treated variables\n\n"));
 
@@ -360,6 +360,8 @@ helpVariables(unsigned short int pager)
 	  "if set to \"noexec\", just show them without execution\n"));
 	fprintf(output, _("  ENCODING\n"
 	  "current client character set encoding\n"));
+	fprintf(output, _("  ERROR\n"
+	

Re: [HACKERS] Automatic testing of patches in commit fest

2017-09-12 Thread Andrew Dunstan


On 09/12/2017 11:30 AM, Tom Lane wrote:
> Alvaro Herrera  writes:
>> Tom Lane wrote:
>>> Aleksander Alekseev  writes:
 === Apply Failed: 29 ===
 https://commitfest.postgresql.org/14/1235/ (Support arrays over domain 
 types)
>>> Can you clarify what went wrong for you on that one?  I went to rebase it,
>>> but I end up with the identical patch except for a few line-numbering
>>> variations.
>> I think "git apply" refuses to apply a patch if it doesn't apply
>> exactly.  So you could use "git apply -3" (which merges) or just plain
>> old "patch" and the patch would work fine.
>> If the criteria is that strict, I think we should relax it a bit to
>> avoid punting patches for pointless reasons.  IOW I think we should at
>> least try "git apply -3".
> FWIW, I always initially apply patches with good ol' patch(1).  So for
> me, whether that way works would be the most interesting thing.  Don't
> know about other committers' workflows.



Yeah, that's what I do, too.


>
>> Also, at this point this should surely be just an experiment.
> +1 ... seems like there's enough noise here that changing patch status
> based on the results might be premature.  Still, I applaud the effort.


I think a regular report of what doesn't apply and what doesn't build
will be very useful on its own, especially if there are links to the
failure reports. When we are satisfied that we're not getting
significant numbers of false negatives over a significant period we can
talk about automating CF state changes. I agree this is nice work.


cheers

andrew

-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] pgbench regression test failure

2017-09-12 Thread Tom Lane
Fabien COELHO  writes:
>> I have a serious, serious dislike for tests that seem to work until
>> they're run on a heavily loaded machine.

> I'm not that sure the error message was because of that.

No, this particular failure (probably) wasn't.  But now that I've realized
that this test case is timing-sensitive, I'm worried about what will
happen when it's run on a sufficiently slow or loaded machine.

>> I would not necessarily object to doing something in the code that
>> would guarantee that, though.

> Hmmm. Interesting point.

It could be as simple as putting the check-for-done at the bottom of the
loop not the top, perhaps.
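In miniature, that change is the difference between a while loop and a do/while: with the check-for-done at the bottom, each thread runs at least one transaction even if its time budget expired before it started. A sketch, not pgbench code; the countdown is a stand-in for the real timer:

```c
/* Stand-in for a pgbench thread's main loop.  time_left is a fake
 * countdown; the point is only the loop shape. */
int
run_until_done(int time_left)
{
	int		ntx = 0;

	do
	{
		ntx++;					/* execute one "transaction" */
		time_left--;
	} while (time_left > 0);	/* check-for-done at the bottom */

	return ntx;
}
```

Even `run_until_done(0)` returns 1, which is exactly the per-thread guarantee the test case would need.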

regards, tom lane




Re: [HACKERS] psql - add special variable to reflect the last query status

2017-09-12 Thread Tom Lane
Robert Haas  writes:
> Well, if we provided a different SQLSTATE for each qualitatively
> different type of libpq error, that might well be useful enough to
> justify some risk of application breakage.  But replacing a constant
> string that we've had for ~15 years with a different constant string
> isn't doing anything about the lack-of-information problem you're
> complaining about.

True.  Well, the original point here was whether psql ought to be doing
something to mask libpq's (mis) behavior.  I'm inclined to think not:
if it doesn't get a SQLSTATE from the PGresult, it should just set the
sqlstate variables to empty strings.

regards, tom lane




Re: [HACKERS] pgbench regression test failure

2017-09-12 Thread Fabien COELHO



I have a serious, serious dislike for tests that seem to work until
they're run on a heavily loaded machine.


I'm not that sure the error message was because of that. ISTM that it was 
rather finding 3 seconds in two because it started just at the right time, 
or maybe because of slowness induced by load and the order in which the 
different checks are performed.


So unless there is some reason why pgbench is *guaranteed* to run at 
least one transaction per thread, I'd rather the test not assume that.


Well, pgbench is for testing performance... so if the checks allow zero 
performance that's quite annoying as well:-) The tests are designed to 
require very low performance (eg there are a lot of -t 1 when only one 
transaction is enough to check a point), but maybe some test assume a 
minimal requirement, maybe 10 tps with 2 threads...



I would not necessarily object to doing something in the code that
would guarantee that, though.


Hmmm. Interesting point.

There could be a client-side synchronization barrier, eg something like 
"\sync :nclients/nthreads" could be easy enough to implement with pthread, 
and quite error prone to use, but probably that could be okay for 
validation purposes. Or maybe we could expose something at the SQL level, 
eg "SELECT synchro('synchroname', howmanyclientstowait);" which would be 
harder to implement server-side but possibly doable as well.


A simpler option may be to introduce a synchronization barrier at thread 
start, so that all threads start together and that would set the "zero" 
time. Not sure that would solve the potential issue you raise, although 
that would help.


Currently the statistics collection and outputs are performed by thread 0 
in addition to the client it runs, so that pgbench would work even if 
there are no threads, but it also means that under a heavy load some 
things may not be done on the target time but a little bit later, if some 
thread is stuck somewhere. Although the async protocol try to avoid that.


--
Fabien.




Re: [HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-09-12 Thread Andreas Joseph Krogh
På tirsdag 12. september 2017 kl. 21:11:45, skrev Robert Haas <
robertmh...@gmail.com >:
On Tue, Sep 12, 2017 at 2:59 PM, Andreas Joseph Krogh
  wrote:
 > There are many ways to do/configure things it seems, resulting in many ifs
 > and buts which makes section 10 rather confusing. I really think a complete
 > example, with absolute paths, would be clarifying.
 >
 > I'm afraid many will still re-create standbys from scratch without a really
 > good and complete example to follow.

 And I'm afraid that they won't.
 
Yea. Put it that way - me too:-)
The consequences of not re-creating standbys from scratch and not 
understanding section 10, and doing it wrong, are far worse...
 
--
 Andreas Joseph Krogh
 




Re: [HACKERS] domain type smashing is expensive

2017-09-12 Thread Tom Lane
Robert Haas  writes:
> On Tue, Sep 12, 2017 at 1:37 PM, Tom Lane  wrote:
>> The trick here is that I don't think we want to change the returned column
>> types for queries that are not being sent to a client.  The parser and
>> planner aren't really aware of that context ATM.  Maybe we could make them
>> so?

> I guess it depends on whether that context is mutable.  Can I Parse a
> query to create a prepared statement, then use that from a stored
> procedure?  If so, then it's not firmly known at plan time what the
> execution context will be.

Um, good point; I'm pretty sure that we don't distinguish.  This may
well be the reason it's done like this right now.

>> I wonder if it'd help to put some kind of bespoke cache into getBaseType.
>> We've done that elsewhere, eg operator lookup.

> That might be a possibility, although I feel like it's likely to be
> substantially less effective than the quick hack, and it's not really
> attacking the problem at the root anyway.

I'd say that what you're proposing is the exact opposite of attacking
the problem at the root.

regards, tom lane




Re: [HACKERS] psql - add special variable to reflect the last query status

2017-09-12 Thread Robert Haas
On Tue, Sep 12, 2017 at 3:12 PM, Tom Lane  wrote:
>> I think this is a bad plan.  Right now, libpq sets no SQLSTATE for
>> internally generated errors; it is almost certain that there are
>> applications testing for an empty SQLSTATE to notice when they're
>> getting an error from libpq.  EnterpriseDB had a support ticket quite
>> recently where this precise behavior was at issue.  Changing it will
>> break stuff, so we shouldn't do it unless there's a really compelling
>> benefit.  Universally returning PQ000 is not a sufficient improvement
>> over universally returning the empty string to justify the risk of
>> application breakage.
>
> I don't think I want to buy this argument, because the logical conclusion
> of it is that we can never fix libpq to offer proper SQLSTATEs for
> client-side errors.  Admittedly, the fact that nobody's bothered to do so
> in ~15 years may indicate that nobody cares ... but I would think that
> at least it'd be useful to distinguish, say, ENOMEM from connection loss.
> Saying we can't do it for compatibility reasons doesn't sound great
> to me.  Especially when you've not provided any hard evidence as to why
> the current lack-of-information is useful.

Well, if we provided a different SQLSTATE for each qualitatively
different type of libpq error, that might well be useful enough to
justify some risk of application breakage.  But replacing a constant
string that we've had for ~15 years with a different constant string
isn't doing anything about the lack-of-information problem you're
complaining about.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] psql - add special variable to reflect the last query status

2017-09-12 Thread Tom Lane
Robert Haas  writes:
> On Tue, Sep 12, 2017 at 1:23 PM, Fabien COELHO  wrote:
>> I added two error codes, which is debatable. One is used hardcoded by libpq
>> if no diagnostic is found, and the other by psql if libpq returned something
>> empty, which might happen if psql is linked with an older libpq, maybe. I do
>> not know how to trigger such errors anyway, so this is rather academic.

> I think this is a bad plan.  Right now, libpq sets no SQLSTATE for
> internally generated errors; it is almost certain that there are
> applications testing for an empty SQLSTATE to notice when they're
> getting an error from libpq.  EnterpriseDB had a support ticket quite
> recently where this precise behavior was at issue.  Changing it will
> break stuff, so we shouldn't do it unless there's a really compelling
> benefit.  Universally returning PQ000 is not a sufficient improvement
> over universally returning the empty string to justify the risk of
> application breakage.

I don't think I want to buy this argument, because the logical conclusion
of it is that we can never fix libpq to offer proper SQLSTATEs for
client-side errors.  Admittedly, the fact that nobody's bothered to do so
in ~15 years may indicate that nobody cares ... but I would think that
at least it'd be useful to distinguish, say, ENOMEM from connection loss.
Saying we can't do it for compatibility reasons doesn't sound great
to me.  Especially when you've not provided any hard evidence as to why
the current lack-of-information is useful.

regards, tom lane




Re: [HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-09-12 Thread Robert Haas
On Tue, Sep 12, 2017 at 2:59 PM, Andreas Joseph Krogh
 wrote:
> There are many ways to do/configure things it seems, resulting in many ifs
> and buts which makes section 10 rather confusing. I really think a complete
> example, with absolute paths, would be clarifying.
>
> I'm afraid many will still re-create standbys from scratch without a really
> good and complete example to follow.

And I'm afraid that they won't.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables

2017-09-12 Thread Robert Haas
On Tue, Sep 12, 2017 at 3:46 AM, Amit Langote
 wrote:
> In this case, AcquireExecutorLocks will lock all the relations in
> PlannedStmt.rtable, which must include all partitioned tables of all
> partition trees involved in the query.  Of those, it will lock the tables
> whose RT indexes appear in PlannedStmt.nonleafResultRelations with
> RowExclusiveLock mode.  PlannedStmt.nonleafResultRelations is a global
> list of all partitioned table RT indexes obtained by concatenating
> partitioned_rels lists of all ModifyTable nodes involved in the query
> (set_plan_refs does that).  We need to distinguish nonleafResultRelations,
> because we need to take the stronger lock on a given table before any
> weaker one if it happens to appear in the query as a non-result relation
> too, to avoid lock strength upgrade deadlock hazard.

Hmm.  The problem with this theory in my view is that it doesn't
explain why InitPlan() and ExecOpenScanRelation() lock the relations
instead of just assuming that they are already locked either by
AcquireExecutorLocks or by planning.  If ExecLockNonLeafAppendTables()
doesn't really need to take locks, then ExecOpenScanRelation() must
not need to do it either.  We invented ExecLockNonLeafAppendTables()
on the occasion of removing the scans of those tables which would
previously have caused ExecOpenScanRelation() to be invoked, so as to
keep the locking behavior unchanged.

AcquireExecutorLocks() looks like an odd bit of code to me.  The
executor itself locks result tables in InitPlan() and then everything
else during InitPlan() and all of the others later on while walking
the plan tree -- comments in InitPlan() say that this is to avoid a
lock upgrade hazard if a result rel is also a source rel.  But
AcquireExecutorLocks() has no such provision; it just locks everything
in RTE order.  In theory, that's a deadlock hazard of another kind, as
we just talked about in the context of EIBO.  In fact, expanding in
bound order has made the situation worse: before, expansion order and
locking order were the same, so maybe having AcquireExecutorLocks()
work in RTE order coincidentally happened to give the same result as
the executor code itself as long as there are no result relations.
But this is certainly not true any more.  I'm not sure it's worth
expending a lot of time on this -- it's evidently not a problem in
practice, or somebody probably would've complained before now.
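The upgrade-hazard rule being discussed can be shown in miniature: when one relation is requested in both a strong mode (as a result rel) and a weak mode (as a source rel), acquire only the strongest mode, and do it first, so no session ever waits to upgrade a lock it already holds weakly. A sketch with invented lock-mode numbers, where a higher number means a stronger mode:

```c
/* Toy lock request: higher mode is stronger (say, 3 standing in for
 * RowExclusiveLock and 1 for AccessShareLock).  Values are invented
 * for the sketch. */
typedef struct LockReq
{
	int		relid;
	int		mode;
} LockReq;

/* Return the single mode to acquire for relid: the strongest one
 * requested anywhere in the list, mirroring how InitPlan locks result
 * relations before the plan-tree walk reaches them as source rels. */
int
mode_to_acquire(const LockReq *reqs, int nreqs, int relid)
{
	int		strongest = 0;

	for (int i = 0; i < nreqs; i++)
		if (reqs[i].relid == relid && reqs[i].mode > strongest)
			strongest = reqs[i].mode;
	return strongest;
}
```

AcquireExecutorLocks(), by contrast, locks in RTE order with whatever mode each RTE carries, which is where the theoretical deadlock hazard above comes from.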

But that having been said, I don't think we should assume that all the
locks taken from the executor are worthless because plancache.c will
always do the job for us.  I don't know of a case where we execute a
saved plan without going through the plan cache, but that doesn't mean
that there isn't one or that there couldn't be one in the future.
It's not the job of these partitioning patches to whack around the way
we do locking in general -- they should preserve the existing behavior
as much as possible.  If we want to get rid of the locking in the
executor altogether, that's a separate discussion where, I have a
feeling, there will prove to be better reasons for the way things are
than we are right now supposing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Re: issue: record or row variable cannot be part of multiple-item INTO list

2017-09-12 Thread Pavel Stehule
Hi

I am sending rebased patch

Regards

Pavel
diff --git a/src/pl/plpgsql/src/pl_gram.y b/src/pl/plpgsql/src/pl_gram.y
index 94f1f58593..4b6bf0b5bc 100644
--- a/src/pl/plpgsql/src/pl_gram.y
+++ b/src/pl/plpgsql/src/pl_gram.y
@@ -92,9 +92,10 @@ static	char			*NameOfDatum(PLwdatum *wdatum);
 static	void			 check_assignable(PLpgSQL_datum *datum, int location);
 static	void			 read_into_target(PLpgSQL_rec **rec, PLpgSQL_row **row,
 		  bool *strict);
-static	PLpgSQL_row		*read_into_scalar_list(char *initial_name,
-			   PLpgSQL_datum *initial_datum,
-			   int initial_location);
+static void read_into_list(char *initial_name,
+	  PLpgSQL_datum *initial_datum, int initial_location,
+	  PLpgSQL_datum **scalar,
+	  PLpgSQL_rec **rec, PLpgSQL_row **row);
 static	PLpgSQL_row		*make_scalar_list1(char *initial_name,
 		   PLpgSQL_datum *initial_datum,
 		   int lineno, int location);
@@ -1558,33 +1559,9 @@ for_variable	: T_DATUM
 	{
 		$$.name = NameOfDatum(&($1));
 		$$.lineno = plpgsql_location_to_lineno(@1);
-		if ($1.datum->dtype == PLPGSQL_DTYPE_ROW)
-		{
-			$$.scalar = NULL;
-			$$.rec = NULL;
-			$$.row = (PLpgSQL_row *) $1.datum;
-		}
-		else if ($1.datum->dtype == PLPGSQL_DTYPE_REC)
-		{
-			$$.scalar = NULL;
-			$$.rec = (PLpgSQL_rec *) $1.datum;
-			$$.row = NULL;
-		}
-		else
-		{
-			int			tok;
 
-			$$.scalar = $1.datum;
-			$$.rec = NULL;
-			$$.row = NULL;
-			/* check for comma-separated list */
-			tok = yylex();
-			plpgsql_push_back_token(tok);
-			if (tok == ',')
-$$.row = read_into_scalar_list($$.name,
-			   $$.scalar,
-			   @1);
-		}
+		read_into_list($$.name, $1.datum, @1,
+	   &$$.scalar, &$$.rec, &$$.row);
 	}
 | T_WORD
 	{
@@ -3337,89 +3314,21 @@ check_assignable(PLpgSQL_datum *datum, int location)
 }
 
 /*
- * Read the argument of an INTO clause.  On entry, we have just read the
- * INTO keyword.
- */
-static void
-read_into_target(PLpgSQL_rec **rec, PLpgSQL_row **row, bool *strict)
-{
-	int			tok;
-
-	/* Set default results */
-	*rec = NULL;
-	*row = NULL;
-	if (strict)
-		*strict = false;
-
-	tok = yylex();
-	if (strict && tok == K_STRICT)
-	{
-		*strict = true;
-		tok = yylex();
-	}
-
-	/*
-	 * Currently, a row or record variable can be the single INTO target,
-	 * but not a member of a multi-target list.  So we throw error if there
-	 * is a comma after it, because that probably means the user tried to
-	 * write a multi-target list.  If this ever gets generalized, we should
-	 * probably refactor read_into_scalar_list so it handles all cases.
-	 */
-	switch (tok)
-	{
-		case T_DATUM:
-			if (yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_ROW)
-			{
-check_assignable(yylval.wdatum.datum, yylloc);
-*row = (PLpgSQL_row *) yylval.wdatum.datum;
-
-if ((tok = yylex()) == ',')
-	ereport(ERROR,
-			(errcode(ERRCODE_SYNTAX_ERROR),
-			 errmsg("record or row variable cannot be part of multiple-item INTO list"),
-			 parser_errposition(yylloc)));
-plpgsql_push_back_token(tok);
-			}
-			else if (yylval.wdatum.datum->dtype == PLPGSQL_DTYPE_REC)
-			{
-check_assignable(yylval.wdatum.datum, yylloc);
-*rec = (PLpgSQL_rec *) yylval.wdatum.datum;
-
-if ((tok = yylex()) == ',')
-	ereport(ERROR,
-			(errcode(ERRCODE_SYNTAX_ERROR),
-			 errmsg("record or row variable cannot be part of multiple-item INTO list"),
-			 parser_errposition(yylloc)));
-plpgsql_push_back_token(tok);
-			}
-			else
-			{
-*row = read_into_scalar_list(NameOfDatum(&(yylval.wdatum)),
-			 yylval.wdatum.datum, yylloc);
-			}
-			break;
-
-		default:
-			/* just to give a better message than "syntax error" */
-			current_token_is_not_variable(tok);
-	}
-}
-
-/*
  * Given the first datum and name in the INTO list, continue to read
- * comma-separated scalar variables until we run out. Then construct
+ * comma-separated variables until we run out. Then construct
  * and return a fake "row" variable that represents the list of
- * scalars.
+ * fields. When there is only one rec or row field, then return
+ * this variable without nesting.
  */
-static PLpgSQL_row *
-read_into_scalar_list(char *initial_name,
-	  PLpgSQL_datum *initial_datum,
-	  int initial_location)
+static void
+read_into_list(char *initial_name,
+	  PLpgSQL_datum *initial_datum, int initial_location,
+	  PLpgSQL_datum **scalar, PLpgSQL_rec **rec, PLpgSQL_row **row)
 {
 	int nfields;
 	char			*fieldnames[1024];
 	int varnos[1024];
-	PLpgSQL_row		*row;
+	PLpgSQL_row		*auxrow;
 	int tok;
 
 	check_assignable(initial_datum, initial_location);
@@ -3427,6 +3336,21 @@ read_into_scalar_list(char *initial_name,
 	varnos[0]	  = initial_datum->dno;
 	nfields		  = 1;
 
+	*rec = NULL;
+	*row = NULL;
+	if (scalar)
+		*scalar = NULL;
+
+	/*
+	 * save row or rec if list 

Re: [HACKERS] Clarification in pg10's pgupgrade.html step 10 (upgrading standby servers)

2017-09-12 Thread Andreas Joseph Krogh
På tirsdag 12. september 2017 kl. 19:19:22, skrev Bruce Momjian <
br...@momjian.us >:
On Thu, Aug  3, 2017 at 11:37:32AM +0200, Michael Paquier wrote:
 > On Mon, Jul 31, 2017 at 6:13 PM, Robert Haas  wrote:
 > > On Fri, Jul 28, 2017 at 10:35 AM, Andreas Joseph Krogh
 > >  wrote:
 > >> I'm reading https://www.postgresql.org/docs/10/static/pgupgrade.html
 > >> to try to understand how to upgrade standby-servers using pg_upgrade
 > >> with pg10.
 > >>
 > >> The text in step 10 says:
 > >> "You will not be running pg_upgrade on the standby servers, but rather
 > >> rsync", which to me sounds like rsync, in step 10-f, should be issued
 > >> on the standby servers. Is this the case? If so I don't understand how
 > >> the standby's data is upgraded and what "remote_dir" is. If rsync is
 > >> supposed to be issued on the primary then I think it should be
 > >> explicitly mentioned, and step 10-f should provide a clearer example
 > >> with more detailed values for the directory-structures involved.
 > >>
 > >> I really think section 10 needs improvement as I'm certainly not
 > >> comfortable upgrading standbys following the existing procedure.
 > >
 > > Yeah, I don't understand it either, and I have never been convinced
 > > that there's any safe way to do it other than recloning the standbys
 > > from the upgraded master.
 >
 > Here are my 2c on the matter. 10-f means that the upgraded node may
 > have generated WAL with wal_level = minimal, which, at least it seems
 > to me, that we have a risk of having inconsistent data pages if only a
 > rsync is used on the old standbys. Like Robert, the flow we used in
 > the products I work on is to re-create standbys from scratch after the
 > upgrade using a fresh backup, with a VM cloning. An upgrade here is an
 > in-place process not only linked to Postgres, so standby VMs are made
 > of many services, some are being linked to Postgres. So this choice is
 > mainly decided by those dependencies, still it feels safer anyway.

 I have applied the attached doc patch back to 9.5 to clarify
 pg_upgrade's rsync instructions and explain how it works.

 Improvements?
 
 
Thanks, that certainly improves things.
But; I still find the rsync-command in f) confusing;
1. Why --size-only? From rsync manual: "skip files that match in size", is 
this safe??
2. Why is old_pgdata in the rsync-command, why is it needed to sync it?
 
There are many ways to do/configure things it seems, resulting in many ifs and 
buts which makes section 10 rather confusing. I really think a complete 
example, with absolute paths, would be clarifying.
 
I'm afraid many will still re-create standbys from scratch without a really 
good and complete example to follow.
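
For what it's worth, here is the kind of concrete sketch step 10-f could use, with absolute paths. The paths and hostname below are purely illustrative (not from the docs), and the flags are the ones the documentation's rsync invocation relies on:

```shell
# Illustrative paths/hostname only -- substitute your own layout.
OLD_PGDATA=/var/lib/pgsql/9.6/data
NEW_PGDATA=/var/lib/pgsql/10/data
STANDBY=standby.example.com

# Run on the PRIMARY, with both clusters shut down.  The remote target is
# the common PARENT directory on the standby.  --hard-links propagates the
# hard links created by pg_upgrade --link, which is also why --size-only is
# tolerable here: linked, unchanged files are never rewritten in place.
cmd="rsync --archive --delete --hard-links --size-only \
  $OLD_PGDATA $NEW_PGDATA $STANDBY:/var/lib/pgsql"
echo "$cmd"
```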

 --
 Andreas Joseph Krogh


Re: [HACKERS] psql - add special variable to reflect the last query status

2017-09-12 Thread Pavel Stehule
2017-09-12 20:43 GMT+02:00 Robert Haas :

> On Tue, Sep 12, 2017 at 1:23 PM, Fabien COELHO 
> wrote:
> > I added two error codes, which is debatable. One is used hardcoded by
> libpq
> > if no diagnostic is found, and the other by psql if libpq returned
> something
> > empty, which might happen if psql is linked with an older libpq, maybe.
> I do
> > not know how to trigger such errors anyway, so this is rather academic.
>
> I think this is a bad plan.  Right now, libpq sets no SQLSTATE for
> internally generated errors; it is almost certain that there are
> applications testing for an empty SQLSTATE to notice when they're
> getting an error from libpq.  EnterpriseDB had a support ticket quite
> recently where this precise behavior was at issue.  Changing it will
> break stuff, so we shouldn't do it unless there's a really compelling
> benefit.  Universally returning PQ000 is not a sufficient improvement
> over universally returning the empty string to justify the risk of
> application breakage.
>

+1

Pavel


>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: [HACKERS] psql - add special variable to reflect the last query status

2017-09-12 Thread Robert Haas
On Tue, Sep 12, 2017 at 1:23 PM, Fabien COELHO  wrote:
> I added two error codes, which is debatable. One is used hardcoded by libpq
> if no diagnostic is found, and the other by psql if libpq returned something
> empty, which might happen if psql is linked with an older libpq, maybe. I do
> not know how to trigger such errors anyway, so this is rather academic.

I think this is a bad plan.  Right now, libpq sets no SQLSTATE for
internally generated errors; it is almost certain that there are
applications testing for an empty SQLSTATE to notice when they're
getting an error from libpq.  EnterpriseDB had a support ticket quite
recently where this precise behavior was at issue.  Changing it will
break stuff, so we shouldn't do it unless there's a really compelling
benefit.  Universally returning PQ000 is not a sufficient improvement
over universally returning the empty string to justify the risk of
application breakage.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] domain type smashing is expensive

2017-09-12 Thread Robert Haas
On Tue, Sep 12, 2017 at 1:37 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On short-running queries that return a lot of columns,
>> SendRowDescriptionMessage's calls to getBaseTypeAndTypmod() are a
>> noticeable expense.
>
> Yeah, I was never very happy with the way that the original domain
> patch dealt with that.  I think you're not even focusing on the
> worst part, which is all the getBaseType calls in the parser.
> I do not have a good idea about how to get rid of them though.

Well, I'm focusing on the part that shows up in the profile.  Prepared
queries don't get re-parsed repeatedly, so the calls in the parser
don't matter in that context.  I'm not saying it wouldn't be nice to
get rid of them, but it only helps people who aren't preparing their
queries.

>> +   if (typid < FirstBootstrapObjectId)
>> +   break;
>
> I'm really unwilling to buy into an assumption that we'll never
> have any built-in domains just to support such a crock as this.

I more or less expected that reaction, but I think it's a bit
short-sighted.  If somebody wanted to define a domain type in
pg_type.h, they'd have to write any domain constraint out in
pg_constraint.h in nodeToString() form, and it seems to me that the
chances that we'd accept a patch are pretty much nil, because it would
be a maintenance nuisance.  Now, maybe you could argue that somebody
might want to define a constraint-less domain in pg_type.h, but I
can't recall any proposal to do such a thing and don't see why
anybody'd want to do it.

> You'd need to dig around in the archives from around that time.  But
> my hazy recollection is that the argument was that clients would be
> much more likely to know what to do with a built-in type than with
> some domain over it.  psql, for example, knows about right-justifying
> the built-in numeric types, but it'd fail to do so for domains.

Mmm, that's a good point.

>> 2. Precompute the list of types to be sent to the client during
>> planning instead of during execution.  The point of prepared
>> statements is supposed to be to do as much of the work as possible at
>> prepare time so that bind/execute is as fast as possible, but we're
>> not really adhering to that design philosophy here.  However, I don't
>> have a clear idea of exactly how to do that.
>
> That'd help for prepared statements, but not for simple query execution.

Sure, but that's kinda my point.  We've got to send a RowDescription
message for every query, and if that requires smashing domain types to
base types, we have to do it.  What we don't have to do is repeat that
work for every execution of a prepared query.

> The trick here is that I don't think we want to change the returned column
> types for queries that are not being sent to a client.  The parser and
> planner aren't really aware of that context ATM.  Maybe we could make them
> so?

I guess it depends on whether that context is mutable.  Can I Parse a
query to create a prepared statement, then use that from a stored
procedure?  If so, then it's not firmly known at plan time what the
execution context will be.

> But it still seems like a kluge that is only addressing a small part
> of the domain-smashing issue.
>
> I wonder if it'd help to put some kind of bespoke cache into getBaseType.
> We've done that elsewhere, eg operator lookup.

That might be a possibility, although I feel like it's likely to be
substantially less effective than the quick hack, and it's not really
attacking the problem at the root anyway.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pgbench regression test failure

2017-09-12 Thread Tom Lane
Fabien COELHO  writes:
> By definition, parallelism induces non determinism. When I put 2 seconds, 
> the intention was that I would get a non empty trace with a "every second" 
> aggregation. I would rather take a longer test rather than allowing an 
> empty file: the point is to check that something is generated, but 
> avoiding a longer test is desirable. So I would suggest to stick to 
> between 1 and 3, and if it fails then maybe add one second...

That's a losing game.  You can't ever guarantee that N seconds is
enough for slow, heavily loaded machines, and cranking up N just
penalizes developers who are testing under normal circumstances.

I have a serious, serious dislike for tests that seem to work until
they're run on a heavily loaded machine.  So unless there is some
reason why pgbench is *guaranteed* to run at least one transaction
per thread, I'd rather the test not assume that.

I would not necessarily object to doing something in the code that
would guarantee that, though.

regards, tom lane




Re: [HACKERS] pgbench regression test failure

2017-09-12 Thread Fabien COELHO



Apparently, one of the threads ran 3 transactions where the test script
expects it to run at most 2.  Is this a pgbench bug, or is the test
being overoptimistic about how exact the "-T 2" cutoff is?



Probably both? It seems that cutting off on time is not a precise science,
so I suggest to accept 1, 2 and 3 lines, see attached.


Before I'd deciphered the test output fully, I was actually guessing that
the problem was the opposite, namely too few lines.


The test was waiting for between 1 and 2 lines, so I assumed that the 3 
should be the number of lines found.

Isn't it possible that some thread is slow enough to start up that it 
doesn't get to run any transactions?  IOW, do we need to allow 0 to 3 
lines?


By definition, parallelism induces non determinism. When I put 2 seconds, 
the intention was that I would get a non empty trace with a "every second" 
aggregation. I would rather take a longer test rather than allowing an 
empty file: the point is to check that something is generated, but 
avoiding a longer test is desirable. So I would suggest to stick to 
between 1 and 3, and if it fails then maybe add one second...


--
Fabien.




[HACKERS] [PATCH] Call RelationDropStorage() for broader range of object drops.

2017-09-12 Thread Hadi Moshayedi
Motivation for this patch is that some FDWs (notably, cstore_fdw) try
utilizing PostgreSQL internal storage. PostgreSQL assigns relfilenode's to
foreign tables, but doesn't clean up storage for foreign tables when
dropping tables. Therefore, in cstore_fdw we have to do some tricks to
handle dropping objects that lead to dropping of cstore table properly.

As far as I can see in the code, the requirement for
RelationDropStorage(rel) is having a valid rel->rd_node.relNode, but it
doesn't require the storage files to actually exist. We don't emit
warning messages in mdunlinkfork() if the result of unlink() is ENOENT. So
we can call RelationDropStorage() regardless of whether the storage files
exist, given that the relation has a valid relfilenode.

So I am suggesting to change the check at heap_drop_with_catalog() at
src/backend/catalog/heap.c:

-if (rel->rd_rel->relkind != RELKIND_VIEW &&
-rel->rd_rel->relkind != RELKIND_COMPOSITE_TYPE &&
-rel->rd_rel->relkind != RELKIND_FOREIGN_TABLE &&
-rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
+if (OidIsValid(rel->rd_node.relNode))
 {
 RelationDropStorage(rel);
 }

Any feedback on this?

Thanks,
Hadi
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 45ee9ac8b9..6ec2a98a99 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -1828,10 +1828,7 @@ heap_drop_with_catalog(Oid relid)
 	/*
 	 * Schedule unlinking of the relation's physical files at commit.
 	 */
-	if (rel->rd_rel->relkind != RELKIND_VIEW &&
-		rel->rd_rel->relkind != RELKIND_COMPOSITE_TYPE &&
-		rel->rd_rel->relkind != RELKIND_FOREIGN_TABLE &&
-		rel->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)
+	if (OidIsValid(rel->rd_node.relNode))
 	{
 		RelationDropStorage(rel);
 	}



Re: [HACKERS] Surjective functional indexes

2017-09-12 Thread Konstantin Knizhnik



On 12.09.2017 19:28, Simon Riggs wrote:

On 1 September 2017 at 09:47, Konstantin Knizhnik
 wrote:

On 01.09.2017 09:25, Simon Riggs wrote:

On 1 September 2017 at 05:40, Thomas Munro
 wrote:

On Fri, Jun 9, 2017 at 8:08 PM, Konstantin Knizhnik
 wrote:

Attached please find rebased version of the patch.
Now "projection" attribute is used instead of surjective/injective.

Hi Konstantin,

This still applies but it doesn't compile after commits 2cd70845 and
c6293249.  You need to change this:

Form_pg_attribute att = RelationGetDescr(indexDesc)->attrs[i];

... to this:

Form_pg_attribute att = TupleDescAttr(RelationGetDescr(indexDesc),
i);

Thanks!

Does the patch work fully with that change? If so, I will review.


Attached please find rebased version of the patch.
Yes, I checked that it works after this fix.
Thank you in advance for review.

Thanks for the patch. Overall looks sound and I consider that we are
working towards commit for this.

The idea is that we default "projection = on", and can turn it off in
case the test is expensive. Why bother to have the option? (No docs at
all then!) Why not just evaluate the test and autotune whether to make
the test again in the future? That way we can avoid having an option
completely. I am imagining collecting values on the relcache entry for
the index.


Autotune is definitely a good thing. But I do not think that excludes 
having an explicit parameter for manual tuning. For some functional 
indexes the DBA or programmer knows for sure that the index doesn't 
perform a projection, for example if it merely translates or changes the 
encoding of the original key. It seems to me that we should make it 
possible to declare such an index as non-projective and not rely on 
autotune.

Also, I have some doubts about using autotune in this case. First of 
all, it is very hard to estimate the complexity of the test. How can we 
measure it? Calculate the average execution time? That can vary across 
systems and depends greatly on system load... Somehow calculate the cost 
of the indexed expression? That may also not always produce the expected 
result.

Moreover, in some cases the test may be inexpensive but still useless, 
if the index expression specifies a one-to-one mapping (for example a 
function reversing the key). Autotune will never be able to reliably 
determine whether an indexed expression is a projection or not.

It seems more precise to compare statistics for the source column and 
the index expression. If they are similar, then most likely the index 
expression is not a projection...

I will think more about it.


To implement autotuning we would need to instrument the execution. We
could then display the collected value via EXPLAIN, so we could just
then use EXPLAIN in your tests rather than implementing a special
debug mode just for testing. We could also pass that information thru
to stats as well.



--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [HACKERS] domain type smashing is expensive

2017-09-12 Thread Tom Lane
Robert Haas  writes:
> On short-running queries that return a lot of columns,
> SendRowDescriptionMessage's calls to getBaseTypeAndTypmod() are a
> noticeable expense.

Yeah, I was never very happy with the way that the original domain
patch dealt with that.  I think you're not even focusing on the
worst part, which is all the getBaseType calls in the parser.
I do not have a good idea about how to get rid of them though.

> +   if (typid < FirstBootstrapObjectId)
> +   break;

I'm really unwilling to buy into an assumption that we'll never
have any built-in domains just to support such a crock as this.

> 1. Revisit the decision to smash domain types to base types here.
> That change was made by Tom Lane back in 2003
> (d9b679c13a820eb7b464a1eeb1f177c3fea13ece) but the commit message only
> says *that* we decided to do it, not *why* we decided to do it, and
> the one-line comment added by that commit doesn't do any better.

You'd need to dig around in the archives from around that time.  But
my hazy recollection is that the argument was that clients would be
much more likely to know what to do with a built-in type than with
some domain over it.  psql, for example, knows about right-justifying
the built-in numeric types, but it'd fail to do so for domains.

> 2. Precompute the list of types to be sent to the client during
> planning instead of during execution.  The point of prepared
> statements is supposed to be to do as much of the work as possible at
> prepare time so that bind/execute is as fast as possible, but we're
> not really adhering to that design philosophy here.  However, I don't
> have a clear idea of exactly how to do that.

That'd help for prepared statements, but not for simple query execution.

The trick here is that I don't think we want to change the returned column
types for queries that are not being sent to a client.  The parser and
planner aren't really aware of that context ATM.  Maybe we could make them
so?  But it still seems like a kluge that is only addressing a small part
of the domain-smashing issue.

I wonder if it'd help to put some kind of bespoke cache into getBaseType.
We've done that elsewhere, eg operator lookup.

regards, tom lane




Re: [HACKERS] pg_basebackup behavior on non-existent slot

2017-09-12 Thread Jeff Janes
On Wed, Sep 6, 2017 at 2:50 AM, Alvaro Herrera 
wrote:

> Magnus Hagander wrote:
> > On Mon, Sep 4, 2017 at 3:21 PM, Jeff Janes  wrote:
>
> > > Should the parent process of pg_basebackup be made to respond to
> SIGCHLD?
> > > Or call waitpid(bgchild, , WNOHANG) in some strategic loop?
> >
> > I think it's ok to just call waitpid() -- we don't need to react super
> > quickly, but we should react.
>
> Hmm, not sure about that ... in the normal case (slotname is correct)
> you'd be doing thousands of useless waitpid() system calls during the
> whole operation, no?  I think it'd be better to have a SIGCHLD handler
> that sets a flag (just once), which can be quickly checked without
> accessing kernel space.
>

If we don't want polling by waitpid, then my next thought would be to move
the data copy into another process, then have the main process do nothing
but wait for the first child to exit.  If the first to exit is the WAL
receiver, then we must have an error and the data receiver can be killed.
I don't know how to translate that to Windows, however.

Cheers,

Jeff


Re: [HACKERS] [PATCH] Improve geometric types

2017-09-12 Thread Emre Hasegeli
> Hello, sorry to late for the party, but may I comment on this?

Thank you for picking this up again.

> The first patch reconstructs the operators in layers. These
> functions are called very frequently when used. Some function are
> already inlined in float.h but some static functions in float.h
> also can be and are better be inlined. Some of *_internal,
> point_construct, line_calculate_point and so on are the
> candidates.

They are static functions.  I thought the compiler can decide to inline
them.  Do you think adding "inline" to the function signatures is
necessary?

> You removed some DirectFunctionCall to the functions within the
> same file but other functions remain in the style,
> ex. poly_center or on_sl. The function called from the former
> seems large enough but the latter function calls a so small
> function that it could be inlined. Would you like to make some
> additional functions use C call (instead of DirectFunctionCall)
> and inlining them?

I tried to minimise my changes to make reviewing easier.  I can make
"_internal" functions for the remaining DirectFunctionCall()s, if you
find it necessary.

> This is not a fault of this patch, but some functions like on_pb
> seems missing comment to describe what it is. Would you like to
> add some?

I will add some on the next version.

> In the second patch, the additional include fmgrprotos.h in
> btree_gin.c seems needless.

It must be something I missed on rebase.  I will remove it.

> Some float[48] features were macros
> so that they share the same expressions between float4 and
> float8. They still seems sharing perfectly the same expressions
> in float.h. Is there any reason for converting them into typed
> inline functions?

Kevin Grittner suggested using inline functions instead of macros.
They are easier to use compared to macros, and avoid double-evaluation
hazards.

> In float.h, MAXDOUBLEWIDTH is reduced from 500 to 128, but the
> exponent of double is up to 308 so it doesn't seem sufficient. On
> the other hand we won't use non-scientific notation for extremely
> large numbers and it requires (perhaps) up to 26 bytes in the
> case. In the source code, most of them use "%e" and one of them
> uses '%g". %e always takes the format of
> "-1.(17digits)e+308".. So it would be less than 26
> characters.
>
> =# set extra_float_digits to 3;
> =# select -1.221423424320453e308::float8;
>  ?column?
> ---
>  -1.22142342432045302e+308
>
> man printf: (linux)
>> Style e is used if the exponent from its conversion is less than
>> -4 or greater than or equal to the precision.
>
> So we should be safe to have a buffer with 26 byte length and 500
> bytes will apparently too large and even 128 will be too loose in
> most cases. So how about something like the following?
>
> #define MINDOUBLEWIDTH 32

Should it be the same for float4 and float8?

> ...
> float4out@float.c:
>>int  ndig = FLT_DIG + extra_float_digits;
>>
>>if (ndig < 1)
>>   ndig = 1;
>>
>>len = snprintf(ascii, MINDOUBLEWIDTH + 1, "%+.*g", ndig, num);
>> if (len > MINDOUBLEWIDTH + 1)
>>{
>>ascii = (char *) repalloc(ascii, len);
>>if (snprintf(ascii, len, "%+.*e", ndig, num) > len)
>>   error(ERROR, "something wrong happens...");
>> }
>
> I don't think the if branch will ever be taken in practice, so there
> would be no performance degradation, I believe.

Wouldn't this change the output of the float datatypes?  That would be
a backwards incompatible change.

> I'd like to pause here.

I will submit new versions after you are done with your review.




Re: [HACKERS] psql - add special variable to reflect the last query status

2017-09-12 Thread Fabien COELHO


Hello Tom,


Yep, I thought I was optimistic:-) Can I add a special SQLSTATE for that
situation where libpq did not report an error?


Meh.  If we're going to do that I think it might be better to hack
libpq itself to do so, ie, force PQresultErrorField(..., PG_DIAG_SQLSTATE)
to always return something.  But it seems like a hack either way.


I would not have taken the liberty to hack into libpq internals for such a 
small front-end feature. However I agree that having libpq always return some 
diagnostic, even if it means "something unclear happened, sorry not to be 
very precise", would be better.


Here is an attempt at implementing your suggestions.

I added two error codes, which is debatable. One is used hardcoded by 
libpq if no diagnostic is found, and the other by psql if libpq returned 
something empty, which might happen if psql is linked with an older libpq, 
maybe. I do not know how to trigger such errors anyway, so this is rather 
academic.


I put back the SetResultVariables function, which is called twice, for SQL 
queries and the new descriptions. It worked out of the box with DECLARE, 
which is just another SQL statement, so maybe I did not understand the 
cursor issue you were signaling...


--
Fabien.

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index a74caf8..b994fcd 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -3518,6 +3518,16 @@ bar
   
 
   
+   ERROR
+   
+
+ Whether the last query failed, as a boolean.
+ See also SQLSTATE.
+
+   
+  
+
+  
 FETCH_COUNT
 
 
@@ -3654,6 +3664,18 @@ bar
   
 
   
+   LAST_ERROR_SQLSTATE
+   LAST_ERROR_MESSAGE
+   
+
+ The error code and associated error message of the last
+ error, or "0" and empty strings if no error occurred
+ since the beginning of the script.
+
+   
+  
+
+  
   
ON_ERROR_ROLLBACK

@@ -3722,6 +3744,25 @@ bar
   
 
   
+   ROW_COUNT
+   
+
+ How many rows were returned or affected by the last query.
+
+   
+  
+
+  
+   SQLSTATE
+   
+
+ The error code associated with the last query, or
+ 0 if no error occurred.
+
+   
+  
+
+  
 QUIET
 
 
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index b997058..bbffcac 100644
--- a/src/bin/psql/common.c
+++ b/src/bin/psql/common.c
@@ -493,7 +493,6 @@ ResetCancelConn(void)
 #endif
 }
 
-
 /*
  * AcceptResult
  *
@@ -971,6 +970,44 @@ loop_exit:
 	return success;
 }
 
+/*
+ * Set special variables
+ * - ERROR: true/false, whether an error occurred
+ * - SQLSTATE: code of error, or "0"
+ * - LAST_ERROR_SQLSTATE: same for last error
+ * - LAST_ERROR_MESSAGE: message of last error
+ * - ROW_COUNT: how many rows were returned or affected, or "0"
+ */
+static void
+SetResultVariables(PGresult *results, bool success)
+{
+	if (success)
+	{
+		char   *ntuples = PQcmdTuples(results);
+		SetVariable(pset.vars, "ERROR", "false");
+		SetVariable(pset.vars, "SQLSTATE", "0");
+		SetVariable(pset.vars, "ROW_COUNT", *ntuples ? ntuples : "0");
+	}
+	else
+	{
+		char 		   *code = PQresultErrorField(results, PG_DIAG_SQLSTATE);
+		char 		   *mesg = PQresultErrorField(results, PG_DIAG_MESSAGE_PRIMARY);
+
+		SetVariable(pset.vars, "ERROR", "true");
+
+		/*
+		 * Ensure that something sensible is shown,
+		 * without assumption about libpq implementation
+		 */
+		if (code == NULL || *code == '\0')
+			code = "PQ001" /* ERROR_LIBPQ_EMPTY_SQLSTATE */ ;
+
+		SetVariable(pset.vars, "SQLSTATE", code);
+		SetVariable(pset.vars, "LAST_ERROR_SQLSTATE", code);
+		SetVariable(pset.vars, "LAST_ERROR_MESSAGE", mesg ? mesg : "");
+		SetVariable(pset.vars, "ROW_COUNT", "0");
+	}
+}
 
 /*
  * ProcessResult: utility function for use by SendQuery() only
@@ -1107,6 +1144,8 @@ ProcessResult(PGresult **results)
 		first_cycle = false;
 	}
 
+	SetResultVariables(*results, success);
+
 	/* may need this to recover from conn loss during COPY */
 	if (!first_cycle && !CheckConnection())
 		return false;
@@ -1214,7 +1253,6 @@ PrintQueryResults(PGresult *results)
 	return success;
 }
 
-
 /*
  * SendQuery: send the query string to the backend
  * (and print out results)
@@ -1523,7 +1561,11 @@ DescribeQuery(const char *query, double *elapsed_msec)
 	 * good thing because libpq provides no easy way to do that.)
 	 */
 	results = PQprepare(pset.db, "", query, 0, NULL);
-	if (PQresultStatus(results) != PGRES_COMMAND_OK)
+	OK = PQresultStatus(results) == PGRES_COMMAND_OK;
+
+	SetResultVariables(results, OK);
+
+	if (!OK)
 	{
 		psql_error("%s", PQerrorMessage(pset.db));
 		ClearOrSaveResult(results);
diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
index 4d1c0ec..ae951f5 100644
--- a/src/bin/psql/help.c
+++ b/src/bin/psql/help.c
@@ -337,7 +337,7 @@ 
