Re: [HACKERS] expanding inheritance in partition bound order

2017-09-04 Thread Amit Khandekar
On 4 September 2017 at 06:34, Amit Langote
 wrote:
> Hi Amit,
>
> On 2017/09/03 16:07, Amit Khandekar wrote:
>> On 31 August 2017 at 13:06, Amit Langote  
>> wrote:
 Mind you, that idea has some problems anyway in the face of default
 partitions, null partitions, and list partitions which accept
 non-contiguous values (e.g. one partition for 1, 3, 5; another for 2,
 4, 6).  We might need to mark the PartitionDesc to indicate whether
 PartitionDesc-order is in fact bound-order in a particular instance,
 or something like that.
>>>
>>> ISTM, the primary motivation for the EIBO patch at this point is to get
>>> the partitions ordered in a predictable manner so that the partition-wise
>>> join patch and update partition key patches could implement certain logic
>>> using O (n) algorithm rather than an O (n^2) one.  Neither of them depend
>>> on the actual order in the sense of, say, sticking a PathKey to the
>>> resulting Append.
>>
>> Now that the inheritance hierarchy is expanded in depth-first order,
>> RelationGetPartitionDispatchInfo() needs to be changed to arrange the
>> PartitionDispatch array and the leaf partitions in depth-first order
>> (as we know this is a requirement for the update-partition-key patch
>> for efficiently determining which of the leaf partitions are already
>> present in the update result rels).
>
> I was thinking the same.
>
>> Amit, I am not sure if you are
>> already doing this as part of the patches in this mail thread. Please
>> let me know.
>
> Actually, I had thought of changing the expansion order in
> RelationGetPartitionDispatchInfo to depth-first after Robert committed his
> patch the other day, but haven't got around to doing that yet.  Will do
> that in the updated patch (the refactoring patch) I will post sometime
> later today or tomorrow on a differently titled thread, because the EIBO
> work seems to be done.

Great, thanks. Just wanted to make sure someone is working on that,
because, as you said, it is no longer an EIBO patch. Since you are
doing that, I won't work on that.

>
>> Also, let me know if you think there will be any loss of
>> efficiency in tuple routing code if we arrange the Partition Dispatch
>> indexes in depth-first order.
>
> I don't think there will be any loss in the efficiency of the tuple
> routing code itself.  It's just that the position of the ResultRelInfos
> (of leaf partitions) and PartitionDispatch objects (of partitioned tables)
> will be different in their respective arrays, that is, pd->indexes will
> now have different values than formerly.

Ok. Good to hear that.

-- 
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postgres_fdw bug in 9.6

2017-09-04 Thread Ryan Murphy
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:   not tested
Spec compliant:   not tested
Documentation:not tested

This feature is obviously a pretty deep cut, and I don't understand the JOIN 
system enough to comment on whether the code is correct.  I did not see any 
issues when glancing through the patch.

I ran "make check", "make installcheck" and "make installcheck-world" and had 
ALMOST no problems:  I marked it Tested and Passed because the only issue I had 
was a recurring issue I've had with "make installcheck-world" which I think is 
unrelated:

*** path/to/postgres/src/interfaces/ecpg/test/expected/connect-test5.stderr 
2017-02-14 09:22:25.0 -0600
--- path/to/postgres/src/interfaces/ecpg/test/results/connect-test5.stderr  
2017-09-04 23:31:50.0 -0500
***
*** 36,42 
  [NO_PID]: sqlca: code: 0, state: 0
  [NO_PID]: ECPGconnect: opening database  on  port  
 for user regress_ecpg_user2
  [NO_PID]: sqlca: code: 0, state: 0
! [NO_PID]: ECPGconnect: could not open database: FATAL:  database 
"regress_ecpg_user2" does not exist
...

But this still Needs Review by someone who knows the JOIN system better.

Best,
Ryan
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables

2017-09-04 Thread Amit Langote
On 2017/09/04 21:32, Ashutosh Bapat wrote:
> On Mon, Sep 4, 2017 at 10:04 AM, Amit Langote wrote:
>> By the way, if you want to get rid of PartitionedChildRelInfo, you can do
>> that as long as you find some other way of putting together the
>> partitioned_rels list to add into the ModifyTable (Append/MergeAppend)
>> node created for the root partitioned table.  Currently,
>> PartitionedChildRelInfo (and the root->pcinfo_list) is the way for
>> expand_inherited_rtentry() to pass that information to the planner's
>> path-generating code.  We may be able to generate that list when actually
>> creating the path using set_append_rel_pathlist() or
>> inheritance_planner(), without having created a PartitionedChildRelInfo
>> node beforehand.
> 
> AFAIU, the list contained RTIs of the relations, which didnt' have
> corresponding AppendRelInfos to lock those relations. Now that we
> create AppendRelInfos even for partitioned partitions, I don't think
> we need the list to take care of the locks. Is there any other reason
> why we maintain that list (apart from the trigger case I have raised
> and Fujita-san says that the list is not required in that case as
> well.)

We do *need* the list in ModifyTable (Append/MergeAppend) node itself.  We
can, however, get rid of the PartitionedChildRelInfo node that carries the
partitioned child RT indexes from an earlier planning phase
(expand_inherited_rtentry) to a later phase
(create_{modifytable|append|merge_append}_path).  The later phase can
build that list from the AppendRelInfos that you mention we now [1] build.

>>> Though I haven't read the patch yet, I think the above code is useless.
>>> And I proposed a patch to clean it up before [1].  I'll add that patch to
>>> the next commitfest.
>>
>> +1.
> +1. Will Fujita-san's patch also handle getting rid of partitioned_rels list?

As Fujita-san mentioned, his patch won't.  Actually, I suppose he didn't
say that partitioned_rels itself is useless, just that its particular
usage in ExecInitModifyTable is.  We still need that list for planner to
tell the executor that there are some RT entries the latter would need to
lock before executing a given plan.  Without that dedicated list, the
executor cannot know at all that certain tables in the partition tree
(viz. the partitioned ones) need to be locked.  I mentioned the reason -
(Merge)Append.subplans, ModifyTable.resultRelations does not contain
respective entries corresponding to the partitioned tables, and
traditionally, the executor looks at those lists to figure out the tables
to lock.

Thanks,
Amit

[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=30833ba154



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-09-04 Thread Masahiko Sawada
On Tue, Sep 5, 2017 at 12:24 PM, Michael Paquier
 wrote:
> On Tue, Sep 5, 2017 at 12:05 PM, Masahiko Sawada  
> wrote:
>> In get_rel_oids() we often switch the memory context to vac_context
>> and switch back. As a result almost code in get_rel_oids() is working
>> in vac_context. Maybe we can switch memory context before and after
>> the calling get_rel_oids?
>
> I thought about that as well, and it seemed to me that the current
> patch approach is less bug-prone for the future if get_rel_oids() gets
> called in some future code paths.

Okay, I agree. Also I found that dedupe_relations() eventually
allocates the list in current memory context that may not be
vac_context and set it to *relations at the end of that function. I
think we should switch the memory context to vac_context before doing
that. Or to more simplify the code maybe we can do the all treatment
of the relations list after switching to vac_context. For example,

oldcontext = MemoryContextSwtichTo(vac_context)
relations = copyObject(relations);
get_rel_oids();
check_colums_exist(relations);
dedupe_relations();
MemoryContextSwtichTo(oldcontext);

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Useless code in ExecInitModifyTable

2017-09-04 Thread Ryan Murphy
> FWIW, I find that good ol' patch is much more reliable about applying
> patches that didn't come from git itself.  In this case
>
>
Thanks, I knew I was missing something!
Ryan


[HACKERS] cache lookup errors for missing replication origins

2017-09-04 Thread Michael Paquier
Hi all,

Cache lookup errors with elog() can be triggered easily by users at
SQL level using replication origin functions:
=# select pg_replication_origin_advance('popo', '0/1');
ERROR:  XX000: cache lookup failed for replication origin 'popo'
LOCATION:  replorigin_by_name, origin.c:229
=# select pg_replication_origin_drop('popo');
ERROR:  XX000: cache lookup failed for replication origin 'popo'
LOCATION:  replorigin_by_name, origin.c:229
=# select pg_replication_origin_session_setup('popo');
ERROR:  XX000: cache lookup failed for replication origin 'popo'
LOCATION:  replorigin_by_name, origin.c:229
=# select pg_replication_origin_progress('popo', true);
ERROR:  XX000: cache lookup failed for replication origin 'popo';
LOCATION:  replorigin_by_name, origin.c:229
A cache lookup means that an illogical status has been reached, but
those code paths don't refer to that. So I think that the error in
replorigin_by_name should be changed to that:
ERROR:  42704: replication slot "%s" does not exist

As far as I can see, replorigin_by_oid makes no use of its missing_ok
= false in the backend code, so letting it untouched would have no
impact. replorigin_by_name with missing_ok = false is only used with
SQL-callable functions, so it could be changed without any impact
elsewhere (without considering externally-maintained replication
modules).

Thanks,
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Useless code in ExecInitModifyTable

2017-09-04 Thread Tom Lane
Ryan Murphy  writes:
> I downloaded the latest patch "clean-nodeModifyTable-v2.patch"
> and tried applying it to HEAD using "git apply ".

> The only response from git was "fatal: unrecognized input".

FWIW, I find that good ol' patch is much more reliable about applying
patches that didn't come from git itself.  In this case

patch -p1 <~../path/to/clean-nodeModifyTable-v2.patch

seems to work without complaint.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables

2017-09-04 Thread Etsuro Fujita

On 2017/09/04 21:32, Ashutosh Bapat wrote:

On Mon, Sep 4, 2017 at 10:04 AM, Amit Langote

By the way, if you want to get rid of PartitionedChildRelInfo, you can do
that as long as you find some other way of putting together the
partitioned_rels list to add into the ModifyTable (Append/MergeAppend)
node created for the root partitioned table.  Currently,
PartitionedChildRelInfo (and the root->pcinfo_list) is the way for
expand_inherited_rtentry() to pass that information to the planner's
path-generating code.  We may be able to generate that list when actually
creating the path using set_append_rel_pathlist() or
inheritance_planner(), without having created a PartitionedChildRelInfo
node beforehand.


AFAIU, the list contained RTIs of the relations, which didnt' have
corresponding AppendRelInfos to lock those relations. Now that we
create AppendRelInfos even for partitioned partitions, I don't think
we need the list to take care of the locks.
I don't think so either.  (Since I haven't followed discussions on this 
thread in detail yet, I don't understand the idea/need of creating 
AppendRelInfos for partitioned partitions, though.)



Though I haven't read the patch yet, I think the above code is useless.
And I proposed a patch to clean it up before [1].  I'll add that patch to
the next commitfest.


+1.

+1. Will Fujita-san's patch also handle getting rid of partitioned_rels list?


No.  The patch just removes the partitioned_rels list from 
nodeModifyTable.c.


Best regards,
Etsuro Fujita



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench tap tests & minor fixes.

2017-09-04 Thread Tom Lane
Fabien COELHO  writes:
>> * In the same vein, I don't know what this does:
>> sub pgbench($)
>> and I see no existing instances of it elsewhere in our tree.  I think it'd
>> be better not to require advanced degrees in Perl programming in order to
>> read the test cases.

> It just says that 5 scalars are expected, so it would complain if "type" 
> or number do not match. Now, why give type hints if they are not 
> mandatory, as devs can always detect their errors by extensive testing 
> instead:-)

My concern is basically about maintaining coding style consistency.
I don't want readers to come across something like this and have to
wonder "why is this function like this, when hundreds of apparently
similar functions elsewhere in the tree aren't?  Is there some reason
why it needs to use this feature when the rest don't?".  If the answer
to that is "yes" then there should be a comment explaining why.  If
the answer is "no", well, it shouldn't be randomly different from the
rest of our code.

Now, having gone and looked up what that construct does, I wouldn't
necessarily be against a patch that introduced use of these poor man's
prototypes throughout our Perl code.  It's the inconsistency that I'm
down on.  But others with more Perl experience than I might have good
reasons why it's not like that already.  I do have to say that many of
the examples I've seen look more like line noise than readable code.

>> * Another way in which command_checks_all introduces API complexity is
>> that it accounts for a variable number of tests depending on how many
>> patterns are provided.  This seems like a mess.  I see that you have
>> in some places (not very consistently) annotated call sites as to how
>> many tests they account for, but who's going to do the math to make
>> sure everything adds up?

> Perl:-) I run the test, figure out the number it found in the resulting 
> error message, and update the number in the source. Not too hard:-)

Yeah, but then what error is all that work protecting you from?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Useless code in ExecInitModifyTable

2017-09-04 Thread Ryan Murphy
Hello!

I downloaded the latest patch "clean-nodeModifyTable-v2.patch"
and tried applying it to HEAD using "git apply ".

The only response from git was "fatal: unrecognized input".
I tried this with git 2.5.4 stable that comes native on my mac, and git 2.12 
built from source.
In both cases I got the same error.

I know you recently rebased this patch already, but could you please 
double-check it?
Of course it's possible I'm doing something wrong.

Thanks,
Ryan

The new status of this patch is: Waiting on Author

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-09-04 Thread Simon Riggs
On 5 September 2017 at 02:16, Michael Paquier  wrote:
> On Mon, Sep 4, 2017 at 11:47 PM, Bossart, Nathan  wrote:
>> I've made this change in v14 of the main patch.
>>
>> In case others had opinions regarding the de-duplication patch, I've
>> attached that again as well.
>
> +   /*
> +* Create the relation list in a long-lived memory context so that it
> +* survives transaction boundaries.
> +*/
> +   old_cxt = MemoryContextSwitchTo(AutovacMemCxt);
> +   rangevar = makeRangeVar(tab->at_nspname, tab->at_relname, -1);
> +   rel = makeVacuumRelation(rangevar, NIL, tab->at_relid);
> +   rel_list = list_make1(rel);
> +   MemoryContextSwitchTo(old_cxt);
> That's way better, thanks for the new patch.
>
> So vacuum_multiple_tables_v14.patch is good for a committer in my
> opinion. With this patch, if the same relation is specified multiple
> times, then it gets vacuum'ed that many times. Using the same column
> multi-times results in an error as on HEAD, but that's not a new
> problem with this patch.
>
> So I would tend to think that the same column specified multiple times
> should cause an error, and that we could let VACUUM run work N times
> on a relation if it is specified this much. This feels more natural,
> at least to me, and it keeps the code simple.

ISTM there is no difference between
  VACUUM a, b
and
  VACUUM a; VACUUM b;

If we want to keep the code simple we must surely consider whether the
patch has any utility.

-- 
Simon Riggshttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-09-04 Thread Michael Paquier
On Tue, Sep 5, 2017 at 12:05 PM, Masahiko Sawada  wrote:
> In get_rel_oids() we often switch the memory context to vac_context
> and switch back. As a result almost code in get_rel_oids() is working
> in vac_context. Maybe we can switch memory context before and after
> the calling get_rel_oids?

I thought about that as well, and it seemed to me that the current
patch approach is less bug-prone for the future if get_rel_oids() gets
called in some future code paths.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-09-04 Thread Masahiko Sawada
On Tue, Sep 5, 2017 at 10:16 AM, Michael Paquier
 wrote:
> On Mon, Sep 4, 2017 at 11:47 PM, Bossart, Nathan  wrote:
>> I've made this change in v14 of the main patch.
>>
>> In case others had opinions regarding the de-duplication patch, I've
>> attached that again as well.
>
> +   /*
> +* Create the relation list in a long-lived memory context so that it
> +* survives transaction boundaries.
> +*/
> +   old_cxt = MemoryContextSwitchTo(AutovacMemCxt);
> +   rangevar = makeRangeVar(tab->at_nspname, tab->at_relname, -1);
> +   rel = makeVacuumRelation(rangevar, NIL, tab->at_relid);
> +   rel_list = list_make1(rel);
> +   MemoryContextSwitchTo(old_cxt);
> That's way better, thanks for the new patch.
>
> So vacuum_multiple_tables_v14.patch is good for a committer in my
> opinion.

In get_rel_oids() we often switch the memory context to vac_context
and switch back. As a result almost code in get_rel_oids() is working
in vac_context. Maybe we can switch memory context before and after
the calling get_rel_oids?

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Function to move the position of a replication slot

2017-09-04 Thread Andres Freund
On 2017-09-05 11:36:47 +0900, Michael Paquier wrote:
> On Thu, Aug 31, 2017 at 9:19 PM, Magnus Hagander  wrote:
> > PFA an updated and rebased patch.
> >
> > Rebased. Now named pg_advance_replication_slot. ERROR on logical slots.
> > Forward only.
> >
> > I think that, in the end, covered all the comments?
> 
> +   if (backwards)
> +   ereport(WARNING,
> +   (errmsg("Not moving replication slot backwards!")));
> Shouldn't this be an ERROR, mentioning the current position of the slot?
> 
> +ereport(ERROR,
> +(errmsg("Only physical replication slots can be 
> advanced.")));
> ERRCODE_FEATURE_NOT_SUPPORTED, no?

Seither of these seem to follow the message guidelines.

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Function to move the position of a replication slot

2017-09-04 Thread Michael Paquier
On Thu, Aug 31, 2017 at 9:19 PM, Magnus Hagander  wrote:
> PFA an updated and rebased patch.
>
> Rebased. Now named pg_advance_replication_slot. ERROR on logical slots.
> Forward only.
>
> I think that, in the end, covered all the comments?

+   if (backwards)
+   ereport(WARNING,
+   (errmsg("Not moving replication slot backwards!")));
Shouldn't this be an ERROR, mentioning the current position of the slot?

+ereport(ERROR,
+(errmsg("Only physical replication slots can be advanced.")));
ERRCODE_FEATURE_NOT_SUPPORTED, no?
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Statement timeout behavior in extended queries

2017-09-04 Thread Tatsuo Ishii
> What do you think?  I've not really tested this with the extended
> protocol, so I'd appreciate if you could rerun your test from the
> older thread.

Attached is your patch just rebased against master branch.

Also I attached the test cases and results using pgproto on patched
master branch. All looks good to me.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 8d3fecf..6c53b9c 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -144,6 +144,11 @@ static bool doing_extended_query_message = false;
 static bool ignore_till_sync = false;
 
 /*
+ * Flag to keep track of whether statement timeout timer is active.
+ */
+static bool stmt_timeout_active = false;
+
+/*
  * If an unnamed prepared statement exists, it's stored here.
  * We keep it separate from the hashtable kept by commands/prepare.c
  * in order to reduce overhead for short-lived queries.
@@ -182,6 +187,8 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
+static void enable_statement_timeout(void);
+static void disable_statement_timeout(void);
 
 
 /* 
@@ -1228,7 +1235,8 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	/*
 	 * Start up a transaction command so we can run parse analysis etc. (Note
 	 * that this will normally change current memory context.) Nothing happens
-	 * if we are already in one.
+	 * if we are already in one.  This also arms the statement timeout if
+	 * necessary.
 	 */
 	start_xact_command();
 
@@ -1516,7 +1524,8 @@ exec_bind_message(StringInfo input_message)
 	/*
 	 * Start up a transaction command so we can call functions etc. (Note that
 	 * this will normally change current memory context.) Nothing happens if
-	 * we are already in one.
+	 * we are already in one.  This also arms the statement timeout if
+	 * necessary.
 	 */
 	start_xact_command();
 
@@ -2008,6 +2017,9 @@ exec_execute_message(const char *portal_name, long max_rows)
 			 * those that start or end a transaction block.
 			 */
 			CommandCounterIncrement();
+
+			/* full command has been executed, reset timeout */
+			disable_statement_timeout();
 		}
 
 		/* Send appropriate CommandComplete to client */
@@ -2437,25 +2449,27 @@ start_xact_command(void)
 	{
 		StartTransactionCommand();
 
-		/* Set statement timeout running, if any */
-		/* NB: this mustn't be enabled until we are within an xact */
-		if (StatementTimeout > 0)
-			enable_timeout_after(STATEMENT_TIMEOUT, StatementTimeout);
-		else
-			disable_timeout(STATEMENT_TIMEOUT, false);
-
 		xact_started = true;
 	}
+
+	/*
+	 * Start statement timeout if necessary.  Note that this'll intentionally
+	 * not reset the clock on an already started timeout, to avoid the timing
+	 * overhead when start_xact_command() is invoked repeatedly, without an
+	 * interceding finish_xact_command() (e.g. parse/bind/execute).  If that's
+	 * not desired, the timeout has to be disabled explicitly.
+	 */
+	enable_statement_timeout();
 }
 
 static void
 finish_xact_command(void)
 {
+	/* cancel active statement timeout after each command */
+	disable_statement_timeout();
+
 	if (xact_started)
 	{
-		/* Cancel any active statement timeout before committing */
-		disable_timeout(STATEMENT_TIMEOUT, false);
-
 		CommitTransactionCommand();
 
 #ifdef MEMORY_CONTEXT_CHECKING
@@ -4521,3 +4535,42 @@ log_disconnections(int code, Datum arg)
 	port->user_name, port->database_name, port->remote_host,
 	port->remote_port[0] ? " port=" : "", port->remote_port)));
 }
+
+/*
+ * Start statement timeout timer, if enabled.
+ *
+ * If there's already a timeout running, don't restart the timer.  That
+ * enables compromises between accuracy of timeouts and cost of starting a
+ * timeout.
+ */
+static void
+enable_statement_timeout(void)
+{
+	/* must be within an xact */
+	Assert(xact_started);
+
+	if (StatementTimeout > 0)
+	{
+		if (!stmt_timeout_active)
+		{
+			enable_timeout_after(STATEMENT_TIMEOUT, StatementTimeout);
+			stmt_timeout_active = true;
+		}
+	}
+	else
+		disable_timeout(STATEMENT_TIMEOUT, false);
+}
+
+/*
+ * Disable statement timeout, if active.
+ */
+static void
+disable_statement_timeout(void)
+{
+	if (stmt_timeout_active)
+	{
+		disable_timeout(STATEMENT_TIMEOUT, false);
+
+		stmt_timeout_active = false;
+	}
+}
== 001-without-trans ===
FE=> Query(query="SET statement_timeout = '4s'")
<= BE CommandComplete(SET)
<= BE ReadyForQuery(I)
FE=> Parse(stmt="", query="SELECT pg_sleep(3)")
FE=> Bind(stmt="", portal="")
FE=> Execute(portal="")
FE=> Parse(stmt="", query="SELECT pg_sleep(2)")
FE=> Bind(stmt="", portal="")
FE=> Execute(portal="")
FE=> Sync
<= BE ParseComplete
<= BE BindComplete
<= BE 

Re: [HACKERS] [bug fix] Savepoint-related statements terminates connection

2017-09-04 Thread Michael Paquier
On Mon, Sep 4, 2017 at 11:15 PM, Tom Lane  wrote:
> I don't want to go there, and was thinking we should expand the new
> comment in DefineSavepoint to explain why not.

Okay.

> It's certainly not that
> much additional work to allow a savepoint so far as xact.c is concerned,
> as your patch shows. The problem is that intra-string savepoints seem
> inconsistent with exec_simple_query's behavior of abandoning the whole
> query string upon error.  If you do
>
> insert ...\; savepoint\; insert ...\; release savepoint\; insert ...;
>
> wouldn't you sort of expect that the savepoint commands mean to keep going
> if the second insert fails?  If they don't mean that, what do they mean?

Hmm. I spent more time looking at my patch and I see what you are
pointing out here. Using something like that with a second insert
failing I would expect the first insert to be visible, but that's not
the case:
savepoint rs; insert into exists values (1); savepoint rs2; insert
into not_exists values (1); rollback to savepoint rs2; commit;'
So this approach makes things inconsistent.

> Now admittedly, the same set of issues pops up if one uses an
> explicit transaction block in a multi-query string:
>
> begin\; insert ...\; savepoint\; insert ...\; release savepoint\; insert 
> ...\; commit;
>
> If one of the inserts fails, you don't really know which one unless you
> were counting command-complete replies (which PQexec doesn't let you do).
> But that behavior was there already, we aren't proposing to make it worse.
> (I think this approach is also the correct workaround to give those
> Oracle-conversion folk: their real problem is failure to convert from
> Oracle's implicit-BEGIN behavior to our explicit-BEGIN.)

Sure there is this workaround.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-09-04 Thread Michael Paquier
On Mon, Sep 4, 2017 at 11:47 PM, Bossart, Nathan  wrote:
> I've made this change in v14 of the main patch.
>
> In case others had opinions regarding the de-duplication patch, I've
> attached that again as well.

+   /*
+* Create the relation list in a long-lived memory context so that it
+* survives transaction boundaries.
+*/
+   old_cxt = MemoryContextSwitchTo(AutovacMemCxt);
+   rangevar = makeRangeVar(tab->at_nspname, tab->at_relname, -1);
+   rel = makeVacuumRelation(rangevar, NIL, tab->at_relid);
+   rel_list = list_make1(rel);
+   MemoryContextSwitchTo(old_cxt);
That's way better, thanks for the new patch.

So vacuum_multiple_tables_v14.patch is good for a committer in my
opinion. With this patch, if the same relation is specified multiple
times, then it gets vacuum'ed that many times. Using the same column
multi-times results in an error as on HEAD, but that's not a new
problem with this patch.

So I would tend to think that the same column specified multiple times
should cause an error, and that we could let VACUUM run work N times
on a relation if it is specified this much. This feels more natural,
at least to me, and it keeps the code simple.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Secondary index access optimizations

2017-09-04 Thread Amit Langote
On 2017/09/04 20:46, Konstantin Knizhnik wrote:
> On 04.09.2017 12:59, Amit Langote wrote:
>> On 2017/09/04 18:19, Konstantin Knizhnik wrote:
>>> Concerning your suggestion to merge check_index_predicates() and
>>> remove_restrictions_implied_by_constraints() functions: may be it can be
>>> done, but frankly speaking I do not see much sense in it - there are too
>>> much differences between this functions and too few code reusing.
>> Maybe, you meant to address Thomas here. :)  Reading his comment again, I
>> too am a bit concerned about destructively modifying the input rel's
>> baserestrictinfo.  There should at least be a comment that that's being
>> done.
>
> But I have considered Thomas comment and extracted code updating
> relation's baserestrictinfo from
> relation_excluded_by_constraints() to
> remove_restrictions_implied_by_constraints() function. It was included in
> new version of the patch.

Sorry, I hadn't noticed the new patch.

I was confused because I didn't suggest "to merge check_index_predicates()
and remove_restrictions_implied_by_constraints() functions".  Perhaps, the
wording in my previous message wasn't clear.

Looking at the new patch, the changes regarding
remove_restrictions_implied_by_constraints() look fine.

Like Thomas, I'm not so sure about the whole predtest.c patch.  The core
logic in operator_predicate_proof() should be able to conclude that, say,
k < 21 is implied by k <= 20, which you are trying to address with some
special case code.  If there is still problem you think need to be fixed
here, a better place to look at would be somewhere around get_btree_test_op().

Thanks,
Amit



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench: Skipping the creating primary keys after initialization

2017-09-04 Thread Masahiko Sawada
On Tue, Sep 5, 2017 at 2:33 AM, Fabien COELHO  wrote:
>
> Hello Masahiko-san,
>
 Currently TRUNCATE pgbench_accounts command is executed within a
 transaction started immediately before it. If we move it out of the
 transaction, the table data will be truncated even if the copying data
 failed. Maybe we can do TRUNCATE pgbench_accounts, pgbench_history
 instead. Thought?
>>>
>>>
>>>
>>> Keep the truncate in the transaction, and truncate both (or all?) tables
>>> together.
>>
>>
>> Attached latest patch incorporated the comments I got so far. Please
>> review it.
>
>
> "g" does not work for me yet when after "f", only the message is slightly
> different, it chokes on another dependency, branches instead of accounts.
>
>   sh> ./pgbench -i -I ctpfg
>   cleaning up...
>   creating tables...
>   set primary keys...
>   set foreign keys...
>   ERROR:  cannot truncate a table referenced in a foreign key constraint
>   DETAIL:  Table "pgbench_history" references "pgbench_branches".
>   HINT:  Truncate table "pgbench_history" at the same time, or use TRUNCATE
> ... CASCADE.

Sorry, I'd missed something.

> I think that the whole data generation should be in *one* transaction which
> starts with "truncate pgbench_history, pgbench_branches, pgbench_tellers,
> pgbench_accounts;"

Agreed, and fixed.

>
> In passing, I think that the documentation should tell explicitely what the
> default value is (aka "ctgvp").

Agreed.

Attached the latest patch. Please review it.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


pgbench_custom_initialization_v10.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench tap tests & minor fixes.

2017-09-04 Thread Fabien COELHO


Anyway, done with the addition of a "chomp" parameter, leaving only the 
TAP test changes to consider.


Ok, thanks.


I'll set the CF entry back to "waiting on author" pending your
revisions of those.


See attached.

Removed scalar/array ref hack, plenty [] added everywhere to match.

Removed perl function prototypes.

Does not try to announce the number of tests, just tell when done.

--
Fabien.diff --git a/src/bin/pgbench/t/001_pgbench.pl b/src/bin/pgbench/t/001_pgbench.pl
deleted file mode 100644
index 34d686e..000
--- a/src/bin/pgbench/t/001_pgbench.pl
+++ /dev/null
@@ -1,25 +0,0 @@
-use strict;
-use warnings;
-
-use PostgresNode;
-use TestLib;
-use Test::More tests => 3;
-
-# Test concurrent insertion into table with UNIQUE oid column.  DDL expects
-# GetNewOidWithIndex() to successfully avoid violating uniqueness for indexes
-# like pg_class_oid_index and pg_proc_oid_index.  This indirectly exercises
-# LWLock and spinlock concurrency.  This test makes a 5-MiB table.
-my $node = get_new_node('main');
-$node->init;
-$node->start;
-$node->safe_psql('postgres',
-	'CREATE UNLOGGED TABLE oid_tbl () WITH OIDS; '
-	  . 'ALTER TABLE oid_tbl ADD UNIQUE (oid);');
-my $script = $node->basedir . '/pgbench_script';
-append_to_file($script,
-	'INSERT INTO oid_tbl SELECT FROM generate_series(1,1000);');
-$node->command_like(
-	[   qw(pgbench --no-vacuum --client=5 --protocol=prepared
-		  --transactions=25 --file), $script ],
-	qr{processed: 125/125},
-	'concurrent OID generation');
diff --git a/src/bin/pgbench/t/001_pgbench_with_server.pl b/src/bin/pgbench/t/001_pgbench_with_server.pl
new file mode 100644
index 000..5db2fd0
--- /dev/null
+++ b/src/bin/pgbench/t/001_pgbench_with_server.pl
@@ -0,0 +1,441 @@
+use strict;
+use warnings;
+
+use PostgresNode;
+use TestLib;
+use Test::More;
+
+# start a pgbench specific server
+my $node = get_new_node('main');
+$node->init;
+$node->start;
+
+# invoke pgbench
+sub pgbench
+{
+	my ($opts, $stat, $out, $err, $name, $files) = @_;
+	my @cmd = ('pgbench', split /\s+/, $opts);
+	my @filenames = ();
+	if (defined $files)
+	{
+		# note: files are ordered for determinism
+		for my $fn (sort keys %$files)
+		{
+			my $filename = $node->basedir . '/' . $fn;
+			push @cmd, '-f', $filename;
+			# cleanup file weight
+			$filename =~ s/\@\d+$//;
+			#push @filenames, $filename;
+			append_to_file($filename, $$files{$fn});
+		}
+	}
+	$node->command_checks_all(\@cmd, $stat, $out, $err, $name);
+	# cleanup?
+	#unlink @filenames or die "cannot unlink files (@filenames): $!";
+}
+
+# Test concurrent insertion into table with UNIQUE oid column.  DDL expects
+# GetNewOidWithIndex() to successfully avoid violating uniqueness for indexes
+# like pg_class_oid_index and pg_proc_oid_index.  This indirectly exercises
+# LWLock and spinlock concurrency.  This test makes a 5-MiB table.
+
+$node->safe_psql('postgres',
+	'CREATE UNLOGGED TABLE oid_tbl () WITH OIDS; '
+	  . 'ALTER TABLE oid_tbl ADD UNIQUE (oid);');
+
+pgbench('--no-vacuum --client=5 --protocol=prepared --transactions=25',
+  0, [ qr{processed: 125/125} ], [ qr{^$} ], 'concurrency OID generation',
+  { '001_pgbench_concurrent_oid_generation' =>
+'INSERT INTO oid_tbl SELECT FROM generate_series(1,1000);' });
+
+# cleanup
+$node->safe_psql('postgres', 'DROP TABLE oid_tbl;');
+
+# Trigger various connection errors
+pgbench('no-such-database',
+  1, [ qr{^$} ],
+  [ qr{connection to database "no-such-database" failed},
+qr{FATAL:  database "no-such-database" does not exist} ],
+  'no such database');
+
+pgbench('-U no-such-user template0',
+  1, [ qr{^$} ],
+  [ qr{connection to database "template0" failed},
+qr{FATAL:  role "no-such-user" does not exist} ],
+  'no such user');
+
+pgbench('-h no-such-host.postgresql.org',
+  1, [ qr{^$} ],
+  [ qr{connection to database "postgres" failed},
+# actual message may vary:
+# - Name or service not known
+# - Temporary failure in name resolution
+qr{could not translate host name "no-such-host.postgresql.org" to address: } ],
+  'no such host');
+
+pgbench('-S -t 1',
+  1, [ qr{^$} ], [ qr{Perhaps you need to do initialization} ],
+  'run without init');
+
+# Initialize pgbench tables scale 1
+pgbench('-i',
+  0, [ qr{^$} ],
+  [ qr{creating tables}, qr{vacuum}, qr{set primary keys}, qr{done\.} ],
+  'pgbench scale 1 initialization',
+);
+
+# Again, with all possible options
+pgbench(
+  # unlogged => faster test
+  '--initialize --scale=1 --unlogged --fillfactor=98 --foreign-keys --quiet' .
+  ' --tablespace=pg_default --index-tablespace=pg_default',
+  0, [ qr{^$}i ],
+  [ qr{creating tables}, qr{vacuum}, qr{set primary keys},
+qr{set foreign keys}, qr{done\.} ],
+  'pgbench scale 1 initialization');
+
+# Run all builtins for a few transactions: 20 checks
+pgbench(
+  '--transactions=5 -Dfoo=bla --client=2 --protocol=simple --builtin=t' .
+  ' --connect -n -v -n',
+  0,
+  [ qr{builtin: TPC-B}, qr{clients: 2\b}, qr{processed: 10/10},
+

[HACKERS] pg_basebackup behavior on non-existent slot

2017-09-04 Thread Jeff Janes
If I tell pg_basebackup to use a non-existent slot, it immediately reports
an error.  And then it exits with an error, but only after streaming the
entire database contents.

If you are doing this interactively and are on the ball, of course, you can
hit ctrl-C when you see the error message.

I don't know if this is exactly a bug, but it seems rather unfortunate.

Should the parent process of pg_basebackup be made to respond to SIGCHLD?
Or call waitpid(bgchild, , WNOHANG) in some strategic loop?



$ /usr/local/pgsql9_6/bin/pg_basebackup -D data_replica -P --slot foobar -Xs

pg_basebackup: could not send replication command "START_REPLICATION":
ERROR:  replication slot "foobar" does not exist
22384213/22384213 kB (100%), 1/1 tablespace
pg_basebackup: child process exited with error 1
pg_basebackup: removing data directory "data_replica"


Cheers,

Jeff


Re: [HACKERS] WIP: long transactions on hot standby feedback replica / proof of concept

2017-09-04 Thread Alexander Korotkov
On Mon, Sep 4, 2017 at 2:04 PM,  wrote:

> Our clients complain about this issue and therefore I want to raise the
> discussion and suggest several solutions to this problem:
>
> I. Why does PG use Fatal when Error is enough to release lock that rises
> lock conflict?
> "If (RecoveryConflictPending && DoingCommandRead)"
>
> II. Do we really need to truncate the table on hot standby exactly at the
> same time when truncate on master occurs?
>
> In my case conflict happens when the autovacuum truncates table tbl1 on
> master while backend on replica is performing a long transaction involving
> the same table tbl1. This happens because truncate takes an
> AccessExclusiveLock. To tackle this issue we have several options:
>
> 1. We can postpone the truncate on the master until all the replicas have
> finished their transactions (in this case, feedback requests to the master
> should be sent frequently)
> Patch 1
> vacuum_lazy_truncate.patch
>

I've following comments on this patch:
1) You shouldn't use ">=" to compare xids.  You should use
TransactionIdFollowsOrEquals()
function which handles transaction id wraparound correctly.
2) As I understood, this patch makes heap truncate only when no concurrent
transactions are running on both master and replicas with
hot_standby_feedback enabled.  For busy system, it would be literally
"never do heap truncate after vacuum".

2. Maybe there is an option somehow not to send AccessExclusiveLock and not
> to truncate table on the replica right away. We could try to wait a little
> and truncate tbl1 on replica again.
>
> Here is a patch that makes replica skip truncate WAL record if some
> transaction using the same table is already running on replica. And it also
> forces master to send truncate_wal to replica with actual table length
> every time autovacuum starts.
> Patch 2
> standby_truncate_skip_v1.patch
>
> In this case the transaction which is running for several hours won’t be
> interrupted because of truncate. Even if for some reason we haven’t
> truncated this table tbl1 right away, nothing terrible will happen. The
> next truncate wal record will reduce the file size by the actual length (if
> some inserts or updates have been performed on master).
>

Since you wrote this patch under on my request, let me clarify its idea
little more.

Currently, lazy_truncate_heap() is very careful on getting
AccessExclusiveLock to truncate heap.  It doesn't want either block other
transaction or wait for this lock too long.  If lock isn't acquired after
some number of tries lazy_truncate_heap() gives up.  However, once
lazy_truncate_heap() acquires AccessExclusiveLock is acquired on master, it
would be replayed on replicas where it will conflict with read-only
transactions if any.  That leads to unexpected behaviour when
hot_standby_feedback is on.

Idea of fixing that is to apply same logic of getting AccessExclusiveLock
on standby as on master: give up if somebody is holding conflicting lock
for long enough.  That allows standby to have more free pages at the end of
heap than master have.  That shouldn't lead to errors since those pages are
empty, but allows standby to waste some extra space.  In order to mitigate
this deficiency, we're generating XLOG_SMGR_TRUNCATE records more frequent:
on finish of every vacuum.  Therefore, if even standby gets some extra
space of empty pages, it would be corrected during further vacuum cycles.

@@ -397,7 +425,6 @@ lazy_vacuum_rel(Relation onerel, int options,
> VacuumParams *params,
>   appendStringInfo(, _("avg read rate: %.3f MB/s, avg write rate:
> %.3f MB/s\n"),
>   read_rate, write_rate);
>   appendStringInfo(, _("system usage: %s"), pg_rusage_show());
> -
>   ereport(LOG,
>   (errmsg_internal("%s", buf.data)));
>   pfree(buf.data);
> @@ -1820,7 +1847,7 @@ lazy_truncate_heap(Relation onerel, LVRelStats
> *vacrelstats)
>   vacrelstats->pages_removed += old_rel_pages - new_rel_pages;
>   vacrelstats->rel_pages = new_rel_pages;
>
> - ereport(elevel,
> + ereport(WARNING,
>   (errmsg("\"%s\": truncated %u to %u pages",
>   RelationGetRelationName(onerel),
>   old_rel_pages, new_rel_pages),


Ivan, what these changes are supposed to do?

diff --git a/src/backend/storage/lmgr/lock.c
> b/src/backend/storage/lmgr/lock.c
> index 2b26173..f04888e 100644
> --- a/src/backend/storage/lmgr/lock.c
> +++ b/src/backend/storage/lmgr/lock.c
> @@ -816,7 +816,7 @@ LockAcquireExtended(const LOCKTAG *locktag,
>   XLogStandbyInfoActive())
>   {
>   LogAccessExclusiveLockPrepare();
> - log_lock = true;
> +// log_lock = true;
>   }
>
>   /*


Perhaps, it's certainly bad idea to disable replaying AccessExclusiveLock
*every time*, we should skip taking AccessExclusiveLock only while writing
XLOG_SMGR_TRUNCATE record.  Besides that, this code violates out coding
style.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] pgbench tap tests & minor fixes.

2017-09-04 Thread Tom Lane
Fabien COELHO  writes:
>> * I do not think we need expr_scanner_chomp_substring.  Of the three
>> existing callers of expr_scanner_get_substring, one is doing a manual
>> chomp afterwards, and the other two need chomps per your patch.

> Ok. I thought that I would get a slap on the hand if I changed the initial 
> function, but I get one not for changing it:-)

Well, more for not looking at the other caller and noting it needed
this too.  Anyway, done with the addition of a "chomp" parameter,
leaving only the TAP test changes to consider.

I'll set the CF entry back to "waiting on author" pending your
revisions of those.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench - minor fix for meta command only scripts

2017-09-04 Thread Jeff Janes
On Mon, Sep 4, 2017 at 1:56 PM, Fabien COELHO  wrote:

>
> Hello Jeff,
>
> I have fixed a bug introduced in the patch by changing && by || in the
 (min_sec > 0 && maxsock != -1) condition which was inducing errors with
 multi-threads & clients...

>>>
> Since this commit (12788ae49e1933f463bc5), if I use the --rate to throttle
>> the transaction rate, it does get throttled to about the indicated speed,
>> but the pg_bench consumes the entire CPU.
>>
>>
>> At the block of code starting
>>if (min_usec > 0 && maxsock != -1)
>>
>> If maxsock == -1, then there is no sleep happening.
>>
>
> Argh, shame on me:-(
>
> I cannot find the "induced errors" I was refering to in the message...
> Sleeping is definitely needed to avoid a hard loop.
>
> Patch attached fixes it and does not seem introduce any special issue...
>
> Should probably be backpatched.
>
> Thanks for the debug!


Thanks Fabien, that works for me.

But if min_sec <= 0, do we want to do whatever it is that we already know
is over-do, before stopping to do the select?  If it is safe to go through
this code path when maxsock == -1, then should we just change it to this?

if (min_usec > 0)

Cheers,

Jeff


Re: [HACKERS] pgbench - minor fix for meta command only scripts

2017-09-04 Thread Fabien COELHO


Hello Jeff,


I have fixed a bug introduced in the patch by changing && by || in the
(min_sec > 0 && maxsock != -1) condition which was inducing errors with
multi-threads & clients...



Since this commit (12788ae49e1933f463bc5), if I use the --rate to throttle
the transaction rate, it does get throttled to about the indicated speed,
but the pg_bench consumes the entire CPU.


At the block of code starting
   if (min_usec > 0 && maxsock != -1)

If maxsock == -1, then there is no sleep happening.


Argh, shame on me:-(

I cannot find the "induced errors" I was refering to in the message... 
Sleeping is definitely needed to avoid a hard loop.


Patch attached fixes it and does not seem introduce any special issue...

Should probably be backpatched.

Thanks for the debug!

--
Fabien.diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 364e254..3e23a6a 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -4587,7 +4587,7 @@ threadRun(void *arg)
 		 * or it's time to print a progress report.  Update input_mask to show
 		 * which client(s) received data.
 		 */
-		if (min_usec > 0 && maxsock != -1)
+		if (min_usec > 0 || maxsock != -1)
 		{
 			int			nsocks; /* return from select(2) */
 

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JIT compiling expressions/deform + inlining prototype v2.0

2017-09-04 Thread Andres Freund
Hi,

On 2017-09-04 20:01:03 +0300, Konstantin Knizhnik wrote:
> > I previously had an early prototype of JITing [1] expression evaluation
> > and tuple deforming.  I've since then worked a lot on this.
> > 
> > Here's an initial, not really pretty but functional, submission. This
> > supports all types of expressions, and tuples, and allows, albeit with
> > some drawbacks, inlining of builtin functions.  Between the version at
> > [1] and this I'd done some work in c++, because that allowed to
> > experiment more with llvm, but I've now translated everything back.
> > Some features I'd to re-implement due to limitations of C API.
> > 
> > 
> > I've whacked this around quite heavily today, this likely has some new
> > bugs, sorry for that :(
> 
> Can you please clarify the following fragment calculating attributes
> alignment:

Hi. That piece of code isn't particularly clear (and has a bug in the
submitted version), I'm revising it.

> 
> /* compute what following columns are aligned to */
> +if (att->attlen < 0)
> +{
> +/* can't guarantee any alignment after varlen field */
> +attcuralign = -1;
> +}
> +else if (att->attnotnull && attcuralign >= 0)
> +{
> +Assert(att->attlen > 0);
> +attcuralign += att->attlen;
> +}
> +else if (att->attnotnull)
> +{
> +/*
> + * After a NOT NULL fixed-width column, alignment is
> + * guaranteed to be the minimum of the forced alignment and
> + * length.  XXX
> + */
> +attcuralign = alignto + att->attlen;
> +Assert(attcuralign > 0);
> +}
> +else
> +{
> +//elog(LOG, "attnotnullreset: %d", attnum);
> +attcuralign = -1;
> +}
> 
> 
> I wonder why in this branch (att->attnotnull && attcuralign >= 0)
> we are not adding "alignto" and comment in the following branch else if
> (att->attnotnull)
> seems to be not related to this branch, because in this case attcuralign is
> expected to be less then zero wjhich means that previous attribute is varlen
> field.

Yea, I've changed that already, although it's currently added earlier,
because the alignment is needed before, to access the column correctly.
I've also made number of efficiency improvements, primarily to access
columns with an absolute offset if all preceding ones are fixed width
not null columns - that is quite noticeable performancewise.


Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-09-04 Thread Fabien COELHO



I think we should go with Daniel's idea for all three parts.


I'm okay with that, although probably it should be an independent patch.


In the documentation, I do not think that both SERVER_VERSION_NAME and
_NUM (or whatever their chosen name) deserve two independent explanations
with heavy repeats just one after the other, and being treated differently
from VERSION_*.


I started with it that way, but it seemed pretty unreadable with the
parenthetical examples added.  And I think we need the examples,
particularly the one pointing out that you might get something like "beta".


Yes for "beta" which is also in the v8 patch I sent. One shared doc with 
different examples does not look too bad to me, and having things repeated 
so closely do not look good.



Seeing it as is, it calls for having "SERVER_VERSION" as well, but I'm not
sure of the better way to get it. I tried with "SELECT VERSION() AS
SERVER_VERSION \gset" but varnames are lowerized.


The problem there is you can't get version() without an extra round trip
to the server --- and an extra logged query --- which people are going to
complain about.


Yep.

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench - minor fix for meta command only scripts

2017-09-04 Thread Jeff Janes
On Mon, Sep 26, 2016 at 1:01 AM, Heikki Linnakangas  wrote:

> On 09/24/2016 12:45 PM, Fabien COELHO wrote:
>
>>
>> Attached are some small changes to your version:
>>
>> I have added the sleep_until fix.
>>
>> I have fixed a bug introduced in the patch by changing && by || in the
>> (min_sec > 0 && maxsock != -1) condition which was inducing errors with
>> multi-threads & clients...
>>
>> I have factored out several error messages in "commandFailed", in place of
>> the "metaCommandFailed", and added the script number as well in the error
>> messages. All messages are now specific to the failed command.
>>
>> I have added two states to the machine:
>>
>>   - CSTATE_CHOOSE_SCRIPT which simplifies threadRun, there is now one call
>> to chooseScript instead of two before.
>>
>>   - CSTATE_END_COMMAND which manages is_latencies and proceeding to the
>> next command, thus merging the three instances of updating the stats
>> that were in the first version.
>>
>> The later state means that processing query results is included in the per
>> statement latency, which is an improvement because before I was getting
>> some transaction latency significantly larger that the apparent sum of the
>> per-statement latencies, which did not make much sense...
>>
>
> Ok. I agree that makes more sense.
>
> I have added & updated a few comments.
>>
>
> Thanks! Committed.
>
> There are some places where the break could be a pass through
>> instead, not sure how desirable it is, I'm fine with break.
>>
>
> I left them as "break". Pass-throughs are error-prone, and make it more
> difficult to read, IMHO. The compiler will optimize it into a pass-through
> anyway, if possible and worthwhile, so there should be no performance
> difference.



Since this commit (12788ae49e1933f463bc5), if I use the --rate to throttle
the transaction rate, it does get throttled to about the indicated speed,
but the pg_bench consumes the entire CPU.


At the block of code starting
if (min_usec > 0 && maxsock != -1)

If maxsock == -1, then there is no sleep happening.

Cheers,

Jeff


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-09-04 Thread Tom Lane
Fabien COELHO  writes:
> I also like Daniel's idea to update formatting rules, eg following what is 
> done for environment variables and accepting that it won't fit in one page 
> anyway.

Yeah.  When you look at all three portions of the helpVariables output,
it's clear that we have faced this issue multiple times before and not
been too consistent about how we dealt with it.  There are places that
go over 80 columns; there are places that break the description into
multiple lines (and some of those *also* exceed 80 columns); there are
places that just shove the description onto the next line.

I think we should go with Daniel's idea for all three parts.

> I like trying to keep the 80 (or 88 it seems) columns limit if possible, 
> because my getting older eyes water on long lines.

Me too :-(.  Also, it seems like we should not assume that we have
indefinite amounts of space in both dimensions.  We've already accepted
the need to page vertically, so let's run with that and try to hold
the line on horizontal space.

> In the documentation, I do not think that both SERVER_VERSION_NAME and 
> _NUM (or whatever their chosen name) deserve two independent explanations 
> with heavy repeats just one after the other, and being treated differently 
> from VERSION_*.

I started with it that way, but it seemed pretty unreadable with the
parenthetical examples added.  And I think we need the examples,
particularly the one pointing out that you might get something like "beta".

> Seeing it as is, it calls for having "SERVER_VERSION" as well, but I'm not 
> sure of the better way to get it. I tried with "SELECT VERSION() AS 
> SERVER_VERSION \gset" but varnames are lowerized.

The problem there is you can't get version() without an extra round trip
to the server --- and an extra logged query --- which people are going to
complain about.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench tap tests & minor fixes.

2017-09-04 Thread Fabien COELHO


Hello Tom,


As for included bug fixes, I can do separate patches, but I think that it
is enough to first commit the pgbench files and then the tap-test files,
in that order. I'll see what committers say.


Starting to look at this.  I concur that the bug fixes ought to be
committed separately, but since they're in separate files it's not hard
to disassemble the patch.


Ok.


A couple of thoughts --

* I do not think we need expr_scanner_chomp_substring.  Of the three
existing callers of expr_scanner_get_substring, one is doing a manual
chomp afterwards, and the other two need chomps per your patch.
Seems to me we should just include the chomp logic in
expr_scanner_get_substring.  Maybe it'd be worth adding a "bool chomp"
argument to it, but I think that would be more for clarity than
usefulness.


Ok. I thought that I would get a slap on the hand if I changed the initial 
function, but I get one not for changing it:-)



* I do not like the API complexity of command_checks_all, specifically
not the business of taking either a scalar or an array for the cmd,
out, and err arguments.  I think it'd be clearer and less bug-prone
to just take arrays, full stop.  Maybe it's just that I'm a crummy Perl
programmer, but I do not see uses of this "ref ... eq 'ARRAY'" business
elsewhere in our test scaffolding, and this doesn't seem like a great
place to introduce it.  At worst you'd need to add brackets around the
arguments in a few callers.


Hmmm. I find it quite elegant and harmless, but it can be removed.


* In the same vein, I don't know what this does:

sub pgbench($)

and I see no existing instances of it elsewhere in our tree.  I think it'd
be better not to require advanced degrees in Perl programming in order to
read the test cases.


It just says that 5 scalars are expected, so it would complain if "type" 
or number do not match. Now, why give type hints if they are not 
mandatory, as devs can always detect their errors by extensive testing 
instead:-)


But I agree that it is not a usual Perl stance and it can be removed.


* Another way in which command_checks_all introduces API complexity is
that it accounts for a variable number of tests depending on how many
patterns are provided.  This seems like a mess.  I see that you have
in some places (not very consistently) annotated call sites as to how
many tests they account for, but who's going to do the math to make
sure everything adds up?


Perl:-) I run the test, figure out the number it found in the resulting 
error message, and update the number in the source. Not too hard:-)


Maybe it'd be better to not use like(), or do something else so that 
each command_checks_all call counts as one Test::More test rather than 
N.


Or just forget prespecifying a test count and use done_testing 
instead.


Yep, "done_testing" looks fine, I'll investigate that, but other test 
seemed to insist on declaring the expected number. Now "done_testing" may 
be a nicer option if some tests need to be skipped on some platform.


--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench tap tests & minor fixes.

2017-09-04 Thread Tom Lane
Nikolay Shaplov  writes:
> 2. Patch for -t/-R/-L case.

> Current pgbench does not process -t/-R/-L case correctly. This was also fixed 
> in this patch. 

This portion of the patch seems uncontroversial, so I've pushed it, with
some minor cosmetic adjustments (mostly comments).

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-09-04 Thread Fabien COELHO


Hello Tom,


So I thought we were done bikeshedding the variable names for this
feature, but as I was reviewing the patch with intent to commit,
I noticed you hadn't updated helpVariables() to mention them.


Indeed.


Possibly you missed this because it doesn't mention VERSION either,
but that doesn't seem very defensible.


Long time ago. Maybe I greped for it to check where it was appearing and 
did not find what does not exist...



I inserted text to describe all five variables --- but
"SERVER_VERSION_NAME" is too long to fit in the available column space.


Indeed.


In the attached updated patch, I moved all the descriptive text over one
column, and really should have moved it over two columns; but adding even
one space makes a couple of the lines longer than 80 columns when they
were not before.  Since we've blown past 80 columns on some of the other
output, maybe that doesn't matter.  Or maybe we should shorten this
variable name so it doesn't force reformatting of all this text.


It seems that PSQL_EDITOR_LINENUMBER_ARG (25 characters) has been accepted 
before for an environment variable.



Possible ideas include "DB_VERSION_NAME", "SERVER_VER_NAME", or
"SERVER_VERSION_STR".  (The last saves only one character, whereas
we really need to save two if we're trying not to be wider than any
other documented variable.)

Thoughts?


Like Pavel, I must admit that I do not like these options much, nor the 
other ones down thread: I hate "hungarian" naming, ISTM that mixing abbrev 
and full words is better avoided. These are really minor aethetical 
preferences that I may break occasionally, eg with _NUM for the nice 
similarity with _NAME.


ISTM that it needs to be consistent with the pre-existing VERSION, which 
rules out "VER".


Now if this is a bloker, I think that anything is more useful than no 
variable as it is useful to have them for simple scripting test through 
server side expressions.


I also like Daniel's idea to update formatting rules, eg following what is 
done for environment variables and accepting that it won't fit in one page 
anyway.


  SERVER_VERSION NAME
bla bla bla


Attached updated patch changes helpVariables() as we'd need to do if
not renaming, and does some minor doc/comment wordsmithing elsewhere.


Given my broken English, I'm fine with wordsmithing.

I like trying to keep the 80 (or 88 it seems) columns limit if possible, 
because my getting older eyes water on long lines.


In the documentation, I do not think that both SERVER_VERSION_NAME and 
_NUM (or whatever their chosen name) deserve two independent explanations 
with heavy repeats just one after the other, and being treated differently 
from VERSION_*.


The same together-ness approach can be used for helpVariables(), see v8 
attached for instance.


Seeing it as is, it calls for having "SERVER_VERSION" as well, but I'm not 
sure of the better way to get it. I tried with "SELECT VERSION() AS 
SERVER_VERSION \gset" but varnames are lowerized.


--
Fabien.diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index c592eda..79646fd 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -3688,6 +3688,19 @@ bar
   
 
   
+SERVER_VERSION_NAME
+SERVER_VERSION_NUM
+
+
+The server's version as a string, for example 9.6.2, 10.1 or 11beta1,
+and in numeric form, for example 90602, 11.
+This is set every time you connect to a database
+(including program start-up), but can be changed or unset.
+
+
+  
+
+  
 SINGLELINE
 
 
@@ -3733,10 +3746,15 @@ bar
 
   
 VERSION
+VERSION_NAME
+VERSION_NUM
 
 
-This variable is set at program start-up to
-reflect psql's version.  It can be changed or unset.
+ These variables are set at program start-up to reflect
+ psql's version, respectively as a verbose string,
+ a short string (e.g., 9.6.2, 10.1,
+ or 11beta1), and a number (e.g., 90602
+ or 11).  They can be changed or unset.
 
 
   
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 96f3a40..237e063 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -3337,6 +3337,9 @@ checkWin32Codepage(void)
 void
 SyncVariables(void)
 {
+	char		vbuf[32];
+	const char *server_version;
+
 	/* get stuff from connection */
 	pset.encoding = PQclientEncoding(pset.db);
 	pset.popt.topt.encoding = pset.encoding;
@@ -3348,6 +3351,20 @@ SyncVariables(void)
 	SetVariable(pset.vars, "PORT", PQport(pset.db));
 	SetVariable(pset.vars, "ENCODING", pg_encoding_to_char(pset.encoding));
 
+	/* this bit should match connection_warnings(): */
+	/* Try to get full text form of version, might include "devel" etc */
+	server_version = PQparameterStatus(pset.db, "server_version");
+	/* Otherwise fall back on pset.sversion 

Re: [HACKERS] pgbench tap tests & minor fixes.

2017-09-04 Thread Tom Lane
Fabien COELHO  writes:
> As for included bug fixes, I can do separate patches, but I think that it 
> is enough to first commit the pgbench files and then the tap-test files, 
> in that order. I'll see what committers say.

Starting to look at this.  I concur that the bug fixes ought to be
committed separately, but since they're in separate files it's not hard
to disassemble the patch.

A couple of thoughts --

* I do not think we need expr_scanner_chomp_substring.  Of the three
existing callers of expr_scanner_get_substring, one is doing a manual
chomp afterwards, and the other two need chomps per your patch.
Seems to me we should just include the chomp logic in
expr_scanner_get_substring.  Maybe it'd be worth adding a "bool chomp"
argument to it, but I think that would be more for clarity than
usefulness.

* I do not like the API complexity of command_checks_all, specifically
not the business of taking either a scalar or an array for the cmd,
out, and err arguments.  I think it'd be clearer and less bug-prone
to just take arrays, full stop.  Maybe it's just that I'm a crummy Perl
programmer, but I do not see uses of this "ref ... eq 'ARRAY'" business
elsewhere in our test scaffolding, and this doesn't seem like a great
place to introduce it.  At worst you'd need to add brackets around the
arguments in a few callers.

* In the same vein, I don't know what this does:

sub pgbench($)

and I see no existing instances of it elsewhere in our tree.  I think it'd
be better not to require advanced degrees in Perl programming in order to
read the test cases.

* Another way in which command_checks_all introduces API complexity is
that it accounts for a variable number of tests depending on how many
patterns are provided.  This seems like a mess.  I see that you have
in some places (not very consistently) annotated call sites as to how
many tests they account for, but who's going to do the math to make
sure everything adds up?  Maybe it'd be better to not use like(), or
do something else so that each command_checks_all call counts as one
Test::More test rather than N.  Or just forget prespecifying a test
count and use done_testing instead.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench - allow to store select results into variables

2017-09-04 Thread Fabien COELHO



Looks good to me. I have marked the patch status as "Ready for
committer".


LGTM too.  Pushed with a minor adjustment to make parseVariable()
have exactly the same test as valid_variable_name().


 \set ありがとうございました 1
 \set 谢谢 2
 \set dankeschön 3

:-)

--
Fabien.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT .. ON CONFLICT DO SELECT [FOR ..]

2017-09-04 Thread Marko Tiikkaja
On Mon, Sep 4, 2017 at 7:46 PM, Peter Geoghegan  wrote:

> On Mon, Sep 4, 2017 at 10:05 AM, Marko Tiikkaja  wrote:

> But I'm generally against
> > interfaces which put arbitrary restrictions on what power users can do on
> > the basis that less experienced people might misuse the interface.
>
> I agree with that as a broad principle, but disagree that it applies
> here. Or if it does, then you have yet to convince me of it. In all
> sincerity, if you think that I just don't understand your perspective,
> then please try to make me understand it. Would a power user actually
> miss ON CONFLICT DO SELECT? And if that's the case, would it really be
> something they'd remember 5 minutes later?
>

I don't know, but I don't want to be limiting what people can do just
because I can't come up with a use case.

In any case, others, feel free to chime in.


.m


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-09-04 Thread Pavel Stehule
2017-09-04 19:35 GMT+02:00 Tom Lane :

> "Daniel Verite"  writes:
> > The two-space left margin on the entire block does not add that
> > much to readability, IMV, so maybe we could reclaim these
> > two characters.
>
> Well, it's a sub-list of the entire output of helpVariables(), so
> I think some indentation is a good idea.
>
> > That would look like the following, for example, with a 3-space margin
> > for the description:
>
> > AUTOCOMMIT
> >If set, successful SQL commands are automatically committed
>
> But we could do something close to that, say two-space indent for the
> variable names and four-space for the descriptions.
>
> > To me that looks like a good trade-off: it eases the size constraints
> > for both the description and the name of the variable, at the cost
> > of consuming one more line per variable, but that's why the pager
> > is for.
>
> Yeah, we're already past the point where it's likely that
> helpVariables()'s output would fit on one screen for anybody, so
> maybe this is the best way.
>

the "less" pager supports horizontal scrolling very well.

regards

Pavel

>
> regards, tom lane
>


Re: [HACKERS] INSERT .. ON CONFLICT DO SELECT [FOR ..]

2017-09-04 Thread Peter Geoghegan
On Mon, Sep 4, 2017 at 10:05 AM, Marko Tiikkaja  wrote:
> I had to look at the patch to see what I'd done, and the tests suggest that
> we already complain about this with if a FOR [lock level] clause is present:
>
>+begin transaction isolation level read committed;
>+insert into selfconflict values (10,1), (10,2) on conflict(f1) do select
> for update returning *;
>+ERROR:  ON CONFLICT command cannot affect row a second time
>+HINT:  Ensure that no rows proposed for insertion within the same
> command have duplicate constrained values.
>+commit;
>
> (in expected/insert_conflict.out.)

Right. I saw that you do it for ON CONFLICT DO SELECT FOR UPDATE, and so on.

>> On to the subject of my more general reservation: Why did you include
>> ON CONFLICT DO SELECT at all? Why not just ON CONFLICT DO SELECT FOR
>> UPDATE (and FOR SHARE, ...) ?
>
>
> If you don't intend to actually do anything with the row in the same
> database transaction, locking seems unnecessary.  For example, you might
> want to provide an idempotent method in your API which inserts the data and
> returns the ID to the caller.  The transaction is committed by the time the
> client sees the data, so locking is just extra overhead.

That makes sense, but I personally feel that the plain ON CONFLICT DO
SELECT variant isn't worth the trouble. I welcome other people's
opinions on that.

>> I think I know what you're going to say about it: ON CONFLICT DO
>> NOTHING doesn't lock the conflicting row, so why should I insist on it
>> here (why not have an ON CONFLICT DO SELECT variant, too)?
>
>
> I wasn't going to say that, no.

Well, it was a foundation for the ON CONFLICT DO SELECT variant that I
actually agree with, in any case.

>> In other words, while ON CONFLICT DO NOTHING may have set a precedent
>> here, it at least has semantics that limit the consequences of not
>> locking the row; and it *feels* a little bit dirty to use it
>> indifferently, even where that makes sense. ON CONFLICT DO SELECT is
>> probably going to be used within wCTEs some of the time. I'm not sure
>> that a plain ON CONFLICT DO SELECT variant won't allow unpredictable,
>> complicated problems when composed within a more complicated query.
>
>
> Yeah, in most cases you'd probably do a  SELECT FOR KEY SHARE.  And I
> wouldn't mind that being the default, but it would mean inventing special
> syntax with no previous precedent (as far as I know) for not locking the row
> in cases where the user doesn't want that.  And that doesn't seem too
> attractive, either.

I'm not in favor of spellings that are inconsistent with SELECT FOR ... .

> Maybe it would be better to make the default sensible for people who are not
> super familiar with postgres.  I don't know.  And the more I think about the
> use case above, the less I care about it.

I was going to ask you to offer a real-world example of where the
plain ON CONFLICT DO SELECT variant is compelling. If you care about
it (if you're not going to concede the point), then I still suggest
doing so.

> But I'm generally against
> interfaces which put arbitrary restrictions on what power users can do on
> the basis that less experienced people might misuse the interface.

I agree with that as a broad principle, but disagree that it applies
here. Or if it does, then you have yet to convince me of it. In all
sincerity, if you think that I just don't understand your perspective,
then please try to make me understand it. Would a power user actually
miss ON CONFLICT DO SELECT? And if that's the case, would it really be
something they'd remember 5 minutes later?

I actually think that ON CONFLICT DO NOTHING does have semantics that
are, shall we say, questionable. That's the cost of having it not lock
conflicting rows during big ETL operations. That's a huge practical
benefit for ETL use-cases. Whereas here, with ON CONFLICT DO SELECT,
I see a somewhat greater risk, and a much, much smaller benefit. A
benefit that might actually be indistinguishable from zero.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench - allow to store select results into variables

2017-09-04 Thread Tom Lane
Tatsuo Ishii  writes:
>> Here is a v3 with a more precise comment.

> Looks good to me. I have marked the patch status as "Ready for
> committer".

LGTM too.  Pushed with a minor adjustment to make parseVariable()
have exactly the same test as valid_variable_name().

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-09-04 Thread Tom Lane
"Daniel Verite"  writes:
> The two-space left margin on the entire block does not add that
> much to readability, IMV, so maybe we could reclaim these
> two characters.

Well, it's a sub-list of the entire output of helpVariables(), so
I think some indentation is a good idea.

> That would look like the following, for example, with a 3-space margin
> for the description:

> AUTOCOMMIT
>If set, successful SQL commands are automatically committed

But we could do something close to that, say two-space indent for the
variable names and four-space for the descriptions.

> To me that looks like a good trade-off: it eases the size constraints
> for both the description and the name of the variable, at the cost
> of consuming one more line per variable, but that's why the pager
> is for.

Yeah, we're already past the point where it's likely that
helpVariables()'s output would fit on one screen for anybody, so
maybe this is the best way.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench: Skipping the creating primary keys after initialization

2017-09-04 Thread Fabien COELHO


Hello Masahiko-san,


Currently TRUNCATE pgbench_accounts command is executed within a
transaction started immediately before it. If we move it out of the
transaction, the table data will be truncated even if the copying data
failed. Maybe we can do TRUNCATE pgbench_accounts, pgbench_history
instead. Thought?



Keep the truncate in the transaction, and truncate both (or all?) tables
together.


Attached latest patch incorporated the comments I got so far. Please review it.


"g" does not work for me yet when after "f", only the message is slightly 
different, it chokes on another dependency, branches instead of accounts.


  sh> ./pgbench -i -I ctpfg
  cleaning up...
  creating tables...
  set primary keys...
  set foreign keys...
  ERROR:  cannot truncate a table referenced in a foreign key constraint
  DETAIL:  Table "pgbench_history" references "pgbench_branches".
  HINT:  Truncate table "pgbench_history" at the same time, or use TRUNCATE ... 
CASCADE.

I think that the whole data generation should be in *one* transaction 
which starts with "truncate pgbench_history, pgbench_branches, 
pgbench_tellers, pgbench_accounts;"


In passing, I think that the documentation should tell explicitely what 
the default value is (aka "ctgvp").


--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER INDEX .. SET STATISTICS ... behaviour

2017-09-04 Thread Adrien Nayrat
On 09/04/2017 06:16 PM, Alexander Korotkov wrote:
> Looks good for me.  I've integrated those changes in the patch.
> New revision is attached.

Thanks, I changed status to "ready for commiter".

-- 
Adrien NAYRAT

http://dalibo.com - http://dalibo.org



signature.asc
Description: OpenPGP digital signature


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-09-04 Thread Pavel Stehule
2017-09-04 19:08 GMT+02:00 Tom Lane :

> Pavel Stehule  writes:
> > 2017-09-04 18:56 GMT+02:00 Tom Lane :
> >> Just had another idea: maybe make the new variable names
> >> SERVER_VERSION_S
> >> SERVER_VERSION_N
> >> VERSION_S
> >> VERSION_N
>
> > SERVER_VERSION_STR looks better than this.
>
> I dunno, I'm not very pleased with the "STR" idea because the verbose
> version string is also a string.  Documenting "S" as meaning "short"
> would dodge that objection.
>
>
You have to necessary read doc to understand this, and I don't think so it
is good idea.

So SERVER_VERSION_NAME looks like best solution to me.




> regards, tom lane
>


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-09-04 Thread Daniel Verite
Tom Lane wrote:

> Since we've blown past 80 columns on some of the other
> output, maybe that doesn't matter.  Or maybe we should shorten this
> variable name so it doesn't force reformatting of all this text.

The two-space left margin on the entire block does not add that
much to readability, IMV, so maybe we could reclaim these
two characters.

Another idea: since there are already several variables for which
the text + spacing + presentation don't fit anyway, 
we could forget about the tabular presentation and grow
vertically.

That would look like the following, for example, with a 3-space margin
for the description:

AUTOCOMMIT
   If set, successful SQL commands are automatically committed
COMP_KEYWORD_CASE
   Determines the case used to complete SQL key words
   [lower, upper, preserve-lower, preserve-upper]
DBNAME
   The currently connected database name
ECHO
   Controls what input is written to standard output
   [all, errors, none, queries]
ECHO_HIDDEN
   If set, display internal queries executed by backslash commands;
   if set to "noexec", just show without execution
ENCODING
   Current client character set encoding
FETCH_COUNT
   The number of result rows to fetch and display at a time
   (default: 0=unlimited)
etc..

To me that looks like a good trade-off: it eases the size constraints
for both the description and the name of the variable, at the cost
of consuming one more line per variable, but that's why the pager
is for.


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-09-04 Thread Tom Lane
Pavel Stehule  writes:
> 2017-09-04 18:56 GMT+02:00 Tom Lane :
>> Just had another idea: maybe make the new variable names
>> SERVER_VERSION_S
>> SERVER_VERSION_N
>> VERSION_S
>> VERSION_N

> SERVER_VERSION_STR looks better than this.

I dunno, I'm not very pleased with the "STR" idea because the verbose
version string is also a string.  Documenting "S" as meaning "short"
would dodge that objection.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT .. ON CONFLICT DO SELECT [FOR ..]

2017-09-04 Thread Marko Tiikkaja
On Mon, Sep 4, 2017 at 4:09 AM, Peter Geoghegan  wrote:

> On Tue, Aug 15, 2017 at 12:17 AM, Marko Tiikkaja  wrote:
> > On Tue, Aug 15, 2017 at 7:43 AM, Peter Geoghegan  wrote:
> >>
> >> On Mon, Aug 14, 2017 at 6:23 PM, Marko Tiikkaja  wrote:
> >> > Attached is a patch for $SUBJECT.  It might still be a bit rough
> around
> >> > the
> >> > edges and probably light on docs and testing, but I thought I'd post
> it
> >> > anyway so I won't forget.
> >>
> >> Is it possible for ON CONFLICT DO SELECT to raise a cardinality
> >> violation error? Why or why not?
> >
> >
> > No.  I don't see when that would need to happen.  But I'm guessing you
> have
> > a case in mind?
>
> Actually, no, I didn't. But I wondered if you did. I think that it
> makes some sense not to, now that I think about it. ON CONFLICT DO
> NOTHING doesn't have cardinality violations, because it cannot affect
> a row twice if there are duplicates proposed for insertion (at least,
> not through any ON CONFLICT related codepath). But, this opinion only
> applies to ON CONFLICT DO SELECT, not ON CONFLICT DO SELECT FOR
> UPDATE. And I have other reservations, which I'll go in to
> momentarily, about ON CONFLICT DO SELECT in general. So, the upshot is
> that I think we need cardinality violations in all cases for this
> feature. Why would a user ever not want to know that the row was
> locked twice?
>

I had to look at the patch to see what I'd done, and the tests suggest that
we already complain about this with if a FOR [lock level] clause is present:

   +begin transaction isolation level read committed;
   +insert into selfconflict values (10,1), (10,2) on conflict(f1) do
select for update returning *;
   +ERROR:  ON CONFLICT command cannot affect row a second time
   +HINT:  Ensure that no rows proposed for insertion within the same
command have duplicate constrained values.
   +commit;

(in expected/insert_conflict.out.)


> On to the subject of my more general reservation: Why did you include
> ON CONFLICT DO SELECT at all? Why not just ON CONFLICT DO SELECT FOR
> UPDATE (and FOR SHARE, ...) ?
>

If you don't intend to actually do anything with the row in the same
database transaction, locking seems unnecessary.  For example, you might
want to provide an idempotent method in your API which inserts the data and
returns the ID to the caller.  The transaction is committed by the time the
client sees the data, so locking is just extra overhead.


> I think I know what you're going to say about it: ON CONFLICT DO
> NOTHING doesn't lock the conflicting row, so why should I insist on it
> here (why not have an ON CONFLICT DO SELECT variant, too)?
>

I wasn't going to say that, no.


> In other words, while ON CONFLICT DO NOTHING may have set a precedent
> here, it at least has semantics that limit the consequences of not
> locking the row; and it *feels* a little bit dirty to use it
> indifferently, even where that makes sense. ON CONFLICT DO SELECT is
> probably going to be used within wCTEs some of the time. I'm not sure
> that a plain ON CONFLICT DO SELECT variant won't allow unpredictable,
> complicated problems when composed within a more complicated query.
>

Yeah, in most cases you'd probably do a  SELECT FOR KEY SHARE.  And I
wouldn't mind that being the default, but it would mean inventing special
syntax with no previous precedent (as far as I know) for not locking the
row in cases where the user doesn't want that.  And that doesn't seem too
attractive, either.

Maybe it would be better to make the default sensible for people who are
not super familiar with postgres.  I don't know.  And the more I think
about the use case above, the less I care about it.  But I'm generally
against interfaces which put arbitrary restrictions on what power users can
do on the basis that less experienced people might misuse the interface.


.m


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-09-04 Thread Pavel Stehule
2017-09-04 18:56 GMT+02:00 Tom Lane :

> I wrote:
> > ... Or maybe we should shorten this
> > variable name so it doesn't force reformatting of all this text.
> > Possible ideas include "DB_VERSION_NAME", "SERVER_VER_NAME", or
> > "SERVER_VERSION_STR".  (The last saves only one character, whereas
> > we really need to save two if we're trying not to be wider than any
> > other documented variable.)
>
> Just had another idea: maybe make the new variable names
>
> SERVER_VERSION_S
> SERVER_VERSION_N
> VERSION_S
> VERSION_N
>
> "_S" could usefully be read as either "string" or "short", and probably
> we should document it as meaning "short".  This way avoids creating any
> weird inconsistencies with the existing precedent of the VERSION variable.
>

-1

With respect, it doesn't look well and intuitive.

SERVER_VERSION_STR looks better than this.

I can live very well with SERVER_VERSION_STR and SERVER_VERSION_NUM

Regards

Pavel




>
> regards, tom lane
>


Re: [HACKERS] JIT compiling expressions/deform + inlining prototype v2.0

2017-09-04 Thread Konstantin Knizhnik



On 01.09.2017 09:41, Andres Freund wrote:

Hi,

I previously had an early prototype of JITing [1] expression evaluation
and tuple deforming.  I've since then worked a lot on this.

Here's an initial, not really pretty but functional, submission. This
supports all types of expressions, and tuples, and allows, albeit with
some drawbacks, inlining of builtin functions.  Between the version at
[1] and this I'd done some work in c++, because that allowed to
experiment more with llvm, but I've now translated everything back.
Some features I'd to re-implement due to limitations of C API.


I've whacked this around quite heavily today, this likely has some new
bugs, sorry for that :(


Can you please clarify the following fragment calculating attributes 
alignment:



/* compute what following columns are aligned to */
+if (att->attlen < 0)
+{
+/* can't guarantee any alignment after varlen field */
+attcuralign = -1;
+}
+else if (att->attnotnull && attcuralign >= 0)
+{
+Assert(att->attlen > 0);
+attcuralign += att->attlen;
+}
+else if (att->attnotnull)
+{
+/*
+ * After a NOT NULL fixed-width column, alignment is
+ * guaranteed to be the minimum of the forced alignment and
+ * length.  XXX
+ */
+attcuralign = alignto + att->attlen;
+Assert(attcuralign > 0);
+}
+else
+{
+//elog(LOG, "attnotnullreset: %d", attnum);
+attcuralign = -1;
+}


I wonder why in this branch (att->attnotnull && attcuralign >= 0)
we are not adding "alignto" and comment in the following branch else if 
(att->attnotnull)
seems to be not related to this branch, because in this case attcuralign 
is expected to be less then zero wjhich means that previous attribute is 
varlen field.



--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-09-04 Thread Pavel Stehule
Hi

2017-09-04 18:31 GMT+02:00 Tom Lane :

> So I thought we were done bikeshedding the variable names for this
> feature, but as I was reviewing the patch with intent to commit,
> I noticed you hadn't updated helpVariables() to mention them.
> Possibly you missed this because it doesn't mention VERSION either,
> but that doesn't seem very defensible.
>
> I inserted text to describe all five variables --- but
> "SERVER_VERSION_NAME" is too long to fit in the available column space.
> In the attached updated patch, I moved all the descriptive text over one
> column, and really should have moved it over two columns; but adding even
> one space makes a couple of the lines longer than 80 columns when they
> were not before.  Since we've blown past 80 columns on some of the other
> output, maybe that doesn't matter.  Or maybe we should shorten this
> variable name so it doesn't force reformatting of all this text.
>
> Possible ideas include "DB_VERSION_NAME", "SERVER_VER_NAME", or
> "SERVER_VERSION_STR".  (The last saves only one character, whereas
> we really need to save two if we're trying not to be wider than any
> other documented variable.)
>
> Thoughts?
>

I prefer SERVER_VERSION_NAME - although it touch 80 columns limit - it is
consistent with VERSION_NAME.

Or maybe break a column line and don't impact other rows.

Regards

Pavel


> Attached updated patch changes helpVariables() as we'd need to do if
> not renaming, and does some minor doc/comment wordsmithing elsewhere.
>
> regards, tom lane
>
>


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-09-04 Thread Tom Lane
I wrote:
> ... Or maybe we should shorten this
> variable name so it doesn't force reformatting of all this text.
> Possible ideas include "DB_VERSION_NAME", "SERVER_VER_NAME", or
> "SERVER_VERSION_STR".  (The last saves only one character, whereas
> we really need to save two if we're trying not to be wider than any
> other documented variable.)

Just had another idea: maybe make the new variable names

SERVER_VERSION_S
SERVER_VERSION_N
VERSION_S
VERSION_N

"_S" could usefully be read as either "string" or "short", and probably
we should document it as meaning "short".  This way avoids creating any
weird inconsistencies with the existing precedent of the VERSION variable.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-09-04 Thread Tom Lane
So I thought we were done bikeshedding the variable names for this
feature, but as I was reviewing the patch with intent to commit,
I noticed you hadn't updated helpVariables() to mention them.
Possibly you missed this because it doesn't mention VERSION either,
but that doesn't seem very defensible.

I inserted text to describe all five variables --- but
"SERVER_VERSION_NAME" is too long to fit in the available column space.
In the attached updated patch, I moved all the descriptive text over one
column, and really should have moved it over two columns; but adding even
one space makes a couple of the lines longer than 80 columns when they
were not before.  Since we've blown past 80 columns on some of the other
output, maybe that doesn't matter.  Or maybe we should shorten this
variable name so it doesn't force reformatting of all this text.

Possible ideas include "DB_VERSION_NAME", "SERVER_VER_NAME", or
"SERVER_VERSION_STR".  (The last saves only one character, whereas
we really need to save two if we're trying not to be wider than any
other documented variable.)

Thoughts?

Attached updated patch changes helpVariables() as we'd need to do if
not renaming, and does some minor doc/comment wordsmithing elsewhere.

regards, tom lane

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index c592eda..a693fb9 100644
*** a/doc/src/sgml/ref/psql-ref.sgml
--- b/doc/src/sgml/ref/psql-ref.sgml
*** bar
*** 3688,3693 
--- 3688,3717 

  

+ SERVER_VERSION_NAME
+ 
+ 
+ The server's version number as a string, for
+ example 9.6.2, 10.1, or 11beta1.
+ This is set every time you connect to a database (including
+ program start-up), but can be changed or unset.
+ 
+ 
+   
+ 
+   
+ SERVER_VERSION_NUM
+ 
+ 
+ The server's version number in numeric form, for
+ example 90602 or 11.
+ This is set every time you connect to a database (including
+ program start-up), but can be changed or unset.
+ 
+ 
+   
+ 
+   
  SINGLELINE
  
  
*** bar
*** 3733,3742 
  

  VERSION
  
  
! This variable is set at program start-up to
! reflect psql's version.  It can be changed or unset.
  
  

--- 3757,3771 
  

  VERSION
+ VERSION_NAME
+ VERSION_NUM
  
  
! These variables are set at program start-up to reflect
! psql's version, respectively as a verbose string,
! a short string (e.g., 9.6.2, 10.1,
! or 11beta1), and a number (e.g., 90602
! or 11).  They can be changed or unset.
  
  

diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 96f3a40..4283bf3 100644
*** a/src/bin/psql/command.c
--- b/src/bin/psql/command.c
*** checkWin32Codepage(void)
*** 3337,3342 
--- 3337,3345 
  void
  SyncVariables(void)
  {
+ 	char		vbuf[32];
+ 	const char *server_version;
+ 
  	/* get stuff from connection */
  	pset.encoding = PQclientEncoding(pset.db);
  	pset.popt.topt.encoding = pset.encoding;
*** SyncVariables(void)
*** 3348,3353 
--- 3351,3370 
  	SetVariable(pset.vars, "PORT", PQport(pset.db));
  	SetVariable(pset.vars, "ENCODING", pg_encoding_to_char(pset.encoding));
  
+ 	/* this bit should match connection_warnings(): */
+ 	/* Try to get full text form of version, might include "devel" etc */
+ 	server_version = PQparameterStatus(pset.db, "server_version");
+ 	/* Otherwise fall back on pset.sversion */
+ 	if (!server_version)
+ 	{
+ 		formatPGVersionNumber(pset.sversion, true, vbuf, sizeof(vbuf));
+ 		server_version = vbuf;
+ 	}
+ 	SetVariable(pset.vars, "SERVER_VERSION_NAME", server_version);
+ 
+ 	snprintf(vbuf, sizeof(vbuf), "%d", pset.sversion);
+ 	SetVariable(pset.vars, "SERVER_VERSION_NUM", vbuf);
+ 
  	/* send stuff to it, too */
  	PQsetErrorVerbosity(pset.db, pset.verbosity);
  	PQsetErrorContextVisibility(pset.db, pset.show_context);
*** UnsyncVariables(void)
*** 3366,3371 
--- 3383,3390 
  	SetVariable(pset.vars, "HOST", NULL);
  	SetVariable(pset.vars, "PORT", NULL);
  	SetVariable(pset.vars, "ENCODING", NULL);
+ 	SetVariable(pset.vars, "SERVER_VERSION_NAME", NULL);
+ 	SetVariable(pset.vars, "SERVER_VERSION_NUM", NULL);
  }
  
  
diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
index b3dbb59..ee612b0 100644
*** a/src/bin/psql/help.c
--- b/src/bin/psql/help.c
*** helpVariables(unsigned short int pager)
*** 336,342 
  	 * Windows builds currently print one more line than non-Windows builds.
  	 * Using the larger number is fine.
  	 */
! 	output = PageOutput(88, pager ? &(pset.popt.topt) : NULL);
  
  	fprintf(output, _("List of specially treated variables\n\n"));
  

Re: [HACKERS] CSV Logging questions

2017-09-04 Thread David Fetter
On Mon, Sep 04, 2017 at 05:27:40PM +0100, Greg Stark wrote:
> I was just looking over the CSV logging code and have a few questions
> about why things were done the way they were done.
> 
> 1) Why do we gather a per-session log line number? Is it just to aid
> people importing to avoid duplicate entries from partial files? Is
> there some other purpose given that entries will already be sequential
> in the csv file?
> 
> 2) Why is the file error conditional on log_error_verbosity? Surely
> the whole point of a structured log is that you can log everything and
> choose what to display later -- i.e. why csv logging doesn't look at
> log_line_prefix to determine which other bits to display. There's no
> added cost to include this information unconditionally and they're far
> from the largest piece of data being logged either.
> 
> 3) Similarly I wonder if the statement should always be included even
> with hide_stmt is set so that users can write sensible queries against
> the data even if it means duplicating data.
> 
> 4) Why the session start time? Is this just so that  session_start_time> uniquely identiifes a session? Should we perhaps
> generate a unique session identifier instead?
> 
> The real reason I'm looking at this is because I'm looking at the
> json_log plugin from Michael Paquier. It doesn't have the log line
> numbers and I can't figure whether this is something it should have
> because I can't quite figure out why they exist in CSV files. I think
> there are a few other fields that have been added in Postgres but are
> missing from the JSON log because of version skew.
> 
> I'm wondering if we should abstract out the CSV format so instead of
> using emit_log_hook you would add a new format and it would specify a
> "add_log_attribute(key,val)" hook which would get called once per log
> format so you could have as many log formats as you want and be sure
> they would all have the same data. That would also mean that the
> timestamps would be in sync and we could probably eliminate the
> occurrences of the wrong format appearing in the wrong logs.

+1 for making the emitters all work off the same source.

Any idea how much work we're talking about to do these things?

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] CSV Logging questions

2017-09-04 Thread Greg Stark
I was just looking over the CSV logging code and have a few questions
about why things were done the way they were done.

1) Why do we gather a per-session log line number? Is it just to aid
people importing to avoid duplicate entries from partial files? Is
there some other purpose given that entries will already be sequential
in the csv file?

2) Why is the file error conditional on log_error_verbosity? Surely
the whole point of a structured log is that you can log everything and
choose what to display later -- i.e. why csv logging doesn't look at
log_line_prefix to determine which other bits to display. There's no
added cost to include this information unconditionally and they're far
from the largest piece of data being logged either.

3) Similarly I wonder if the statement should always be included even
with hide_stmt is set so that users can write sensible queries against
the data even if it means duplicating data.

4) Why the session start time? Is this just so that  uniquely identiifes a session? Should we perhaps
generate a unique session identifier instead?

The real reason I'm looking at this is because I'm looking at the
json_log plugin from Michael Paquier. It doesn't have the log line
numbers and I can't figure whether this is something it should have
because I can't quite figure out why they exist in CSV files. I think
there are a few other fields that have been added in Postgres but are
missing from the JSON log because of version skew.

I'm wondering if we should abstract out the CSV format so instead of
using emit_log_hook you would add a new format and it would specify a
"add_log_attribute(key,val)" hook which would get called once per log
format so you could have as many log formats as you want and be sure
they would all have the same data. That would also mean that the
timestamps would be in sync and we could probably eliminate the
occurrences of the wrong format appearing in the wrong logs.


-- 
greg


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] index-only count(*) for indexes supporting bitmap scans

2017-09-04 Thread Alexander Kuzmenkov

On 04.09.2017 15:17, Alexey Chernyshov wrote:

make check-world fails on contrib/postgres_fdw because of Subquery Scan on ... 
Probably, query plan was changed.


Hi Alexey,

Thanks for testing! This is the same problem as the one in 
'select_distinct' I mentioned before. I changed the test, the updated 
patch is attached.


--
Alexander Kuzmenkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index c19b3318c7..5f5f65d60c 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2203,22 +2203,23 @@ SELECT t1c1, avg(t1c1 + t2c1) FROM (SELECT t1.c1, t2.c1 FROM ft1 t1 JOIN ft2 t2
 -- join with lateral reference
 EXPLAIN (VERBOSE, COSTS OFF)
 SELECT t1."C 1" FROM "S 1"."T 1" t1, LATERAL (SELECT DISTINCT t2.c1, t3.c1 FROM ft1 t2, ft2 t3 WHERE t2.c1 = t3.c1 AND t2.c2 = t1.c2) q ORDER BY t1."C 1" OFFSET 10 LIMIT 10;
- QUERY PLAN 
-
+QUERY PLAN
+--
  Limit
Output: t1."C 1"
->  Nested Loop
  Output: t1."C 1"
  ->  Index Scan using t1_pkey on "S 1"."T 1" t1
Output: t1."C 1", t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8
- ->  HashAggregate
-   Output: t2.c1, t3.c1
-   Group Key: t2.c1, t3.c1
-   ->  Foreign Scan
+ ->  Subquery Scan on q
+   ->  HashAggregate
  Output: t2.c1, t3.c1
- Relations: (public.ft1 t2) INNER JOIN (public.ft2 t3)
- Remote SQL: SELECT r1."C 1", r2."C 1" FROM ("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (((r1."C 1" = r2."C 1")) AND ((r1.c2 = $1::integer
-(13 rows)
+ Group Key: t2.c1, t3.c1
+ ->  Foreign Scan
+   Output: t2.c1, t3.c1
+   Relations: (public.ft1 t2) INNER JOIN (public.ft2 t3)
+   Remote SQL: SELECT r1."C 1", r2."C 1" FROM ("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (((r1."C 1" = r2."C 1")) AND ((r1.c2 = $1::integer
+(14 rows)
 
 SELECT t1."C 1" FROM "S 1"."T 1" t1, LATERAL (SELECT DISTINCT t2.c1, t3.c1 FROM ft1 t2, ft2 t3 WHERE t2.c1 = t3.c1 AND t2.c2 = t1.c2) q ORDER BY t1."C 1" OFFSET 10 LIMIT 10;
  C 1 
@@ -2672,16 +2673,17 @@ select c2, sum(c1) from ft2 group by c2 having avg(c1) < 500 and sum(c1) < 49800
 -- Unshippable HAVING clause will be evaluated locally, and other qual in HAVING clause is pushed down
 explain (verbose, costs off)
 select count(*) from (select c5, count(c1) from ft1 group by c5, sqrt(c2) having (avg(c1) / avg(c1)) * random() <= 1 and avg(c1) < 500) x;
-   QUERY PLAN
--
+  QUERY PLAN   
+---
  Aggregate
Output: count(*)
-   ->  Foreign Scan
- Output: ft1.c5, NULL::bigint, (sqrt((ft1.c2)::double precision))
- Filter: (avg(ft1.c1)) / (avg(ft1.c1::double precision * random()) <= '1'::double precision)
- Relations: Aggregate on (public.ft1)
- Remote SQL: SELECT c5, NULL::bigint, sqrt(c2), avg("C 1") FROM "S 1"."T 1" GROUP BY c5, (sqrt(c2)) HAVING ((avg("C 1") < 500::numeric))
-(7 rows)
+   ->  Subquery Scan on x
+ ->  Foreign Scan
+   Output: ft1.c5, NULL::bigint, (sqrt((ft1.c2)::double precision))
+   Filter: (avg(ft1.c1)) / (avg(ft1.c1::double precision * random()) <= '1'::double precision)
+   Relations: Aggregate on (public.ft1)
+   Remote SQL: SELECT c5, NULL::bigint, sqrt(c2), avg("C 1") FROM "S 1"."T 1" GROUP BY c5, (sqrt(c2)) HAVING ((avg("C 1") < 500::numeric))
+(8 rows)
 
 select count(*) from (select c5, count(c1) from ft1 group by c5, sqrt(c2) having (avg(c1) / avg(c1)) * random() 

Re: [HACKERS] ALTER INDEX .. SET STATISTICS ... behaviour

2017-09-04 Thread Alexander Korotkov
Hi, Adrien!

On Mon, Sep 4, 2017 at 3:57 PM, Adrien Nayrat 
wrote:

> On 08/30/2017 02:26 PM, Alexander Korotkov wrote:
> I reviewed this patch. It works as expected but I have few remarks :
>

Thank you for reviewing my patch!.


>   * There is a warning during compilation :
>
> gram.y: In function ‘base_yyparse’:
> gram.y:2090:6: warning: ISO C90 forbids mixed declarations and code
> [-Wdeclaration-after-statement]
>   AlterTableCmd *n = makeNode(AlterTableCmd);
>   ^
>

Fixed.



> If I understand we should declare AlterTableCmd *n [...] before "if"?
>
>   * I propose to add "Stats target" information in psql verbose output
> command
> \d+ (patch attached with test)
>
> \d+ tmp_idx
>  Index "public.tmp_idx"
>  Column |   Type   | Definition | Storage | Stats target
> +--++-+--
>  a  | integer  | a  | plain   |
>  expr   | double precision | (d + e)| plain   | 1000
>  b  | cstring  | b  | plain   |
> btree, for table "public.tmp"
>
>
>   * psql completion is missing (added to previous patch)
>

Looks good for me.  I've integrated those changes in the patch.
New revision is attached.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


alter-index-statistics-3.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: long transactions on hot standby feedback replica / proof of concept

2017-09-04 Thread Alexander Korotkov
On Mon, Sep 4, 2017 at 4:51 PM, Alex Ignatov 
wrote:

> In this situation this parameter (max_standby_streaming_delay) wont help
> because if you have subsequent statement on standby (following info is from
> documentation and from our experience ): Thus, if one query has resulted in
> significant delay, subsequent conflicting queries will have much less grace
> time until the standby server has caught up again. And you never now how to
> set this parameter exept to -1 which mean up to infinity delayed standby.
>
> On our experience only autovacuum on master took AccesExclusiveLock that
> raise this Fatal message on standby. After this AccessExclusive reached
> standby and max_standby_streaming_delay > -1 you definitely sooner or
> later  get this Fatal on recovery .
> With this patch we try to get rid of AccessEclusiveLock applied on standby
> while we have active statement on it.
>

+1,
Aborting read-only query on standby because of vacuum on master seems to be
rather unexpected behaviour for user when hot standby feedback is on.
I think we should work on this problem for v11.  Backpatching documentation
for previous releases would be good too.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] GnuTLS support

2017-09-04 Thread David Fetter
On Fri, Sep 01, 2017 at 10:33:37PM +0200, Alvaro Herrera wrote:
> Tom Lane wrote:
> > Robert Haas  writes:
> > > On Thu, Aug 31, 2017 at 1:52 PM, Andreas Karlsson  
> > > wrote:
> 
> > >> There are currently two failing SSL tests which at least to me seems more
> > >> like they test specific OpenSSL behaviors rather than something which 
> > >> need
> > >> to be true for all SSL libraries.
> > 
> > > I don't know what we should do about these issues.
> > 
> > Maybe the SSL test suite needs to be implementation-specific as well.
> 
> If only two tests fail currently, I suggest that the thing to do is to
> split it up in generic vs. library-specific test files.  It should be
> easy to do with the TAP framework -- just move the library-specific
> tests to their own file and mark it as skipped at the start of the file
> when a different library is detected.

This seems like a much smarter and more reliable way to test.

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Release Note changes

2017-09-04 Thread Tom Lane
Simon Riggs  writes:
> Which is why next year when upgrading from PG10 -> PG11 we will
> mention it and that point not mention the other older solutions, which
> were once our best.

This is boilerplate text that we tend to copy-and-paste without thinking
about it; if it's designed in a way that requires it to change more than
about once per decade, that's going to be a problem.  (The existing text
has been there verbatim since 9.0, looks like.)

I'm okay with a passing reference to some list of replication tools
elsewhere in the docs, but not with much more than that.

It's also worth pointing out that the existing wording is meant to
explain how to achieve upgrade-in-place.  Logical replication to a
new server seems like a fundamentally different thing.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GnuTLS support

2017-09-04 Thread Bruce Momjian
On Fri, Sep  1, 2017 at 12:09:35PM -0400, Robert Haas wrote:
> I think that what this shows is that the current set of GUCs is overly
> OpenSSL-centric.  We created a set of GUCs that are actually specific
> to one particular implementation but named them as if they were
> generic.  My idea about this would be to actually rename the existing
> GUCs to start with "openssl" rather than "ssl", and then add new GUCs
> as needed for other SSL implementations.
> 
> Depending on what we think is best, GUCs for an SSL implementation
> other than the one against which we compiled can either not exist at
> all, or can exist but be limited to a single value (e.g. "none", as we
> currently do when the compile has no SSL support at all).  Also, we
> could add a read-only GUC with a name like ssl_library that exposes
> the name of the underlying SSL implementation - none, openssl, gnutls,
> or whatever.
> 
> I think if we go the route of insisting that every SSL implementation
> has to use the existing GUCs, we're just trying to shove a square peg
> into a round hole, and there's no real benefit for users.  If the
> string that has to be stuffed into ssl_ciphers differs based on which
> library was chosen at compile time, then you can't have a uniform
> default configuration for all libraries anyway.  I think it'll be
> easier to explain and document this if there's separate documentation
> for openssl_ciphers, gnutls_ciphers, etc. rather than one giant
> documentation section that tries to explain every implementation
> separately.

I am worried about having 3x version of TLS controls in postgresql.conf,
and only one set being active.  Perhaps we need to break out the TLS
config to separate files or something.  Anyway, this needs more thought.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [bug fix] Savepoint-related statements terminates connection

2017-09-04 Thread Tom Lane
Michael Paquier  writes:
> Hmm. While this patch looks to me in a better shape than what Simon's
> is proposing, thinking about
> cah2-v61vxnentfj2v-zd+ma-g6kqmjgd5svxou3jbvdzqh0...@mail.gmail.com
> which involved a migration Oracle->Postgres, I have been wondering if
> it is possible to still allow savepoints in those cases to ease the
> pain and surprise of some users.

I don't want to go there, and was thinking we should expand the new
comment in DefineSavepoint to explain why not.  It's certainly not that
much additional work to allow a savepoint so far as xact.c is concerned,
as your patch shows.  The problem is that intra-string savepoints seem
inconsistent with exec_simple_query's behavior of abandoning the whole
query string upon error.  If you do

insert ...\; savepoint\; insert ...\; release savepoint\; insert ...;

wouldn't you sort of expect that the savepoint commands mean to keep going
if the second insert fails?  If they don't mean that, what do they mean?

Also, the main thing that we need xact.c's involvement for in the first
place is the fact that implicit transaction blocks, unlike regular ones,
auto-cancel on an error, leaving you outside a block not inside a failed
one.  So I don't exactly see how savepoints would fit into that.

Now I do not think we can change exec_simple_query's behavior without big
compatibility problems --- to the extent that there's a justifiable
use-case for multi-query strings at all, a big part of it is the implied
"do B only if A succeeds" semantics.  But if that's what happens, then
having savepoint commands in the string is just a can of worms from both
definitional and practical points of view.  If an error happens, did it
happen before or after the savepoint, and what state is the session left
in?  You can't easily tell because of the lack of reporting about
savepoint state.  Right now, the only real issue after a failure is "are
we in a transaction block or not", which the server does return enough
info to distinguish.

Now admittedly, the same set of issues pops up if one uses an
explicit transaction block in a multi-query string:

begin\; insert ...\; savepoint\; insert ...\; release savepoint\; insert ...\; 
commit;

If one of the inserts fails, you don't really know which one unless you
were counting command-complete replies (which PQexec doesn't let you do).
But that behavior was there already, we aren't proposing to make it worse.
(I think this approach is also the correct workaround to give those
Oracle-conversion folk: their real problem is failure to convert from
Oracle's implicit-BEGIN behavior to our explicit-BEGIN.)

In short, -1 for relaxing the prohibition on SAVEPOINT.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: long transactions on hot standby feedback replica / proof of concept

2017-09-04 Thread Alex Ignatov


-Original Message-
From: pgsql-hackers-ow...@postgresql.org 
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Amit Kapila
Sent: Monday, September 4, 2017 3:32 PM
To: i.kartys...@postgrespro.ru
Cc: pgsql-hackers 
Subject: Re: [HACKERS] WIP: long transactions on hot standby feedback replica / 
proof of concept

On Mon, Sep 4, 2017 at 4:34 PM,   wrote:
> Our clients complain about this issue and therefore I want to raise 
> the discussion and suggest several solutions to this problem:
>
> I. Why does PG use Fatal when Error is enough to release lock that 
> rises lock conflict?
> "If (RecoveryConflictPending && DoingCommandRead)"
>
> II. Do we really need to truncate the table on hot standby exactly at 
> the same time when truncate on master occurs?
>
> In my case conflict happens when the autovacuum truncates table tbl1 
> on master while backend on replica is performing a long transaction 
> involving the same table tbl1. This happens because truncate takes an 
> AccessExclusiveLock. To tackle this issue we have several options:
>
> 1. We can postpone the truncate on the master until all the replicas 
> have finished their transactions (in this case, feedback requests to 
> the master should be sent frequently) Patch 1 
> vacuum_lazy_truncate.patch
>
> 2. Maybe there is an option somehow not to send AccessExclusiveLock 
> and not to truncate table on the replica right away. We could try to 
> wait a little and truncate tbl1 on replica again.
>

Can max_standby_streaming_delay help in this situation (point number - 2)?


--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Hello!
In this situation this parameter (max_standby_streaming_delay) wont help 
because if you have subsequent statement on standby (following info is from 
documentation and from our experience ): Thus, if one query has resulted in 
significant delay, subsequent conflicting queries will have much less grace 
time until the standby server has caught up again. And you never now how to set 
this parameter exept to -1 which mean up to infinity delayed standby. 

On our experience only autovacuum on master took AccesExclusiveLock that raise 
this Fatal message on standby. After this AccessExclusive reached standby and 
max_standby_streaming_delay > -1 you definitely sooner or later  get this Fatal 
on recovery . 
With this patch we try to get rid of AccessEclusiveLock applied on standby 
while we have active statement on it.



--
Alex Ignatov 
Postgres Professional: http://www.postgrespro.com 
The Russian Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] obsolete code in pg_upgrade

2017-09-04 Thread Bruce Momjian
On Tue, Aug 22, 2017 at 08:28:15PM -0400, Peter Eisentraut wrote:
> It seems to me that this code in pg_upgrade/check.c has been useless
> since at least version 9.1:
> 
> /* Is it 9.0 but without tablespace directories? */
> if (GET_MAJOR_VERSION(new_cluster.major_version) == 900 &&
> new_cluster.controldata.cat_ver < TABLE_SPACE_SUBDIRS_CAT_VER)
> pg_fatal("This utility can only upgrade to PostgreSQL version
> 9.0 after 2010-01-11\n"
>  "because of backend API changes made during
> development.\n");

Coming in late, but agreed.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] make async slave to wait for lsn to be replayed

2017-09-04 Thread i . kartyshov

I forget to include patch in last letter.

Craig Ringer wrote 2017-08-15 05:00:

That ship sailed a long time ago unfortunately, they're all over
pg_stat_replication and pg_replication_slots and so on. They're
already routinely used for monitoring replication lag in bytes,
waiting for a peer to catch up, etc.

--

 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Function pg_replication_slots is only master, and waitlsn is async hot 
standby replication function. It allows us to wait untill insert made on 
master is be replayed on replica.


--
Ivan Kartyshov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Companydiff --git a/doc/src/sgml/ref/allfiles.sgml b/doc/src/sgml/ref/allfiles.sgml
index 01acc2e..6792eb0 100644
--- a/doc/src/sgml/ref/allfiles.sgml
+++ b/doc/src/sgml/ref/allfiles.sgml
@@ -181,6 +181,7 @@ Complete list of usable sgml source files in this directory.
 
 
 
+
 
 
 
diff --git a/doc/src/sgml/ref/waitlsn.sgml b/doc/src/sgml/ref/waitlsn.sgml
new file mode 100644
index 000..077e869
--- /dev/null
+++ b/doc/src/sgml/ref/waitlsn.sgml
@@ -0,0 +1,119 @@
+
+
+
+ 
+  WAITLSN
+ 
+
+ 
+  WAITLSN
+  7
+  SQL - Language Statements
+ 
+
+ 
+  WAITLSN
+  wait until target LSN has been replayed
+ 
+
+ 
+
+WAITLSN 'LSN' [ , delay ]
+WAITLSN_INFINITE 'LSN'
+WAITLSN_NO_WAIT 'LSN'
+
+ 
+
+ 
+  Description
+
+  
+   The WAITLSN wait till target LSN will
+   be replayed with an optional delay (milliseconds by default
+   infinity) to be wait for LSN to replayed.
+  
+
+  
+   WAITLSN provides a simple
+   interprocess LSN wait mechanism for a backends on slave
+   in master-slave replication scheme on PostgreSQL database.
+  
+
+  
+   The WAITLSN_INFINITE wait till target LSN will
+   be replayed on slave .
+  
+
+  
+   The WAITLSN_NO_WAIT will tell if target LSN was replayed without any wait.
+  
+ 
+
+ 
+  Parameters
+
+  
+   
+LSN
+
+ 
+  Target log sequence number to be wait for.
+ 
+
+   
+   
+delay
+
+ 
+  Time in miliseconds to waiting for LSN to be replayed.
+ 
+
+   
+  
+ 
+
+ 
+  Notes
+
+  
+   Delay time for waiting till LSN to be replayed must be integer. For
+   default it is infinity. Waiting can be interupped using Ctl+C, or
+   by Postmaster death.
+  
+ 
+
+ 
+  Examples
+
+  
+   Configure and execute a waitlsn from
+   psql:
+
+
+WAITLSN '0/3F07A6B1', 1;
+NOTICE:  LSN is not reached. Try to make bigger delay.
+WAITLSN
+
+WAITLSN '0/3F07A611';
+WAITLSN
+
+WAITLSN '0/3F0FF791', 50;
+^CCancel request sent
+NOTICE:  LSN is not reached. Try to make bigger delay.
+ERROR:  canceling statement due to user request
+
+
+ 
+
+ 
+  Compatibility
+
+  
+   There is no WAITLSN statement in the SQL
+   standard.
+  
+ 
+
diff --git a/doc/src/sgml/reference.sgml b/doc/src/sgml/reference.sgml
index 9000b3a..0c5951a 100644
--- a/doc/src/sgml/reference.sgml
+++ b/doc/src/sgml/reference.sgml
@@ -209,6 +209,7 @@



+   
 
  
 
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index df4843f..9dfb59d 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -40,6 +40,7 @@
 #include "catalog/pg_control.h"
 #include "catalog/pg_database.h"
 #include "commands/tablespace.h"
+#include "commands/waitlsn.h"
 #include "miscadmin.h"
 #include "pgstat.h"
 #include "port/atomics.h"
@@ -147,7 +148,6 @@ const struct config_enum_entry sync_method_options[] = {
 	{NULL, 0, false}
 };
 
-
 /*
  * Although only "on", "off", and "always" are documented,
  * we accept all the likely variants of "on" and "off".
@@ -7237,6 +7237,15 @@ StartupXLOG(void)
 	break;
 }
 
+/*
+ * After update lastReplayedEndRecPtr set Latches in SHMEM array
+ */
+if (XLogCtl->lastReplayedEndRecPtr >= GetMinWaitLSN())
+{
+
+	WaitLSNSetLatch(XLogCtl->lastReplayedEndRecPtr);
+}
+
 /* Else, try to fetch the next WAL record */
 record = ReadRecord(xlogreader, InvalidXLogRecPtr, LOG, false);
 			} while (record != NULL);
diff --git a/src/backend/commands/Makefile b/src/backend/commands/Makefile
index 4a6c99e..0d10117 100644
--- a/src/backend/commands/Makefile
+++ b/src/backend/commands/Makefile
@@ -20,6 +20,6 @@ OBJS = amcmds.o aggregatecmds.o alter.o analyze.o async.o cluster.o comment.o \
 	policy.o portalcmds.o prepare.o proclang.o publicationcmds.o \
 	schemacmds.o seclabel.o sequence.o statscmds.o subscriptioncmds.o \
 	tablecmds.o tablespace.o trigger.o tsearchcmds.o typecmds.o user.o \
-	vacuum.o vacuumlazy.o variable.o view.o
+	vacuum.o vacuumlazy.o variable.o view.o waitlsn.o
 
 include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c
index bacc08e..7d6011f 100644
--- a/src/backend/commands/async.c
+++ b/src/backend/commands/async.c
@@ -139,7 +139,6 @@
 #include "utils/ps_status.h"
 #include "utils/timestamp.h"
 
-
 /*
  

Re: [HACKERS] ALTER INDEX .. SET STATISTICS ... behaviour

2017-09-04 Thread Adrien Nayrat
On 08/30/2017 02:26 PM, Alexander Korotkov wrote:
> Patch rebased to current master is attached.

Hello,

I reviewed this patch. It works as expected but I have few remarks :

  * There is a warning during compilation :

gram.y: In function ‘base_yyparse’:
gram.y:2090:6: warning: ISO C90 forbids mixed declarations and code
[-Wdeclaration-after-statement]
  AlterTableCmd *n = makeNode(AlterTableCmd);
  ^

If I understand we should declare AlterTableCmd *n [...] before "if"?

  * I propose to add "Stats target" information in psql verbose output command
\d+ (patch attached with test)

\d+ tmp_idx
 Index "public.tmp_idx"
 Column |   Type   | Definition | Storage | Stats target
+--++-+--
 a  | integer  | a  | plain   |
 expr   | double precision | (d + e)| plain   | 1000
 b  | cstring  | b  | plain   |
btree, for table "public.tmp"


  * psql completion is missing (added to previous patch)


Regards,

-- 
Adrien NAYRAT

http://dalibo.com - http://dalibo.org
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index f6049cc9e5..6fb9bdd063 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -1742,6 +1742,7 @@ describeOneTableDetails(const char *schemaname,
 	{
 		headers[cols++] = gettext_noop("Storage");
 		if (tableinfo.relkind == RELKIND_RELATION ||
+			tableinfo.relkind == RELKIND_INDEX ||
 			tableinfo.relkind == RELKIND_MATVIEW ||
 			tableinfo.relkind == RELKIND_FOREIGN_TABLE ||
 			tableinfo.relkind == RELKIND_PARTITIONED_TABLE)
@@ -1841,6 +1842,7 @@ describeOneTableDetails(const char *schemaname,
 
 			/* Statistics target, if the relkind supports this feature */
 			if (tableinfo.relkind == RELKIND_RELATION ||
+tableinfo.relkind == RELKIND_INDEX ||
 tableinfo.relkind == RELKIND_MATVIEW ||
 tableinfo.relkind == RELKIND_FOREIGN_TABLE ||
 tableinfo.relkind == RELKIND_PARTITIONED_TABLE)
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 1583cfa998..baa739a8c0 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1644,7 +1644,10 @@ psql_completion(const char *text, int start, int end)
    "UNION SELECT 'ALL IN TABLESPACE'");
 	/* ALTER INDEX  */
 	else if (Matches3("ALTER", "INDEX", MatchAny))
-		COMPLETE_WITH_LIST4("OWNER TO", "RENAME TO", "SET", "RESET");
+		COMPLETE_WITH_LIST5("ALTER COLUMN", "OWNER TO", "RENAME TO", "SET", "RESET");
+	/* ALTER INDEX  ALTER COLUMN  */
+	else if (Matches6("ALTER", "INDEX", MatchAny, "ALTER", "COLUMN", MatchAny))
+		COMPLETE_WITH_CONST("SET STATISTICS");
 	/* ALTER INDEX  SET */
 	else if (Matches4("ALTER", "INDEX", MatchAny, "SET"))
 		COMPLETE_WITH_LIST2("(", "TABLESPACE");
diff --git a/src/test/regress/expected/alter_table.out b/src/test/regress/expected/alter_table.out
index 64160a8b4d..0f36423163 100644
--- a/src/test/regress/expected/alter_table.out
+++ b/src/test/regress/expected/alter_table.out
@@ -103,6 +103,15 @@ ALTER INDEX tmp_idx ALTER COLUMN 1 SET STATISTICS 1000;
 ERROR:  cannot alter statistics on non-expression column "a" of index "tmp_idx"
 HINT:  Alter statistics on table column instead.
 ALTER INDEX tmp_idx ALTER COLUMN 2 SET STATISTICS 1000;
+\d+ tmp_idx
+ Index "public.tmp_idx"
+ Column |   Type   | Definition | Storage | Stats target 
++--++-+--
+ a  | integer  | a  | plain   | 
+ expr   | double precision | (d + e)| plain   | 1000
+ b  | cstring  | b  | plain   | 
+btree, for table "public.tmp"
+
 ALTER INDEX tmp_idx ALTER COLUMN 3 SET STATISTICS 1000;
 ERROR:  cannot alter statistics on non-expression column "b" of index "tmp_idx"
 HINT:  Alter statistics on table column instead.
diff --git a/src/test/regress/expected/create_index.out b/src/test/regress/expected/create_index.out
index 064adb4640..8450f2463e 100644
--- a/src/test/regress/expected/create_index.out
+++ b/src/test/regress/expected/create_index.out
@@ -2324,10 +2324,10 @@ DROP TABLE array_gin_test;
 CREATE INDEX gin_relopts_test ON array_index_op_test USING gin (i)
   WITH (FASTUPDATE=on, GIN_PENDING_LIST_LIMIT=128);
 \d+ gin_relopts_test
- Index "public.gin_relopts_test"
- Column |  Type   | Definition | Storage 
-+-++-
- i  | integer | i  | plain
+Index "public.gin_relopts_test"
+ Column |  Type   | Definition | Storage | Stats target 
++-++-+--
+ i  | integer | i  | plain   | 
 gin, for table "public.array_index_op_test"
 Options: fastupdate=on, gin_pending_list_limit=128
 
diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql
index 57f44c445d..e6f6669880 100644
--- a/src/test/regress/sql/alter_table.sql
+++ b/src/test/regress/sql/alter_table.sql
@@ 

Re: [HACKERS] dropping partitioned tables without CASCADE

2017-09-04 Thread Ashutosh Bapat
On Mon, Sep 4, 2017 at 3:48 PM, Alvaro Herrera  wrote:
>
> if (tuples > 0)
> {
> -   if (tableinfo.relkind != 
> RELKIND_PARTITIONED_TABLE)
> -   printfPQExpBuffer(, _("Number of 
> child tables: %d (Use \\d+ to list them.)"), tuples);
> -   else
> -   printfPQExpBuffer(, _("Number of 
> partitions: %d (Use \\d+ to list them.)"), tuples);
> +   printfPQExpBuffer(, _("Number of %s: %d 
> (Use \\d+ to list them.)"), ct, tuples);
> printTableAddFooter(, buf.data);
> }
>
> Please don't do this, because it breaks translatability.  I think the
> original is fine.
>

We have used this style in the "else" case of if (!verbose). So, I
just copied it. I have removed that change in the attached patch.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
From c0a153f0535c4d1dca637996d4cd5e6f62c11afe Mon Sep 17 00:00:00 2001
From: Ashutosh Bapat 
Date: Mon, 4 Sep 2017 09:56:41 +0530
Subject: [PATCH] Improve \d+ output of a partitioned table

While displaying partitions in \d+ output of a partitioned table
annotate the partitioned partitions as "has partitions".

For a partitioned table show the number of partitions even if it's 0.

Ashutosh Bapat and Amit Langote.
---
 src/bin/psql/describe.c|   32 ++--
 src/test/regress/expected/create_table.out |   13 ++-
 src/test/regress/expected/foreign_data.out |3 +++
 src/test/regress/expected/insert.out   |   15 +
 src/test/regress/sql/create_table.sql  |2 +-
 src/test/regress/sql/insert.sql|4 
 6 files changed, 57 insertions(+), 12 deletions(-)

diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index f6049cc..5adf5a4 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -2828,7 +2828,7 @@ describeOneTableDetails(const char *schemaname,
 		/* print child tables (with additional info if partitions) */
 		if (pset.sversion >= 10)
 			printfPQExpBuffer(,
-			  "SELECT c.oid::pg_catalog.regclass, pg_catalog.pg_get_expr(c.relpartbound, c.oid)"
+			  "SELECT c.oid::pg_catalog.regclass, pg_get_expr(c.relpartbound, c.oid), c.relkind"
 			  " FROM pg_catalog.pg_class c, pg_catalog.pg_inherits i"
 			  " WHERE c.oid=i.inhrelid AND i.inhparent = '%s'"
 			  " ORDER BY c.oid::pg_catalog.regclass::pg_catalog.text;", oid);
@@ -2851,7 +2851,18 @@ describeOneTableDetails(const char *schemaname,
 		else
 			tuples = PQntuples(result);
 
-		if (!verbose)
+		/*
+		 * For a partitioned table with no partitions, always print the number
+		 * of partitions as zero, even when verbose output is expected.
+		 * Otherwise, we will not print "Partitions" section for a partitioned
+		 * table without any partitions.
+		 */
+		if (tableinfo.relkind == RELKIND_PARTITIONED_TABLE && tuples == 0)
+		{
+			printfPQExpBuffer(, _("Number of partitions: %d"), tuples);
+			printTableAddFooter(, buf.data);
+		}
+		else if (!verbose)
 		{
 			/* print the number of child tables, if any */
 			if (tuples > 0)
@@ -2883,12 +2894,21 @@ describeOneTableDetails(const char *schemaname,
 }
 else
 {
+	char *partitioned_note;
+
+	if (*PQgetvalue(result, i, 2) == RELKIND_PARTITIONED_TABLE)
+		partitioned_note = " has partitions";
+	else
+		partitioned_note = "";
+
 	if (i == 0)
-		printfPQExpBuffer(, "%s: %s %s",
-		  ct, PQgetvalue(result, i, 0), PQgetvalue(result, i, 1));
+		printfPQExpBuffer(, "%s: %s %s%s",
+		  ct, PQgetvalue(result, i, 0), PQgetvalue(result, i, 1),
+		  partitioned_note);
 	else
-		printfPQExpBuffer(, "%*s  %s %s",
-		  ctw, "", PQgetvalue(result, i, 0), PQgetvalue(result, i, 1));
+		printfPQExpBuffer(, "%*s  %s %s%s",
+		  ctw, "", PQgetvalue(result, i, 0), PQgetvalue(result, i, 1),
+		  partitioned_note);
 }
 if (i < tuples - 1)
 	appendPQExpBufferChar(, ',');
diff --git a/src/test/regress/expected/create_table.out b/src/test/regress/expected/create_table.out
index babda89..a35d19e 100644
--- a/src/test/regress/expected/create_table.out
+++ b/src/test/regress/expected/create_table.out
@@ -428,13 +428,15 @@ ERROR:  cannot inherit from partitioned table "partitioned2"
  c  | text|   |  | 
  d  | text|   |  | 
 Partition key: RANGE (a oid_ops, plusone(b), c, d COLLATE "C")
+Number of partitions: 0
 
-\d partitioned2
-Table "public.partitioned2"
- Column |  Type   | Collation | Nullable | Default 
-+-+---+--+-
- a  | integer |   |  | 
+\d+ partitioned2
+  

[HACKERS] 【ECPG】strncpy function does not set the end character '\0'

2017-09-04 Thread postgresql_2...@163.com
Hi

When we reviewed the ecpg code,we found the array seem not have the end
character('\0')  after using the strncpy function. 

In the function ECPGnoticeReceiver, we use the stncpy function copy the
sqlstate to sqlca->sqlstate. And the  sqlca->sqlstate is defined as the size
of 5, and the copy size is sizeof(sqlca->sqlstate). However, from the
previous strcmp function, the sqlstate size may be 5,such as
ECPG_SQLSTATE_INVALID_CURSOR_NAME. So there may be lack of the end character
for sqlca->sqlstate.

--

the copy code 

/* map to SQLCODE for backward compatibility */
if (strcmp(sqlstate, ECPG_SQLSTATE_INVALID_CURSOR_NAME) == 0)
sqlcode = ECPG_WARNING_UNKNOWN_PORTAL;
else if (strcmp(sqlstate, ECPG_SQLSTATE_ACTIVE_SQL_TRANSACTION) ==
0)
sqlcode = ECPG_WARNING_IN_TRANSACTION;
else if (strcmp(sqlstate, ECPG_SQLSTATE_NO_ACTIVE_SQL_TRANSACTION)
== 0)
sqlcode = ECPG_WARNING_NO_TRANSACTION;
else if (strcmp(sqlstate, ECPG_SQLSTATE_DUPLICATE_CURSOR) == 0)
sqlcode = ECPG_WARNING_PORTAL_EXISTS;
else
sqlcode = 0;

   * strncpy(sqlca->sqlstate, sqlstate, sizeof(sqlca->sqlstate));*
sqlca->sqlcode = sqlcode;
sqlca->sqlwarn[2] = 'W';
sqlca->sqlwarn[0] = 'W';

the defined code 

struct sqlca_t
{
charsqlcaid[8];
longsqlabc;
longsqlcode;
struct
{
int sqlerrml;
charsqlerrmc[SQLERRMC_LEN];
}   sqlerrm;
charsqlerrp[8];
longsqlerrd[6];
/* Element 0: empty */
/* 1: OID of processed tuple if applicable  */
/* 2: number of rows processed  */
/* after an INSERT, UPDATE or   */
/* DELETE statement */
/* 3: empty */
/* 4: empty */
/* 5: empty */
charsqlwarn[8];
/* Element 0: set to 'W' if at least one other is 'W'   */
/* 1: if 'W' at least one character string  */
/* value was truncated when it was  */
/* stored into a host variable. */

/*
 * 2: if 'W' a (hopefully) non-fatal notice occurred
 */ /* 3: empty */
/* 4: empty */
/* 5: empty */
/* 6: empty */
/* 7: empty */

   * charsqlstate[5];*
};





--
Sent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Release Note changes

2017-09-04 Thread Simon Riggs
On 4 September 2017 at 12:11, Robert Haas  wrote:
>
> It's not really true that the only alternatives to pglogical are
> proprietary.  Really, one could use any logical replication solution,
> and there have been multiple open source alternatives for a decade.

True, but it is by far the best solution out of the many choices.

Which is why next year when upgrading from PG10 -> PG11 we will
mention it and that point not mention the other older solutions, which
were once our best.

>> Our docs mention pglogical already, so don't see an issue with
>> mentioning it here.
>
> The existing reference is alongside a bunch of other third-party
> tools.  Mentioning it at the very top of our release notes would give
> it a pride of place with which I'm not too comfortable.

-- 
Simon Riggshttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables

2017-09-04 Thread Ashutosh Bapat
On Mon, Sep 4, 2017 at 10:04 AM, Amit Langote
 wrote:
> On 2017/09/04 12:38, Etsuro Fujita wrote:
>> On 2017/09/02 4:10, Ashutosh Bapat wrote:
>>> This rebase mainly changes patch 0001, which translates partition
>>> hierarchy into inheritance hierarchy creating AppendRelInfos and
>>> RelOptInfos for partitioned partitions. Because of that, it's not
>>> necessary to record the partitioned partitions in a
>>> PartitionedChildRelInfos::child_rels. The only RTI that goes in there
>>> is the RTI of child RTE which is same as the parent RTE except inh
>>> flag. I tried removing that with a series of changes but it seems that
>>> following code in ExecInitModifyTable() requires it.
>>> 1897 /* The root table RT index is at the head of the
>>> partitioned_rels list */
>>> 1898 if (node->partitioned_rels)
>>> 1899 {
>>> 1900 Index   root_rti;
>>> 1901 Oid root_oid;
>>> 1902
>>> 1903 root_rti = linitial_int(node->partitioned_rels);
>>> 1904 root_oid = getrelid(root_rti, estate->es_range_table);
>>> 1905 rel = heap_open(root_oid, NoLock);  /* locked by InitPlan */
>>> 1906 }
>>> 1907 else
>>> 1908 rel = mtstate->resultRelInfo->ri_RelationDesc;
>>>
>>> I don't know whether we could change this code not to use
>>> PartitionedChildRelInfos::child_rels.
>
> For a root partitioned tables, ModifyTable.partitioned_rels comes from
> PartitionedChildRelInfo.child_rels recorded for the table by
> expand_inherited_rtnentry().  In fact, the latter is copied verbatim to
> ModifyTablePath (or AppendPath/MergeAppendPath) when creating the same.
> The only point of keeping those RT indexes around in the ModifyTable node
> is for the executor to be able to locate partitioned table RT entries and
> lock them.  Without them, the executor wouldn't know about those tables at
> all, because there won't be subplans corresponding to partitioned tables
> in the tree and hence their RT indexes won't appear in the
> ModifyTable.resultRelations list.  If your patch adds partitioned child
> rel AppendRelInfos back for whatever reason, you should also make sure in
> inheritance_planner() that their RT indexes don't end up the
> resultRelations list.  See this piece of code in inheritance_planner():
>
> 1351 /* Build list of sub-paths */
> 1352 subpaths = lappend(subpaths, subpath);
> 1353
> 1354 /* Build list of modified subroots, too */
> 1355 subroots = lappend(subroots, subroot);
> 1356
> 1357 /* Build list of target-relation RT indexes */
> 1358 resultRelations = lappend_int(resultRelations,
> appinfo->child_relid);
>
> Maybe it won't happen, because if this appinfo corresponds to a
> partitioned child table, recursion would occur and we'll get to this piece
> of code for only the leaf children.

You are right. We don't execute above lines for partitioned partitions.

>
> By the way, if you want to get rid of PartitionedChildRelInfo, you can do
> that as long as you find some other way of putting together the
> partitioned_rels list to add into the ModifyTable (Append/MergeAppend)
> node created for the root partitioned table.  Currently,
> PartitionedChildRelInfo (and the root->pcinfo_list) is the way for
> expand_inherited_rtentry() to pass that information to the planner's
> path-generating code.  We may be able to generate that list when actually
> creating the path using set_append_rel_pathlist() or
> inheritance_planner(), without having created a PartitionedChildRelInfo
> node beforehand.

AFAIU, the list contained RTIs of the relations, which didnt' have
corresponding AppendRelInfos to lock those relations. Now that we
create AppendRelInfos even for partitioned partitions, I don't think
we need the list to take care of the locks. Is there any other reason
why we maintain that list (apart from the trigger case I have raised
and Fujita-san says that the list is not required in that case as
well.)

>
>> Though I haven't read the patch yet, I think the above code is useless.
>> And I proposed a patch to clean it up before [1].  I'll add that patch to
>> the next commitfest.
>
> +1.
+1. Will Fujita-san's patch also handle getting rid of partitioned_rels list?

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: long transactions on hot standby feedback replica / proof of concept

2017-09-04 Thread Amit Kapila
On Mon, Sep 4, 2017 at 4:34 PM,   wrote:
> Our clients complain about this issue and therefore I want to raise the
> discussion and suggest several solutions to this problem:
>
> I. Why does PG use Fatal when Error is enough to release lock that rises
> lock conflict?
> "If (RecoveryConflictPending && DoingCommandRead)"
>
> II. Do we really need to truncate the table on hot standby exactly at the
> same time when truncate on master occurs?
>
> In my case conflict happens when the autovacuum truncates table tbl1 on
> master while backend on replica is performing a long transaction involving
> the same table tbl1. This happens because truncate takes an
> AccessExclusiveLock. To tackle this issue we have several options:
>
> 1. We can postpone the truncate on the master until all the replicas have
> finished their transactions (in this case, feedback requests to the master
> should be sent frequently)
> Patch 1
> vacuum_lazy_truncate.patch
>
> 2. Maybe there is an option somehow not to send AccessExclusiveLock and not
> to truncate table on the replica right away. We could try to wait a little
> and truncate tbl1 on replica again.
>

Can max_standby_streaming_delay help in this situation (point number - 2)?


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Release Note changes

2017-09-04 Thread Simon Riggs
On 4 September 2017 at 10:34, Andreas Joseph Krogh  wrote:

> I'd like at big red warning "Logical decoding doesn't support Large Objects"
> in that case;
>
> "If upgrading from a 9.4 server or later, and you don't use Large Objects,
> external utilities using logical decoding, such as pglogical or
> proprietary alternatives, can also provide an alternate route,
> often with lower downtime."
>
> pg_upgrade or pg_dump is really the only option for us using LOs.

Sounds like we could change that, or at least add a WARNING that there are LOs.

Patches welcome.

-- 
Simon Riggshttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Release Note changes

2017-09-04 Thread Simon Riggs
On 4 September 2017 at 12:39, Robert Haas  wrote:
> On Mon, Sep 4, 2017 at 7:14 AM, Magnus Hagander  wrote:
>> I agree that singling it out there is probably not the best idea. But a
>> sentence/paragraph saying that there are third party replication solutions
>> that can solve the problem, along with linking to the page with the list,
>> perhaps?
>
> Yeah, that seems fine.

A link to our docs page that covers those would be fine.

-- 
Simon Riggshttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] make async slave to wait for lsn to be replayed

2017-09-04 Thread i . kartyshov

Hello, thank you for your comments over main idea and code.

On 13.03.2017 05:20, Thomas Munro wrote:
1)

First, I'll restate my view of the syntax-vs-function question:  I
think an fmgr function is the wrong place to do this, because it
doesn't work for our 2 higher isolation levels as mentioned.  Most
people probably use READ COMMITTED most of the time so the extension
version you've proposed is certainly useful for many people and I like
it, but I will vote against inclusion in core of any feature that
doesn't consider all three of our isolation levels, especially if
there is no way to extend support to other levels later.  I don't want
PostgreSQL to keep adding features that eventually force everyone to
use READ COMMITTED because they want to use feature X, Y or Z.


Waiting for LSN is expected to be used on hot standby READ ONLY 
replication.

Only READ COMMITTED and REPEATABLE READ, are allowed on hot standby.


Maybe someone can think of a clever way for an extension to insert a
wait for a user-supplied LSN *before* acquiring a snapshot so it can
work for the higher levels, or maybe the hooks should go into core
PostgreSQL so that the extension can exist as an external project not
requiring a patched PostgreSQL installation, or maybe this should be
done with new core syntax that extends transaction commands.  Do other
people have views on this?


I think it is a good idea, but it is not clear for me, how to do it 
properly.


2)

+wl_lsn_updated_hook(void)
+{
+uint i;
+/*
+ * After update lastReplayedEndRecPtr set Latches in SHMEM array
+ */
+if (counter_waitlsn % count_waitlsn == 0
+|| 
TimestampDifferenceExceeds(time_waitlsn,GetCurrentTimestamp(),interval_waitlsn))

+{

Doesn't this mean that if you are waiting for LSN 1234, and the
primary sends that LSN and then doesn't send anything for another
hour, a standby waiting in pg_waitlsn is quite likely to skip that
update (either because of count_waitlsn or interval_waitlsn), and then
finish up waiting for a very long time?

I'm not sure if this is a good idea, but it's an idea:  You could keep
your update skipping logic, but make sure you always catch the
important case where recovery hits the end of WAL, by invoking your
callback from WaitForWALToBecomeAvailable.  It could have a boolean
parameter that means 'don't skip this one!'.  In other words, it's OK
to skip updates, but not if you know there is no more WAL available to
apply (since you have no idea how long it will be for more to arrive).

Calling GetCurrentTimestamp() at high frequency (after every WAL
record is replayed) may be a bad idea.  It's a slow system call on
some operating systems.  Can we use an existing timestamp for free,
like recoveryLastXTime?  Remembering that the motivation for using
this feature is to wait for *whole transactions* to be replayed and
become visible, there is no sensible reason to wait for random WAL
positions inside a transaction, so if you used that then you would
effectively skip all non-COMMIT records and also skip some COMMIT
records that are coming down the pipe too fast.


Yes, I applied this idea and prepared new patch.

3)

+static void
+wl_own_latch(void)
+{
+SpinLockAcquire(>l_arr[MyBackendId].slock);
+OwnLatch(>l_arr[MyBackendId].latch);
+is_latch_owned = true;
+
+if (state->backend_maxid < MyBackendId)
+state->backend_maxid = MyBackendId;
+
+state->l_arr[MyBackendId].pid = MyProcPid;
+SpinLockRelease(>l_arr[MyBackendId].slock);
+}

What is the point of using extra latches for this?  Why not just use
the standard latch?  Syncrep and many other things do that.  I'm not
actually sure if there is ever a reason to create more latches in
regular backends.  SIGUSR1 will be delivered and set the main latch
anyway.

There are cases of specialised latches in the system, like the wal
receiver latch, and I'm reliably informed that that solves problems
like delivering a wakeup message without having to know which backend
is currently the wal receiver (a problem we might consider solving
today with a condition variable?)  I don't think anything like that
applies here.


In my case I create a bunch of shared latches for each backend, I`ll 
think

how to use standard latches in an efficient way. And about specialized
latches on standby they are already in busy with wal replaying 
functions.


4)

+for (i = 0; i <= state->backend_maxid; i++)
+{
+SpinLockAcquire(>l_arr[i].slock);
+if (state->l_arr[i].pid != 0)
+SetLatch(>l_arr[i].latch);
+SpinLockRelease(>l_arr[i].slock);
+}

Once we get through the update-skipping logic above, we hit this loop
which acquires  spinlocks for every backend one after another and sets
the latches of every backend, no matter whether they are waiting for
the LSN that has been applied.  Assuming we go with this
scan-the-whole-array approach, why not include the LSN waited for in
the array slots, 

Re: [HACKERS] index-only count(*) for indexes supporting bitmap scans

2017-09-04 Thread Alexey Chernyshov
The following review has been posted through the commitfest application:
make installcheck-world:  tested, failed
Implements feature:   not tested
Spec compliant:   not tested
Documentation:not tested

Hi Alexander,

make check-world fails on contrib/postgres_fdw because of Subquery Scan on ... 
Probably, query plan was changed.

The new status of this patch is: Waiting on Author

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-09-04 Thread Bossart, Nathan
On 9/3/17, 11:46 PM, "Michael Paquier"  wrote:
> I did not consider first that the list portion also needed to be
> modified, perhaps because I am not coding that myself... So now that
> it is becoming more complicated what about just using AutovacMemCxt?
> This would simplify the list of VacuumRelation entries and the
> RangeVar creation as well, and honestly this is ugly and there are no
> other similar patterns in the backend code:

+1

> This would become way more readable by using makeRangeVar() and the
> new makeVacuumRelation. As this is partly my fault that we are at this
> state, I am fine as well to remove this burden from you, Nathan, and
> fix that myself in a new version. And I don't want to step on your
> toes either :)

No worries, I can take care of it.  I appreciate your patience with all
of these reviews.

Nathan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Secondary index access optimizations

2017-09-04 Thread Konstantin Knizhnik



On 04.09.2017 12:59, Amit Langote wrote:

Hi Konstantin,

On 2017/09/04 18:19, Konstantin Knizhnik wrote:

On 04.09.2017 05:38, Amit Langote wrote:

On 2017/09/02 12:44, Thomas Munro wrote:

On Wed, Aug 16, 2017 at 9:23 PM, Konstantin Knizhnik
 wrote:

postgres=# explain select * from bt where k between 1 and 2 and v
= 100;
QUERY PLAN
--
   Append  (cost=0.29..15.63 rows=2 width=8)
 ->  Index Scan using dti1 on dt1  (cost=0.29..8.30 rows=1 width=8)
   Index Cond: (v = 100)
 ->  Index Scan using dti2 on dt2  (cost=0.29..7.33 rows=1 width=8)
   Index Cond: (v = 100)
   Filter: (k <= 2)
(6 rows)

+1

This seems like a good feature to me: filtering stuff that is
obviously true is a waste of CPU cycles and may even require people to
add redundant stuff to indexes.  I was pondering something related to
this over in the partition-wise join thread (join quals that are
implied by partition constraints and should be discarded).

It'd be interesting to get Amit Langote's feedback, so I CC'd him.
I'd be surprised if he and others haven't got a plan or a patch for
this down the back of the sofa.

I agree that that's a good optimization in the cases it's correct.  Given
that check_index_predicates() already applies the same optimization when
considering using a partial index, it might make sense to try to do the
same even earlier for the table itself using its CHECK / NOT NULL
constraints as predicates (I said *earlier* because
relation_excluded_by_constrains happens for a relation before we look at
its indexes).  Also, at the end of relation_excluded_by_constraints() may
not be such a bad place to do this.

By the way, I read in check_index_predicates() that we should not apply
this optimization if the relation in question is a target of UPDATE /
DELETE / SELECT FOR UPDATE.

Please correct me if I wrong, but it seems to me that in case of table
constraints it is not necessary to specially handle update case.
As far as I understand we need to leave predicate in the plan in case of
partial indexes because due to "read committed" isolation policy
we may need to recheck that tuple still satisfies update condition (tuple
can be changed by some other committed transaction while we are waiting
for it and not satisfying this condition any more).
But no transaction can change tuple in such way that it violates table
constraints,  right? So we do not need to recheck it.

Actually, I don't really know why check_index_predicates() skips this
optimization in the target relation case, just wanted to point out that
that's so.

Thinking a bit from what you wrote, maybe we need not worry about
EvalPlanQual in the context of your proposed optimization based on the
table's constraints.


Concerning your suggestion to merge check_index_predicates() and
remove_restrictions_implied_by_constraints() functions: may be it can be
done, but frankly speaking I do not see much sense in it - there are too
much differences between this functions and too few code reusing.

Maybe, you meant to address Thomas here. :)  Reading his comment again, I
too am a bit concerned about destructively modifying the input rel's
baserestrictinfo.  There should at least be a comment that that's being done.
But I have considered Thomas comment and extracted code updating 
relation's baserestrictinfo from
relation_excluded_by_constraints() to 
remove_restrictions_implied_by_constraints() function. It was included 
in new version of the patch.


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Release Note changes

2017-09-04 Thread Robert Haas
On Mon, Sep 4, 2017 at 7:14 AM, Magnus Hagander  wrote:
> I agree that singling it out there is probably not the best idea. But a
> sentence/paragraph saying that there are third party replication solutions
> that can solve the problem, along with linking to the page with the list,
> perhaps?

Yeah, that seems fine.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Walsender timeouts and large transactions

2017-09-04 Thread Sokolov Yura
On 2017-05-25 17:52 Petr Jelinek wrote:
> Hi,
> 
> We have had issue with walsender timeout when used with logical
> decoding and the transaction is taking long time to be decoded
> (because it contains many changes)
> 
> I was looking today at the walsender code and realized that it's
> because if the network and downstream are fast enough, we'll always
> take fast path in WalSndWriteData which does not do reply or
> keepalive processing and is only reached once the transaction has
> finished by other code. So paradoxically we die of timeout because
> everything was fast enough to never fall back to slow code path.
> 
> I propose we only use fast path if the last processed reply is not
> older than half of walsender timeout, if it is then we'll force the
> slow code path to process the replies again. This is similar logic
> that we use to determine if to send keepalive message. I also added
> CHECK_INTERRUPRS call to fast code path because otherwise walsender
> might ignore them for too long on large transactions.
> 
> Thoughts?
> 

On 2017-08-10 14:20 Sokolov Yura wrote:
> On 2017-08-09 16:23, Petr Jelinek wrote:
> > On 02/08/17 19:35, Yura Sokolov wrote:  
> >> The following review has been posted through the commitfest 
> >> application:
> >> make installcheck-world:  tested, passed
> >> Implements feature:   not tested
> >> Spec compliant:   not tested
> >> Documentation:not tested
> >> 
> >> There is no check for (last_reply_timestamp <= 0 ||
> >> wal_sender_timeout <= 0) as in other places
> >> (in WalSndKeepaliveIfNecessary for example).
> >> 
> >> I don't think moving update of 'now' down to end of loop body is 
> >> correct:
> >> there are calls to ProcessConfigFile with SyncRepInitConfig, 
> >> ProcessRepliesIfAny that can
> >> last non-negligible time. It could lead to over sleeping due to
> >> larger computed sleeptime.
> >> Though I could be mistaken.
> >> 
> >> I'm not sure about moving `if (!pg_is_send_pending())` in a body
> >> loop after WalSndKeepaliveIfNecessary.
> >> Is it necessary? But it looks harmless at least.
> >>   
> > 
> > We also need to do actual timeout handing so that the timeout is not
> > deferred to the end of the transaction (Which is why I moved `if
> > (!pg_is_send_pending())` under WalSndCheckTimeOut() and
> > WalSndKeepaliveIfNecessary() calls).
> >   
> 
> If standby really stalled, then it will not read from socket, and then
> `pg_is_sendpending` eventually will return false, and timeout will be 
> checked.
> If standby doesn't stall, then `last_reply_timestamp` will be updated
> in `ProcessRepliesIfAny`, and so timeout will not be triggered.
> Do I understand correctly, or I missed something?
> 
> >> Could patch be reduced to check after first `if 
> >> (!pg_is_sendpending())` ? like:
> >> 
> >>if (!pq_is_send_pending())
> >> -  return;
> >> +  {
> >> +  if (last_reply_timestamp <= 0 ||
> >> wal_sender_timeout <= 0)
> >> +  {
> >> +  CHECK_FOR_INTERRUPTS();
> >> +  return;
> >> +  }
> >> +  if (now <=
> >> TimestampTzPlusMilliseconds(last_reply_timestamp,
> >> wal_sender_timeout / 2))
> >> +  return;
> >> +  }
> >> 
> >> If not, what problem prevents?  
> > 
> > We should do CHECK_FOR_INTERRUPTS() independently of
> > pq_is_send_pending so that it's possible to stop walsender while
> > it's processing large transaction.  
> 
> In this case CHECK_FOR_INTERRUPTS will be called after 
> wal_sender_timeout/2.
> (This diff is for first appearance of `pq_is_send_pending` in a 
> function).
> 
> If it should be called more often, then patch could be simplier:
> 
>   if (!pq_is_send_pending())
> - return;
> + {
> + CHECK_FOR_INTERRUPTS();
> + if (last_reply_timestamp <= 0 || wal_sender_timeout
> <= 0 ||
> + now <=
> TimestampTzPlusMilliseconds(last_reply_timestamp,
> wal_sender_timeout / 2))
> + return;
> + }
> 
> (Still, I could be mistaken, it is just suggestion).
> 
> Is it hard to add test for case this patch fixes?
> 
> With regards,

Tom, Robert,

I believe this bug have to be fixed in Pg10, and I don't wonna
be that guy who prevents it from being fixed.
What should/could I do?
( https://commitfest.postgresql.org/14/1151/ )

--
With regards,
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CLUSTER command progress monitor

2017-09-04 Thread Tatsuro Yamada

On 2017/09/04 15:38, Michael Paquier wrote:

On Thu, Aug 31, 2017 at 11:12 AM, Tatsuro Yamada
 wrote:

Then I have questions.

   * Should we have separate views for them?  Or should both be covered by the
 same view with some indication of which command (CLUSTER or VACUUM FULL)
 is actually running?


Using the same view for both, and tell that this is rather VACUUM or
CLUSTER in the view, would be better IMO. Coming up with a name more
generic than pg_stat_progress_cluster may be better though if this
speaks with VACUUM FULL as well, user-facing documentation does not
say that VACUUM FULL is actually CLUSTER.


Thanks for sharing your thoughts.
Agreed.
I'll add new column like a "command" to tell whether running CLUSTER or VACUUM.
And how about this new view name?
 - pg_stat_progress_reorg
Is it more general name than previous name if it covers both commands?



I'll add this patch to CF2017-09.
Any comments or suggestion are welcome.


Nice to see that you are taking the time to implement patches for
upstream, Yamada-san!


Same here. :)
I'd like to contribute creating feature that is for DBA and users.
Progress monitoring feature is important from my DBA experiences.
I'm happy if you lend your hand.

Thanks,
Tatsuro Yamada




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Release Note changes

2017-09-04 Thread Magnus Hagander
On Mon, Sep 4, 2017 at 1:11 PM, Robert Haas  wrote:

> On Mon, Sep 4, 2017 at 4:49 AM, Simon Riggs  wrote:
> > Migration to Version 10
> >
> > "A dump/restore using pg_dumpall, or use of pg_upgrade, is required
> > for those wishing to migrate data from any previous release."
> >
> > This isn't true and is seriously misleading ...
>
> Fair point.
>
> > since pglogical and other
> > proprietary tools exist that were designed specifically to allow this.
> > Suggested additional wording would be...
> >
> > "If upgrading from a 9.4 server or later, external utilities using
> > logical decoding, such as pglogical or proprietary alternatives, can
> > also provide an alternate route, often with lower downtime."
>
> It's not really true that the only alternatives to pglogical are
> proprietary.  Really, one could use any logical replication solution,
> and there have been multiple open source alternatives for a decade.
>
> > Our docs mention pglogical already, so don't see an issue with
> > mentioning it here.
>
> The existing reference is alongside a bunch of other third-party
> tools.  Mentioning it at the very top of our release notes would give
> it a pride of place with which I'm not too comfortable.
>

I agree that singling it out there is probably not the best idea. But a
sentence/paragraph saying that there are third party replication solutions
that can solve the problem, along with linking to the page with the list,
perhaps?

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: [HACKERS] Release Note changes

2017-09-04 Thread Robert Haas
On Mon, Sep 4, 2017 at 4:49 AM, Simon Riggs  wrote:
> Migration to Version 10
>
> "A dump/restore using pg_dumpall, or use of pg_upgrade, is required
> for those wishing to migrate data from any previous release."
>
> This isn't true and is seriously misleading ...

Fair point.

> since pglogical and other
> proprietary tools exist that were designed specifically to allow this.
> Suggested additional wording would be...
>
> "If upgrading from a 9.4 server or later, external utilities using
> logical decoding, such as pglogical or proprietary alternatives, can
> also provide an alternate route, often with lower downtime."

It's not really true that the only alternatives to pglogical are
proprietary.  Really, one could use any logical replication solution,
and there have been multiple open source alternatives for a decade.

> Our docs mention pglogical already, so don't see an issue with
> mentioning it here.

The existing reference is alongside a bunch of other third-party
tools.  Mentioning it at the very top of our release notes would give
it a pride of place with which I'm not too comfortable.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] WIP: long transactions on hot standby feedback replica / proof of concept

2017-09-04 Thread i . kartyshov
Our clients complain about this issue and therefore I want to raise the 
discussion and suggest several solutions to this problem:


I. Why does PG use Fatal when Error is enough to release lock that rises 
lock conflict?

"If (RecoveryConflictPending && DoingCommandRead)"

II. Do we really need to truncate the table on hot standby exactly at 
the same time when truncate on master occurs?


In my case conflict happens when the autovacuum truncates table tbl1 on 
master while backend on replica is performing a long transaction 
involving the same table tbl1. This happens because truncate takes an 
AccessExclusiveLock. To tackle this issue we have several options:


1. We can postpone the truncate on the master until all the replicas 
have finished their transactions (in this case, feedback requests to the 
master should be sent frequently)

Patch 1
vacuum_lazy_truncate.patch

2. Maybe there is an option somehow not to send AccessExclusiveLock and 
not to truncate table on the replica right away. We could try to wait a 
little and truncate tbl1 on replica again.


Here is a patch that makes replica skip truncate WAL record if some 
transaction using the same table is already running on replica. And it 
also forces master to send truncate_wal to replica with actual table 
length every time autovacuum starts.

Patch 2
standby_truncate_skip_v1.patch

In this case the transaction which is running for several hours won’t be 
interrupted because of truncate. Even if for some reason we haven’t 
truncated this table tbl1 right away, nothing terrible will happen. The 
next truncate wal record will reduce the file size by the actual length 
(if some inserts or updates have been performed on master).


If you have any ideas or concerns about suggested solution I’ll be glad 
to hear them.

Thanks!

--
Ivan Kartyshov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Companydiff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 45b1859..bf73a1b 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -280,7 +280,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	 * Optionally truncate the relation.
 	 */
 	if (should_attempt_truncation(vacrelstats))
-		lazy_truncate_heap(onerel, vacrelstats);
+		if (OldestXmin >= ShmemVariableCache->latestCompletedXid)
+			lazy_truncate_heap(onerel, vacrelstats);
 
 	/* Report that we are now doing final cleanup */
 	pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,
diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c
index 9a5fde0..a6dc369 100644
--- a/src/backend/catalog/storage.c
+++ b/src/backend/catalog/storage.c
@@ -31,6 +31,9 @@
 #include "storage/smgr.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "storage/lmgr.h"
+#include "storage/procarray.h"
+#include "access/transam.h"
 
 /*
  * We keep a list of all relations (represented as RelFileNode values)
@@ -495,9 +498,21 @@ smgr_redo(XLogReaderState *record)
 		xl_smgr_truncate *xlrec = (xl_smgr_truncate *) XLogRecGetData(record);
 		SMgrRelation reln;
 		Relation	rel;
+		LOCKTAG		locktag;
+		VirtualTransactionId *backends;
+		VirtualTransactionId *backends2;
 
 		reln = smgropen(xlrec->rnode, InvalidBackendId);
 
+		SET_LOCKTAG_RELATION(locktag, reln->smgr_rnode.node.dbNode,
+			 reln->smgr_rnode.node.relNode);
+
+		backends = GetLockConflicts(, AccessExclusiveLock);
+		backends2 = GetConflictingVirtualXIDs(InvalidTransactionId, InvalidOid);
+
+		if (!VirtualTransactionIdIsValid(*backends) && !VirtualTransactionIdIsValid(*backends2))
+		{
+
 		/*
 		 * Forcibly create relation if it doesn't exist (which suggests that
 		 * it was dropped somewhere later in the WAL sequence).  As in
@@ -542,6 +557,10 @@ smgr_redo(XLogReaderState *record)
 			visibilitymap_truncate(rel, xlrec->blkno);
 
 		FreeFakeRelcacheEntry(rel);
+		} else
+		{
+			XLogFlush(lsn);
+		}
 	}
 	else
 		elog(PANIC, "smgr_redo: unknown op code %u", info);
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 45b1859..fd91db0 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -61,6 +61,8 @@
 #include "utils/pg_rusage.h"
 #include "utils/timestamp.h"
 #include "utils/tqual.h"
+#include "catalog/storage_xlog.h"
+#include "access/rmgr.h"
 
 
 /*
@@ -317,6 +319,32 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 		new_rel_pages = vacrelstats->old_rel_pages;
 		new_rel_tuples = vacrelstats->old_rel_tuples;
 	}
+	else
+	{
+		if (RelationNeedsWAL(onerel))
+		{
+			/* Get the current relation length */
+			LockRelationForExtension(onerel, ExclusiveLock);
+
+			/*
+			 * Make an XLOG entry reporting the file truncation.
+			 */
+			XLogRecPtr	lsn;
+			xl_smgr_truncate xlrec;
+
+			xlrec.blkno = vacrelstats->rel_pages;
+			xlrec.rnode = onerel->rd_node;
+
+			XLogBeginInsert();
+			XLogRegisterData((char *) , sizeof(xlrec));
+
+			

Re: [HACKERS] [POC] hash partitioning

2017-09-04 Thread amul sul
I've updated patch to use an extended hash function (​Commit #
81c5e46c490e2426db243eada186995da5bb0ba7) for the partitioning.

Regards,
Amul


On Thu, Jul 27, 2017 at 5:11 PM, amul sul  wrote:

> Attaching newer patches rebased against the latest master head. Thanks !
>
> Regards,
> Amul
>


0001-Cleanup_v6.patch
Description: Binary data


0002-hash-partitioning_another_design-v17.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: pg_rewind to skip config files

2017-09-04 Thread Chris Travers
On Mon, Sep 4, 2017 at 12:23 PM, Michael Paquier 
wrote:

> On Mon, Sep 4, 2017 at 7:21 PM, Michael Paquier
>  wrote:
> > A simple idea would be to pass as a parameter a regex on which we
> > check files to skip when scanning the directory of the target remotely
> > or locally. This needs to be used with care though, it would be easy
> > to corrupt an instance.
>
> I actually shortcut that with a strategy similar to base backups: logs
> are on another partition, log_directory uses an absolute path, and
> PGDATA has no reference to the log path.
>

Yeah, it is quite possible to move all these out of the data directory, but
bad things can happen when you accidentally copy configuration or logs over
those on the target and expecting that all environments will be properly
set up to avoid these problems is not always a sane assumption.

So consequently, I think it would be good to fix in the tool.  The
fundamental question is if there is any reason someone would actually want
to copy config files over.


--
> Michael
>



-- 
Best Regards,
Chris Travers
Database Administrator

Tel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com
Saarbrücker Straße 37a, 10405 Berlin


Re: [HACKERS] Proposal: pg_rewind to skip config files

2017-09-04 Thread Chris Travers
Ok so I have a proof of concept patch here.

This is proof of concept only.  It odes not change documentation or the
like.

The purpose of the patch is discussion on the "do we want this" side.

The patch is fairly trivial but I have not added test cases or changed docs
yet.

Intention of the patch:
pg_rewind is an important backbone tool for recovering data directories
following a switchover.  However currently it is over inclusive as to what
it copies.  This patch excludes any file ending in "serverlog", ".conf",
and ".log" because these are never directly related and add a great deal of
complexity to switchovers.

.conf files are excluded for two major reasons.  The first is that often we
may want to put postgresql.conf and other files in the data directory, and
if we change these during switchover this can change, for example, the port
the database is running on or other things that can break production or
testing environments.  This is usually a problem with testing environments,
but it could happen with production environments as well.

A much larger concern with .conf files though is the recovery.conf.  This
file MUST be put in the data directory, and it helps determine the
replication topology regarding cascading replication and the like.  If you
rewind from an upstream replica, you suddenly change the replication
topology and that can have wide-ranging impacts.

I think we are much better off insisting that .conf files should be copied
separately because the scenarios where you want to do so are more limited
and the concern often separate from rewinding the timeline itself.

The second major exclusion added are files ending in "serverlog" and
".log."  I can find no good reason why server logs from the source should
*ever* clobber those on the target.  If you do this, you lose historical
information relating to problems and introduce management issues.


Backwards-incompatibility scenarios:
If somehow one has a workflow that depends on copying .conf files, this
would break that.  I cannot think of any cases where anyone would actually
want to do this but that doesn't mean they aren't out there.  If people
really want to, then they need to copy the configuration files they want
separately.

Next Steps:

If people like this idea I will add test cases and edit documentation as
appropriate.

-- 
Best Regards,
Chris Travers
Database Administrator

Tel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com
Saarbrücker Straße 37a, 10405 Berlin


pg_rewind_log_conf_patch.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: pg_rewind to skip config files

2017-09-04 Thread Michael Paquier
On Mon, Sep 4, 2017 at 7:21 PM, Michael Paquier
 wrote:
> A simple idea would be to pass as a parameter a regex on which we
> check files to skip when scanning the directory of the target remotely
> or locally. This needs to be used with care though, it would be easy
> to corrupt an instance.

I actually shortcut that with a strategy similar to base backups: logs
are on another partition, log_directory uses an absolute path, and
PGDATA has no reference to the log path.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: pg_rewind to skip config files

2017-09-04 Thread Michael Paquier
On Mon, Sep 4, 2017 at 5:53 PM, Chris Travers  wrote:
> In some experiments with pg_rewind and rep mgr I noticed that local testing
> is complicated by the fact that pg_rewind appears to copy configuration
> files from the source to target directory.
>
> I would propose to make a modest patch to exclude postgresql.conf,
> pg_hba.conf, and pg_ident.conf from the file tree traversal.
>
> Any feedback before I create.a proof of concept?

A simple idea would be to pass as a parameter a regex on which we
check files to skip when scanning the directory of the target remotely
or locally. This needs to be used with care though, it would be easy
to corrupt an instance.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] dropping partitioned tables without CASCADE

2017-09-04 Thread Alvaro Herrera

if (tuples > 0)
{
-   if (tableinfo.relkind != 
RELKIND_PARTITIONED_TABLE)
-   printfPQExpBuffer(, _("Number of 
child tables: %d (Use \\d+ to list them.)"), tuples);
-   else
-   printfPQExpBuffer(, _("Number of 
partitions: %d (Use \\d+ to list them.)"), tuples);
+   printfPQExpBuffer(, _("Number of %s: %d 
(Use \\d+ to list them.)"), ct, tuples);
printTableAddFooter(, buf.data);
}

Please don't do this, because it breaks translatability.  I think the
original is fine.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Useless code in ExecInitModifyTable

2017-09-04 Thread Etsuro Fujita

On 2017/06/21 17:57, Amit Langote wrote:

On 2017/06/21 16:59, Etsuro Fujita wrote:

Commit d3cc37f1d801a6b5cad9bf179274a8d767f1ee50 added this to
ExecInitModifyTable:

+   /* The root table RT index is at the head of the partitioned_rels list */
+   if (node->partitioned_rels)
+   {
+   Index   root_rti;
+   Oid root_oid;
+
+   root_rti = linitial_int(node->partitioned_rels);
+   root_oid = getrelid(root_rti, estate->es_range_table);
+   rel = heap_open(root_oid, NoLock);  /* locked by InitPlan */
+   }

but I noticed that that function doesn't use the relation descriptor at
all.  Since partitioned_rels is given in case of an UPDATE/DELETE on a
partitioned table, the relation is opened in that case, but the relation
descriptor isn't referenced at all in initializing WITH CHECK OPTION
constraints and/or RETURNING projections.  (The mtstate->resultRelinfo
array is referenced in those initialization, instead.)  So, I'd like to
propose to remove this from that function.  Attached is a patch for that.


Thanks for cleaning that up.  I cannot see any problem in applying the patch.


I noticed that the patch needs to be rebased.  Updated patch attached.

Thanks for the review!

Best regards,
Etsuro Fujita
*** a/src/backend/executor/nodeModifyTable.c
--- b/src/backend/executor/nodeModifyTable.c
***
*** 45,51 
  #include "foreign/fdwapi.h"
  #include "miscadmin.h"
  #include "nodes/nodeFuncs.h"
- #include "parser/parsetree.h"
  #include "storage/bufmgr.h"
  #include "storage/lmgr.h"
  #include "utils/builtins.h"
--- 45,50 
***
*** 1894,1913  ExecInitModifyTable(ModifyTable *node, EState *estate, int 
eflags)
  
estate->es_result_relation_info = saved_resultRelInfo;
  
-   /* The root table RT index is at the head of the partitioned_rels list 
*/
-   if (node->partitioned_rels)
-   {
-   Index   root_rti;
-   Oid root_oid;
- 
-   root_rti = linitial_int(node->partitioned_rels);
-   root_oid = getrelid(root_rti, estate->es_range_table);
-   rel = heap_open(root_oid, NoLock);  /* locked by InitPlan */
-   }
-   else
-   rel = mtstate->resultRelInfo->ri_RelationDesc;
- 
/* Build state for INSERT tuple routing */
if (operation == CMD_INSERT &&
rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
{
--- 1893,1900 
  
estate->es_result_relation_info = saved_resultRelInfo;
  
/* Build state for INSERT tuple routing */
+   rel = mtstate->resultRelInfo->ri_RelationDesc;
if (operation == CMD_INSERT &&
rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
{
***
*** 2091,2100  ExecInitModifyTable(ModifyTable *node, EState *estate, int 
eflags)
mtstate->ps.ps_ExprContext = NULL;
}
  
-   /* Close the root partitioned rel if we opened it above. */
-   if (rel != mtstate->resultRelInfo->ri_RelationDesc)
-   heap_close(rel, NoLock);
- 
/*
 * If needed, Initialize target list, projection and qual for ON 
CONFLICT
 * DO UPDATE.
--- 2078,2083 

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [COMMITTERS] pgsql: pg_rewind: Fix some problems when copying files >2GB.

2017-09-04 Thread Alvaro Herrera
Tom Lane wrote:
> Michael Paquier  writes:
> > I don't like breaking the abstraction of pg_log() with the existing
> > flags with some kind of pg_debug() layer. The set of APIs present now
> > in pg_rewind for logging is nice to have, and I think that those debug
> > messages should be translated. So what about the attached?
> 
> Your point about INT64_FORMAT not necessarily working with fprintf
> is an outstanding reason not to keep it like it is.  I've not reviewed
> this patch in detail but I think this is basically the way to fix it.

Actually this code goes throgh vsnprintf, not fprintf, which should be
safe, so I removed that part of the comment, and pushed.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Secondary index access optimizations

2017-09-04 Thread Amit Langote
Hi Konstantin,

On 2017/09/04 18:19, Konstantin Knizhnik wrote:
> On 04.09.2017 05:38, Amit Langote wrote:
>> On 2017/09/02 12:44, Thomas Munro wrote:
>>> On Wed, Aug 16, 2017 at 9:23 PM, Konstantin Knizhnik
>>>  wrote:
 postgres=# explain select * from bt where k between 1 and 2 and v
 = 100;
    QUERY PLAN
 --
   Append  (cost=0.29..15.63 rows=2 width=8)
     ->  Index Scan using dti1 on dt1  (cost=0.29..8.30 rows=1 width=8)
   Index Cond: (v = 100)
     ->  Index Scan using dti2 on dt2  (cost=0.29..7.33 rows=1 width=8)
   Index Cond: (v = 100)
   Filter: (k <= 2)
 (6 rows)
>>> +1
>>>
>>> This seems like a good feature to me: filtering stuff that is
>>> obviously true is a waste of CPU cycles and may even require people to
>>> add redundant stuff to indexes.  I was pondering something related to
>>> this over in the partition-wise join thread (join quals that are
>>> implied by partition constraints and should be discarded).
>>>
>>> It'd be interesting to get Amit Langote's feedback, so I CC'd him.
>>> I'd be surprised if he and others haven't got a plan or a patch for
>>> this down the back of the sofa.
>> I agree that that's a good optimization in the cases it's correct.  Given
>> that check_index_predicates() already applies the same optimization when
>> considering using a partial index, it might make sense to try to do the
>> same even earlier for the table itself using its CHECK / NOT NULL
>> constraints as predicates (I said *earlier* because
>> relation_excluded_by_constrains happens for a relation before we look at
>> its indexes).  Also, at the end of relation_excluded_by_constraints() may
>> not be such a bad place to do this.
>>
>> By the way, I read in check_index_predicates() that we should not apply
>> this optimization if the relation in question is a target of UPDATE /
>> DELETE / SELECT FOR UPDATE.
>
> Please correct me if I wrong, but it seems to me that in case of table
> constraints it is not necessary to specially handle update case.
> As far as I understand we need to leave predicate in the plan in case of
> partial indexes because due to "read committed" isolation policy
> we may need to recheck that tuple still satisfies update condition (tuple
> can be changed by some other committed transaction while we are waiting
> for it and not satisfying this condition any more).
> But no transaction can change tuple in such way that it violates table
> constraints,  right? So we do not need to recheck it.

Actually, I don't really know why check_index_predicates() skips this
optimization in the target relation case, just wanted to point out that
that's so.

Thinking a bit from what you wrote, maybe we need not worry about
EvalPlanQual in the context of your proposed optimization based on the
table's constraints.

> Concerning your suggestion to merge check_index_predicates() and
> remove_restrictions_implied_by_constraints() functions: may be it can be
> done, but frankly speaking I do not see much sense in it - there are too
> much differences between this functions and too few code reusing.

Maybe, you meant to address Thomas here. :)  Reading his comment again, I
too am a bit concerned about destructively modifying the input rel's
baserestrictinfo.  There should at least be a comment that that's being done.

Thanks,
Amit



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Improve geometric types

2017-09-04 Thread Aleksander Alekseev
The following review has been posted through the commitfest application:
make installcheck-world:  tested, failed
Implements feature:   not tested
Spec compliant:   not tested
Documentation:not tested

PostgreSQL fails with SIGSEGV during `make check-world`.

Backtrace: http://afiskon.ru/s/d4/f3dc17838a_sigsegv.txt
regression.diffs (not very useful): 
http://afiskon.ru/s/ac/ac5294656c_regression.diffs.txt
regression.out: http://afiskon.ru/s/70/39d872e2b8_regression.out.txt
How to reproduce: https://github.com/afiskon/pgscripts/blob/master/full-build.sh

The environment is Arch Linux x64, gcc 7.1.1

The new status of this patch is: Waiting on Author

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Parallel worker error

2017-09-04 Thread Amit Kapila
On Sat, Sep 2, 2017 at 12:21 PM, Noah Misch  wrote:
> On Thu, Aug 31, 2017 at 03:11:10PM -0400, Robert Haas wrote:
>> On Wed, Aug 30, 2017 at 11:19 AM, Robert Haas  wrote:
>> > But since that's an established design fl^H^Hprinciple, maybe that
>> > means we should go with the approach of teaching SerializeGUCState()
>> > to ignore role altogether and instead have ParallelWorkerMain call
>> > SetCurrentRoleId using information passed via the FixedParallelState
>> > (not sure of the precise details here).
>>
>> Could I get some opinions on the virtues of this approach, vs. any of
>> the other suggestions at or near
>> http://postgr.es/m/ca+tgmoasp90e33-mu2ypgs73ttj37m5hv-xqhjc7tpqx9wx...@mail.gmail.com
>> ?
>
> It seems good to me, better than the other options in that mail.
>

It seems like the consensus is to move forward with this approach.  I
have written a patch implementing the above idea.  Note, that to use
SetCurrentRoleId, we need the value of guc "is_superuser" for the
current user and we don't pass this value to parallel workers as this
is PGC_INTERNAL guc variable.  So, I have passed this value via
FixedParallelState.

After this patch, I think the check of InitializingParallelWorker in
check_role function is redundant.  I have prepared a separate patch
for it, but may be it can be handled with the main patch to fix the
issue.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


fix_role_handling_parallel_worker_v1.patch
Description: Binary data


remove_redundant_check_from_check_role_v1.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Release Note changes

2017-09-04 Thread Andreas Joseph Krogh
På mandag 04. september 2017 kl. 10:49:40, skrev Simon Riggs <
si...@2ndquadrant.com >:
Migration to Version 10

 "A dump/restore using pg_dumpall, or use of pg_upgrade, is required
 for those wishing to migrate data from any previous release."

 This isn't true and is seriously misleading since pglogical and other
 proprietary tools exist that were designed specifically to allow this.
 Suggested additional wording would be...

 "If upgrading from a 9.4 server or later, external utilities using
 logical decoding, such as pglogical or proprietary alternatives, can
 also provide an alternate route, often with lower downtime."

 Our docs mention pglogical already, so don't see an issue with
 mentioning it here.
 
I'd like at big red warning "Logical decoding doesn't support Large Objects" 
in that case;
 
"If upgrading from a 9.4 server or later, and you don't use Large Objects,
external utilities using logical decoding, such as pglogical or
proprietary alternatives, can also provide an alternate route,
often with lower downtime."
 
pg_upgrade or pg_dump is really the only option for us using LOs.
 
-- Andreas Joseph Krogh




Re: [HACKERS] Proposal: pg_rewind to skip config files

2017-09-04 Thread Sokolov Yura

On 2017-09-04 11:53, Chris Travers wrote:

In some experiments with pg_rewind and rep mgr I noticed that local
testing is complicated by the fact that pg_rewind appears to copy
configuration files from the source to target directory.

I would propose to make a modest patch to exclude postgresql.conf,
pg_hba.conf, and pg_ident.conf from the file tree traversal.

Any feedback before I create.a proof of concept?

--

Best Regards,
Chris Travers
Database Administrator

Tel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com [1]
Saarbrücker Straße 37a, 10405 Berlin



Links:
--
[1] http://www.adjust.com/


And we had production issue with pg_rewind which copied huge textual
logs from pg_log (20GB each, cause statements were logged for
statistic). It will be convenient to tell pg_rewind not to copy logs
too.

--
Sokolov Yura
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Secondary index access optimizations

2017-09-04 Thread Konstantin Knizhnik



On 04.09.2017 05:38, Amit Langote wrote:

On 2017/09/02 12:44, Thomas Munro wrote:

On Wed, Aug 16, 2017 at 9:23 PM, Konstantin Knizhnik
 wrote:

postgres=# explain select * from bt where k between 1 and 2 and v = 100;
   QUERY PLAN
--
  Append  (cost=0.29..15.63 rows=2 width=8)
->  Index Scan using dti1 on dt1  (cost=0.29..8.30 rows=1 width=8)
  Index Cond: (v = 100)
->  Index Scan using dti2 on dt2  (cost=0.29..7.33 rows=1 width=8)
  Index Cond: (v = 100)
  Filter: (k <= 2)
(6 rows)

+1

This seems like a good feature to me: filtering stuff that is
obviously true is a waste of CPU cycles and may even require people to
add redundant stuff to indexes.  I was pondering something related to
this over in the partition-wise join thread (join quals that are
implied by partition constraints and should be discarded).

It'd be interesting to get Amit Langote's feedback, so I CC'd him.
I'd be surprised if he and others haven't got a plan or a patch for
this down the back of the sofa.

I agree that that's a good optimization in the cases it's correct.  Given
that check_index_predicates() already applies the same optimization when
considering using a partial index, it might make sense to try to do the
same even earlier for the table itself using its CHECK / NOT NULL
constraints as predicates (I said *earlier* because
relation_excluded_by_constrains happens for a relation before we look at
its indexes).  Also, at the end of relation_excluded_by_constraints() may
not be such a bad place to do this.

By the way, I read in check_index_predicates() that we should not apply
this optimization if the relation in question is a target of UPDATE /
DELETE / SELECT FOR UPDATE.
Please correct me if I wrong, but it seems to me that in case of table 
constraints it is not necessary to specially handle update case.
As far as I understand we need to leave predicate in the plan in case of 
partial indexes because due to "read committed" isolation policy
we may need to recheck that tuple still satisfies update condition 
(tuple can be changed by some other committed transaction while we are 
waiting for it and not satisfying this condition any more).
But no transaction can change tuple in such way that it violates table 
constraints,  right? So we do not need to recheck it.


Concerning your suggestion to merge check_index_predicates() and 
remove_restrictions_implied_by_constraints() functions: may be it can be 
done, but frankly speaking I do not see much sense in it - there are too 
much differences between this functions and too few code reusing.


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Proposal: pg_rewind to skip config files

2017-09-04 Thread Chris Travers
In some experiments with pg_rewind and rep mgr I noticed that local testing
is complicated by the fact that pg_rewind appears to copy configuration
files from the source to target directory.

I would propose to make a modest patch to exclude postgresql.conf,
pg_hba.conf, and pg_ident.conf from the file tree traversal.

Any feedback before I create.a proof of concept?

-- 
Best Regards,
Chris Travers
Database Administrator

Tel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com
Saarbrücker Straße 37a, 10405 Berlin


[HACKERS] Release Note changes

2017-09-04 Thread Simon Riggs
Migration to Version 10

"A dump/restore using pg_dumpall, or use of pg_upgrade, is required
for those wishing to migrate data from any previous release."

This isn't true and is seriously misleading since pglogical and other
proprietary tools exist that were designed specifically to allow this.
Suggested additional wording would be...

"If upgrading from a 9.4 server or later, external utilities using
logical decoding, such as pglogical or proprietary alternatives, can
also provide an alternate route, often with lower downtime."

Our docs mention pglogical already, so don't see an issue with
mentioning it here.

-- 
Simon Riggshttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] initdb failure on Debian sid/mips64el in EventTriggerEndCompleteQuery

2017-09-04 Thread Christoph Berg
Re: Tom Lane 2017-08-13 <14677.1502638...@sss.pgh.pa.us>
> Christoph Berg  writes:
> > Seems to be a gcc-7 problem affecting several packages on mips64el:
> > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=871514
> 
> Hm, unless there is a use of sigsetjmp earlier in that clamav
> routine, I would not assume that that's the same issue.  The bug
> I suspect we are looking at here is very specific to sigsetjmp
> callers: it usually amounts to the compiler unsafely trying to
> use the same temporary location for multiple purposes.

It appears to have been the same issue - non-long ints spilled on the
stack and loaded back as long int:

Changes:
 gcc-7 (7.2.0-3) unstable; urgency=high
 .
   * Update to SVN 20170901 (r251583) from the gcc-7-branch.
 - Fix PR target/81504 (PPC), PR c++/82040.
   * Apply proposed patch for PR target/81803 (James Cowgill), conditionally
 for mips* targets. Closes: #871514.

The package built successfully on mips64el now.

Christoph


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] Bug in Physical Replication Slots (at least 9.5)?

2017-09-04 Thread Michael Paquier
On Mon, Sep 4, 2017 at 4:04 PM, Andres Freund  wrote:
> I've not read through the thread, but this seems like the wrong approach
> to me. The receiving side should use a correct value, instead of putting
> this complexity on the sender's side.

Yes I agree with that. The current patch gives me a bad feeling to be
honest with the way it does things..
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] dropping partitioned tables without CASCADE

2017-09-04 Thread Ashutosh Bapat
On Mon, Sep 4, 2017 at 10:34 AM, Amit Langote
 wrote:
> Hi Ashutosh,
>
> On 2017/09/04 13:51, Ashutosh Bapat wrote:
>> Hi,
>> Thomas's application to track patches told me that this patch needs
>> rebase. It also required some changes to the code. Here's the updated
>> version. I have squashed those two patches together.
>
> Thanks for the rebased patch.
>
> Would it make sense to post them in a new thread?  When this message
> popped into my inbox in a thread titled "dropping partitioned tables
> without CASCADE", I wondered for a second if there is still something left
> to be done about that. :)

That would be good. We will have to update the thread in commitfest
entry though.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


  1   2   >