Re: [HACKERS] [PATCH] PostgresNode.pm enhancements, pg_lsn helper, and some more recovery tests

2017-01-04 Thread Simon Riggs
On 5 January 2017 at 04:50, Michael Paquier  wrote:

> The perldoc documentation is broken for the new routines.
...
> Attached is a patch to fix all those small issues.

Thanks, looks good, will apply.


> It may be a good idea to run perltidy on top of that to be honest...

Should we add that to the makefile? I guess start a new thread if you think so.

Anything that helps me check it's correct is a good thing AFAICS.

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] logical decoding of two-phase transactions

2017-01-04 Thread Simon Riggs
On 4 January 2017 at 21:20, Simon Riggs  wrote:
> On 31 December 2016 at 08:36, Stas Kelvich  wrote:
>> Here is a resubmission of the patch to implement logical decoding of two-phase
>> transactions (instead of treating them as a usual transaction at commit) [1].
>> I've slightly polished things and used the test_decoding output plugin as the
>> client.
>
> Sounds good.
>
>> The general idea is quite simple here:
>>
>> * Write gid along with commit/prepare records in case of 2pc
>
> GID is now variable sized. You seem to have added this to every
> commit, not just 2PC.

I've just realised that you're adding GID because it allows you to
uniquely identify the prepared xact. But then the prepared xact will
also have a regular TransactionId, which is also unique. GID exists
for users to specify things, but it is not needed internally and we
don't need to add it here. What we do need is for the commit prepared
message to remember what the xid of the prepare was and then re-find
it using the commit WAL record's twophase_xid field. So we don't need
to add GID to any WAL records, nor to any in-memory structures.

Please re-work the patch to include twophase_xid, which should make
the patch smaller and much faster too.

Please add comments to explain how and why the patches work. Design
comments allow us to check that the design makes sense and, if it does,
whether all the lines in the patch are needed to follow the design.
Without that, patches are much harder to commit, and we all want patches
to be easier to commit.

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] postgres_fdw bug in 9.6

2017-01-04 Thread Etsuro Fujita

On 2017/01/05 13:19, Ashutosh Bapat wrote:

> Hmm. If I understand the patch correctly, it does not return any path
> when merge join is allowed and there are merge clauses but no hash
> clauses. In this case we will not create a foreign join path, losing
> some optimization. If we remove GetExistingLocalJoinPath, which
> returns a path in those cases as well, we have a regression in
> performance.



Ok, will revise, but as I mentioned upthread, I'm not sure it's a good idea
to search the pathlist to get a merge join even in this case.  I'd vote for
creating a merge join path from the inner/outer paths in this case as well.
I think that would simplify the code as well.



> Creating a new path 1. requires memory


The search approach would require memory for saving the path, too.


> 2. spends CPU cycles in costing
> and creating it


The search approach would also need extra cycles in the cases mentioned 
in [1], wouldn't it?  Since it would be useless to cost the 
fdw_outerpath of a foreign join, we could skip that for the 
fdw_outerpath if necessary.



> 3. requires a search in inner and outer relations'
> pathlists (see my earlier reply).


What I'm thinking is basically to use the cheapest-total-cost paths of 
the inner/outer relations, which wouldn't require any search.


Best regards,
Etsuro Fujita

[1] 
https://www.postgresql.org/message-id/c1075e4e-8297-5cf6-3f30-cb21266aa2ee%40lab.ntt.co.jp







Re: [HACKERS] Cache Hash Index meta page.

2017-01-04 Thread Mithun Cy
On Wed, Jan 4, 2017 at 5:21 PM, Mithun Cy  wrote:
I have rebased the patch to fix one compilation warning in
_hash_doinsert, where the variable bucket was used only for an Assert
but was not marked as such.

-- 
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


cache_hash_index_meta_page_10.patch
Description: Binary data



Re: [HACKERS] PoC: Make it possible to disallow WHERE-less UPDATE and DELETE

2017-01-04 Thread David Fetter
On Wed, Jan 04, 2017 at 09:58:26PM -0800, David Fetter wrote:
> On Sun, Jan 01, 2017 at 07:57:33PM +0900, Michael Paquier wrote:
> > On Sun, Jan 1, 2017 at 12:34 PM, David Fetter  wrote:
> > > I've rolled your patches into this next one and clarified the commit
> > > message, as there appears to have been some confusion about the scope.
> > 
> > Is there actually a meaning in having two options? Couldn't we live
> > with just one? Actually, I'd just suggest ripping out the option and
> > making the checks enabled whenever the library is loaded, to get a
> > module as simple as passwordcheck.
> 
> Done.
> 
> > --- /dev/null
> > +++ b/contrib/require_where/data/test_require_where.data
> > @@ -0,0 +1,16 @@
> > +Four
> > There is no need to load fake data as you just need to check the
> > parsing of the query. So let's simplify the tests and remove that.
> 
> Removed.
> 
> > Other than that, the shape of the module is not bad. The copyright
> > notices need to be updated to 2017, and it would be nice to have some
> > comments at the top of require_where.c to describe what it does.
> 
> Please find attached the next version of the patch including Pavel's
> suggestion that I provide some motivation for including those
> Asserts.

Here's one with the commit message included for easier review.

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
commit 6e28e474982a3da5095dab2bfc0d9a3241feda05
Author: David Fetter 
Date:   Thu Jul 21 23:34:21 2016 -0700

require_where: a contrib hook

This adds a process utility hook which makes simple UPDATE and DELETE
statements require a WHERE clause when loaded.

It is not intended to provide a general capability.  Instead, its job is to
prevent common human errors made by people who only rarely use SQL.  The
hook is small enough to be usable as part of a short lesson on hooks.

diff --git a/contrib/Makefile b/contrib/Makefile
index 25263c0..4bd456f 100644
--- a/contrib/Makefile
+++ b/contrib/Makefile
@@ -40,6 +40,7 @@ SUBDIRS = \
pgstattuple \
pg_visibility   \
postgres_fdw\
+   require_where   \
seg \
spi \
tablefunc   \
diff --git a/contrib/require_where/Makefile b/contrib/require_where/Makefile
new file mode 100644
index 000..933eb00
--- /dev/null
+++ b/contrib/require_where/Makefile
@@ -0,0 +1,19 @@
+# contrib/require_where/Makefile
+
+MODULE_big = require_where
+OBJS = require_where.o
+
PGFILEDESC = 'require_where - require simple DELETEs and UPDATEs to have a WHERE clause'
+
+REGRESS = require_where
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS = $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = contrib/require_where
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+include $(top_builddir)/contrib/contrib-global.mk
+endif
diff --git a/contrib/require_where/expected/require_where.out b/contrib/require_where/expected/require_where.out
new file mode 100644
index 000..a36dd1f
--- /dev/null
+++ b/contrib/require_where/expected/require_where.out
@@ -0,0 +1,16 @@
+--
+--   Test require_where
+--
+\set echo all
+CREATE TABLE test_require_where(t TEXT);
+UPDATE test_require_where SET t=t; -- succeeds
+DELETE FROM test_require_where; -- succeeds
+LOAD 'require_where';
+UPDATE test_require_where SET t=t; -- fails
+ERROR:  UPDATE requires a WHERE clause when require_where.delete is set to on
+HINT:  To update all rows, use "WHERE true" or similar.
+UPDATE test_require_where SET t=t WHERE true; -- succeeds
+DELETE FROM test_require_where; -- fails
+ERROR:  DELETE requires a WHERE clause when require_where.delete is set to on
+HINT:  To delete all rows, use "WHERE true" or similar.
+DELETE FROM test_require_where WHERE true; -- succeeds
diff --git a/contrib/require_where/require_where.c b/contrib/require_where/require_where.c
new file mode 100644
index 000..09f2578
--- /dev/null
+++ b/contrib/require_where/require_where.c
@@ -0,0 +1,73 @@
+/*
+ * --
+ *
+ * require_where.c
+ *
+ * Copyright (C) 2017, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ * contrib/require_where/require_where.c
+ *
+ * --
+ */
+#include "postgres.h"
+
+#include "fmgr.h"
+
+#include "parser/analyze.h"
+
+#include "utils/elog.h"
+#include "utils/guc.h"
+
+PG_MODULE_MAGIC;
+
+void   _PG_init(void);
+void   _PG_fini(void);
+
+static post_parse_analyze_hook_type original_post_parse_analyze_hook = NULL;
+
+/*
+ * This module makes 

Re: [HACKERS] PoC: Make it possible to disallow WHERE-less UPDATE and DELETE

2017-01-04 Thread David Fetter
On Sun, Jan 01, 2017 at 07:57:33PM +0900, Michael Paquier wrote:
> On Sun, Jan 1, 2017 at 12:34 PM, David Fetter  wrote:
> > I've rolled your patches into this next one and clarified the commit
> > message, as there appears to have been some confusion about the scope.
> 
> Is there actually a meaning in having two options? Couldn't we live
> with just one? Actually, I'd just suggest ripping out the option and
> making the checks enabled whenever the library is loaded, to get a
> module as simple as passwordcheck.

Done.

> --- /dev/null
> +++ b/contrib/require_where/data/test_require_where.data
> @@ -0,0 +1,16 @@
> +Four
> There is no need to load fake data as you just need to check the
> parsing of the query. So let's simplify the tests and remove that.

Removed.

> Other than that, the shape of the module is not bad. The copyright
> notices need to be updated to 2017, and it would be nice to have some
> comments at the top of require_where.c to describe what it does.

Please find attached the next version of the patch including Pavel's
suggestion that I provide some motivation for including those
Asserts.

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
diff --git a/contrib/Makefile b/contrib/Makefile
index 25263c0..4bd456f 100644
--- a/contrib/Makefile
+++ b/contrib/Makefile
@@ -40,6 +40,7 @@ SUBDIRS = \
pgstattuple \
pg_visibility   \
postgres_fdw\
+   require_where   \
seg \
spi \
tablefunc   \
diff --git a/contrib/require_where/Makefile b/contrib/require_where/Makefile
new file mode 100644
index 000..933eb00
--- /dev/null
+++ b/contrib/require_where/Makefile
@@ -0,0 +1,19 @@
+# contrib/require_where/Makefile
+
+MODULE_big = require_where
+OBJS = require_where.o
+
PGFILEDESC = 'require_where - require simple DELETEs and UPDATEs to have a WHERE clause'
+
+REGRESS = require_where
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS = $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = contrib/require_where
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+include $(top_builddir)/contrib/contrib-global.mk
+endif
diff --git a/contrib/require_where/expected/require_where.out b/contrib/require_where/expected/require_where.out
new file mode 100644
index 000..a36dd1f
--- /dev/null
+++ b/contrib/require_where/expected/require_where.out
@@ -0,0 +1,16 @@
+--
+--   Test require_where
+--
+\set echo all
+CREATE TABLE test_require_where(t TEXT);
+UPDATE test_require_where SET t=t; -- succeeds
+DELETE FROM test_require_where; -- succeeds
+LOAD 'require_where';
+UPDATE test_require_where SET t=t; -- fails
+ERROR:  UPDATE requires a WHERE clause when require_where.delete is set to on
+HINT:  To update all rows, use "WHERE true" or similar.
+UPDATE test_require_where SET t=t WHERE true; -- succeeds
+DELETE FROM test_require_where; -- fails
+ERROR:  DELETE requires a WHERE clause when require_where.delete is set to on
+HINT:  To delete all rows, use "WHERE true" or similar.
+DELETE FROM test_require_where WHERE true; -- succeeds
diff --git a/contrib/require_where/require_where.c b/contrib/require_where/require_where.c
new file mode 100644
index 000..09f2578
--- /dev/null
+++ b/contrib/require_where/require_where.c
@@ -0,0 +1,73 @@
+/*
+ * --
+ *
+ * require_where.c
+ *
+ * Copyright (C) 2017, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ * contrib/require_where/require_where.c
+ *
+ * --
+ */
+#include "postgres.h"
+
+#include "fmgr.h"
+
+#include "parser/analyze.h"
+
+#include "utils/elog.h"
+#include "utils/guc.h"
+
+PG_MODULE_MAGIC;
+
+void   _PG_init(void);
+void   _PG_fini(void);
+
+static post_parse_analyze_hook_type original_post_parse_analyze_hook = NULL;
+
+/*
+ * This module makes simple UPDATE and DELETE statements require a WHERE clause
+ * and complains when this is not present.
+ */
+static void
+require_where_check(ParseState *pstate, Query *query)
+{
+
+   if (query->commandType == CMD_DELETE)
+   {
+   /* Make sure there's something to look at. */
+   Assert(query->jointree != NULL);
+   if (query->jointree->quals == NULL)
+   ereport(ERROR,
+   (errcode(ERRCODE_SYNTAX_ERROR),
+errmsg("DELETE requires a WHERE clause when require_where.delete is set to on"),
+errhint("To delete all rows, use 

Re: [HACKERS] WAL consistency check facility

2017-01-04 Thread Kuntal Ghosh
On Wed, Dec 21, 2016 at 10:53 PM, Robert Haas  wrote:

> On a first read-through of this patch -- I have not studied it in
> detail yet -- this looks pretty good to me.  One concern is that this
> patch adds a bit of code to XLogInsert(), which is a very hot piece of
> code.  Conceivably, that might produce a regression even when this is
> disabled; if so, we'd probably need to make it a build-time option.  I
> hope that's not necessary, because I think it would be great to
> compile this into the server by default, but we better make sure it's
> not a problem.  A bulk load into an existing table might be a good
> test case.
>
I've done some bulk-load testing with 16, 32, and 64 clients. I didn't
notice any regression in the results.

> Aside from that, I think the biggest issue here is that the masking
> functions are virtually free of comments, whereas I think they should
> have extensive and detailed comments.  For example, in heap_mask, you
> have this:
>
> +page_htup->t_infomask =
> +HEAP_XMIN_COMMITTED | HEAP_XMIN_INVALID |
> +HEAP_XMAX_COMMITTED | HEAP_XMAX_INVALID;
>
> For something like this, you could write "We want to ignore
> differences in hint bits, since they can be set by SetHintBits without
> emitting WAL.  Force them all to be set so that we don't notice
> discrepancies."  Actually, though, I think that you could be a bit
> more nuanced here: HEAP_XMIN_COMMITTED + HEAP_XMIN_INVALID =
> HEAP_XMIN_FROZEN, so maybe what you should do is clear
> HEAP_XMAX_COMMITTED and HEAP_XMAX_INVALID but only clear the others if
> one is set but not both.
>
I've modified it as follows:
+
+   if (!HeapTupleHeaderXminFrozen(page_htup))
+   page_htup->t_infomask |= HEAP_XACT_MASK;
+   else
+   page_htup->t_infomask |= HEAP_XMAX_COMMITTED | HEAP_XMAX_INVALID;

> Anyway, leaving that aside, I think every single change that gets
> masked in every single masking routine needs a similar comment,
> explaining why that change can happen on the master without also
> happening on the standby and hopefully referring to the code that
> makes that unlogged change.
>
I've added comments for all the masking routines.

> I think wal_consistency_checking, as proposed by Peter, is better than
> wal_consistency_check, as implemented.
>
Modified to wal_consistency_checking.

> Having StartupXLOG() call pfree() on the masking buffers is a waste of
> code.  The process is going to exit anyway.
>
> + "Inconsistent page found, rel %u/%u/%u, forknum %u, blkno %u",
>
Done.

> Primary error messages aren't capitalized.
>
> +if (!XLogRecGetBlockTag(record, block_id, &rnode, &forknum, &blkno))
> +{
> +/* Caller specified a bogus block_id. Do nothing. */
> +continue;
> +}
>
> Why would the caller do something so dastardly?
>
Modified to following comment:
+   /*
+* WAL record doesn't contain a block reference
+* with the given id. Do nothing.
+*/

I've attached the patch with the modified changes. PFA.

-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com


walconsistency_v16.patch
Description: application/download



Re: [HACKERS] background sessions

2017-01-04 Thread amul sul
On Wed, Jan 4, 2017 at 2:57 PM, Andrew Borodin  wrote:
> 2017-01-04 10:23 GMT+05:00 amul sul :
>> One more query, can we modify
>> BackgroundSessionStart()/BackgroundSession struct to get background
>> worker PID as well?
> I think since a session always has a PID it's absolutely reasonable to
> return a PID.
>
>> I can understand this requirement could sound useless for now,
>> because it is only for the benefit of the pg_background contrib module.
> As far as I can understand, BackgroundSession is not just a feature
> itself, it's the API. So a PID would benefit pg_background and all
> API use cases we haven't implemented yet. I do not think that one PID
> in the structure will waste a huge amount of memory, cycles, dev time,
> readability of docs, clearness of API, etc. AFAIK the only reason
> against may be if the PID is not always there.
>

+1, but to make BackgroundSession members accessible outside of
bgsession.c, we might need to move the BackgroundSession definition to
bgsession.h.

Regards,
Amul Sul




Re: [HACKERS] Logical decoding on standby

2017-01-04 Thread Michael Paquier
On Thu, Jan 5, 2017 at 10:21 AM, Craig Ringer  wrote:
> On 5 January 2017 at 09:19, Craig Ringer  wrote:
>
>> so here's a rebased series on top of master. No other changes.
>
> Now with actual patches.

Looking at the PostgresNode code in 0001...
+=pod $node->pg_recvlogical_upto(self, dbname, slot_name, endpos, timeout_secs, ...)
+
This format is incorrect. I think that you also need to fix the
brackets for the do{} and the eval{} blocks.

+push @cmd, '--endpos', $endpos if ($endpos);
endpos should be made a mandatory argument. If it is not defined, the
test calling this routine would get stuck.
-- 
Michael




Re: [HACKERS] PassDownLimitBound for ForeignScan/CustomScan [take-2]

2017-01-04 Thread Michael Paquier
On Thu, Jan 5, 2017 at 8:58 AM, Kouhei Kaigai  wrote:
> Unfortunately, it was not possible. Probably, administrator privilege will be
> needed for this operation.

Adding a patch to a CF in progress indeed requires administrator privileges.

> May I add it to the CF:2017-03?

I notice that the previous CFM switched this patch to
"returned with feedback" without mentioning the new status on this
thread. Perhaps that was not appropriate.
-- 
Michael




Re: [HACKERS] pg_hba_file_settings view patch

2017-01-04 Thread Michael Paquier
On Thu, Jan 5, 2017 at 5:10 AM, Simon Riggs  wrote:
> On 4 January 2017 at 03:54, Haribabu Kommi  wrote:
>
>> Latest patch is attached.
>
> The "method" column should be called "auth" or "auth_method" or 
> "authentication"
>
> I think we should have some tests, but I'll hear your views on that.
> Perhaps we can include a test/sample pg_hba.conf for use in tests.
>
> Since we've had crashes, I suggest running the test 1 times and
> checking for leaks and crashes.
>
> If it's safe we can move towards commit. Thanks

Could you hold off a bit on committing that? I'd like to look at it in
more detail. At a quick glance, there is for example no need to use
CreateTemplateTupleDesc and list the columns both in pg_proc.h and the
C routine itself. And memset() can be used in fill_hba_line for the
error code path.
-- 
Michael




Re: [HACKERS] [PATCH] PostgresNode.pm enhancements, pg_lsn helper, and some more recovery tests

2017-01-04 Thread Michael Paquier
On Thu, Jan 5, 2017 at 1:58 AM, Simon Riggs  wrote:
> On 4 January 2017 at 08:16, Craig Ringer  wrote:
>> When committed I will update the decoding on standby series to omit
>> these pre-requisite patches.
>
> Committed, thanks.

I was planning a round of reviews of those patches, but you were faster than me.

So now looking at what has been committed.

The perldoc documentation is broken for the new routines. The commits
have added patterns like that:
=pod $node->routine_name
But what needs to be done is to use =pod and =item, like that:
=pod
=item $node->routine_name

+Look up xlog positions on the server:
This had better be "*WAL* positions". There is another reference with
xlog (see recent threads about renaming those functions, for example).

+* insert position (master only, error on replica)
+* write position (master only, error on replica)
+* flush position
+* receive position (always undef on master)
+* replay position
Replay position is always undefined on primary, let's document it in
the description of the routine. And flush position generates an error
for a standby.

The documentation of $node->lsn is generated like that, with a very
unfriendly list of modes:
$node->lsn(mode)
Look up xlog positions on the server:

* insert position (master only, error on replica) * write position
(master only, error on replica) * flush position * receive position
(always undef on master) * replay position
A trick that I have found here is to add a space before the '*'.

It may be a good idea to run perltidy on top of that to be honest...

Attached is a patch to fix all those small issues.
-- 
Michael
diff --git a/src/test/perl/PostgresNode.pm b/src/test/perl/PostgresNode.pm
index 2a4ceb3d42..053c5ea787 100644
--- a/src/test/perl/PostgresNode.pm
+++ b/src/test/perl/PostgresNode.pm
@@ -1324,15 +1324,17 @@ sub run_log
TestLib::run_log(@_);
 }
 
-=pod $node->lsn(mode)
+=pod
+
+=item $node->lsn(mode)
 
-Look up xlog positions on the server:
+Look up WAL positions on the server:
 
-* insert position (master only, error on replica)
-* write position (master only, error on replica)
-* flush position
-* receive position (always undef on master)
-* replay position
+ * insert position (master only, error on replica)
+ * write position (master only, error on replica)
+ * flush position (master only, error on replica)
+ * receive position (always undef on master)
+ * replay position (always undef on master)
 
 mode must be specified.
 
@@ -1363,11 +1365,13 @@ sub lsn
}
 }
 
-=pod $node->wait_for_catchup(standby_name, mode, target_lsn)
+=pod
+
+=item $node->wait_for_catchup(standby_name, mode, target_lsn)
 
 Wait for the node with application_name standby_name (usually from node->name)
 until its replication position in pg_stat_replication equals or passes the
-upstream's xlog insert point at the time this function is called. By default
+upstream's WAL insert point at the time this function is called. By default
 the replay_location is waited for, but 'mode' may be specified to wait for any
 of sent|write|flush|replay.
 
@@ -1401,7 +1405,9 @@ sub wait_for_catchup
print "done\n";
 }
 
-=pod $node->wait_for_slot_catchup(slot_name, mode, target_lsn)
+=pod
+
+=item $node->wait_for_slot_catchup(slot_name, mode, target_lsn)
 
 Wait for the named replication slot to equal or pass the supplied target_lsn.
 The position used is the restart_lsn unless mode is given, in which case it may
@@ -1435,7 +1441,9 @@ sub wait_for_slot_catchup
print "done\n";
 }
 
-=pod $node->query_hash($dbname, $query, @columns)
+=pod
+
+=item $node->query_hash($dbname, $query, @columns)
 
 Execute $query on $dbname, replacing any appearance of the string __COLUMNS__
 within the query with a comma-separated list of @columns.
@@ -1473,7 +1481,9 @@ sub query_hash
return \%val;
 }
 
-=pod $node->slot(slot_name)
+=pod
+
+=item $node->slot(slot_name)
 
 Return hash-ref of replication slot data for the named slot, or a hash-ref with
all values '' if not found. Does not differentiate between null and empty string



Re: [HACKERS] Questionable tag usage

2017-01-04 Thread Tom Lane
Tatsuo Ishii  writes:
> In:
> https://www.postgresql.org/docs/devel/static/runtime-config-file-locations.html
> "Specifies the configuration file for Section 20.2, "User Name Maps"
> user name mapping" looks pretty strange to me because a raw section
> name appears.

Yeah, it's definitely duplicative.  It was less so before the recent
docs-toolchain changeover, because in the old toolchain the <xref>
tag only printed "Section M.N" and didn't include a section title;
see the same page in prior versions.  I'm sure the markup was written
with that in mind.  Not that that makes it good style necessarily.

> Shouldn't we use a link tag instead of the xref tag here? Attached is
> a patch to fix this.

- Specifies the configuration file for
-  user name mapping
- (customarily called pg_ident.conf).
- This parameter can only be set at server start.
+ Specifies the configuration file
+ for user name mapping
+ (customarily called pg_ident.conf).  This parameter can
+ only be set at server start.

Well ... that will read nicely in output formats that have hyperlinks,
but not so well on plain dead trees where the cross-reference is either
invisible or an explicit footnote.  Our typical convention for this sort
of thing has been more like "... file for user name mapping (see <xref ...>)".  That used to expand like

file for user name mapping (see Section 20.2).

and now it expands like

file for user name mapping (see Section 20.2, "User Name Mapping").

In either case the text after "see" is a hotlink if supported.
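For readers not steeped in the docs toolchain, the two styles under discussion look roughly like this in the SGML sources. This is an illustrative sketch; the linkend value is assumed, and element spelling follows DocBook conventions:

```sgml
<!-- xref style: the renderer supplies the text, now expanding to
     "Section 20.2, "User Name Maps"" -->
Specifies the configuration file for user name mapping
(see <xref linkend="auth-username-maps">).

<!-- link style: the author supplies the text, so nothing is duplicated,
     but plain-paper output loses the cross-reference -->
Specifies the configuration file for
<link linkend="auth-username-maps">user name mapping</link>
(customarily called <filename>pg_ident.conf</filename>).
```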

I complained previously that this seems a bit duplicative now,
but didn't get any responses:
https://www.postgresql.org/message-id/31278.1479587695%40sss.pgh.pa.us

You could argue that nobody reads the PG docs on dead trees anymore
and we should embrace the hyperlink style with enthusiasm.  I wouldn't
be against that personally, but there are a lot of places to change if
we decide that parenthetical "(see Section M.N)" hotlinks are passé.

Anyway, bottom line is I'm not terribly excited about fixing just this
one place.  I think we need to decide whether we like the new more-verbose
output for links.  If we don't, we need to fix the markup rules to not do
that.  If we do, there are a lot of places that need adjustment to be less
duplicative, and we should try to be somewhat systematic about fixing
them.

regards, tom lane




Re: [HACKERS] ALTER TABLE parent SET WITHOUT OIDS and the oid column

2017-01-04 Thread Amit Langote
On 2017/01/05 8:05, Tom Lane wrote:
> Ashutosh Bapat  writes:
>> Right. But I think it's better to use attribute id, in case the code
>> raising this error changes for any reason in future.
> 
> I agree.  The parent's "tdhasoid" flag is definitely based on the
> existence of an ObjectIdAttributeNumber system column, not on whether the
> column's name is "oid".  So doing a lookup by name to find the matching
> child column is just weird, and cannot possibly lead to anything good.

You beat me to revising the patch along the lines suggested by Ashutosh.

>> The code updating attinhcount and then updating the catalogs is same
>> for user defined attributes and OID. Should we separate it out into a
>> function and use that function instead of duplicating the code?
> 
> Didn't really seem worth the trouble ... maybe if it gets any longer
> it'd be appropriate to do that.
> 
>> Your test uses tablenames starting with "_". I have not seen that
>> style in the testcases. Is it intentional?
> 
> Yeah, I did not like that either.
> 
> Pushed with those corrections and some further fooling with the test case.

Thanks for reviewing and committing the patch!

Regards,
Amit






Re: [HACKERS] Reporting planning time with EXPLAIN

2017-01-04 Thread Ashutosh Bapat
On Wed, Jan 4, 2017 at 8:24 PM, Stephen Frost  wrote:
> Ashutosh,
>
> I realize you were replying to yourself, but you probably didn't need to
> include the entire thread below or to top-post.

Sorry, that was unintentional.

>
> * Ashutosh Bapat (ashutosh.ba...@enterprisedb.com) wrote:
>> 1. pg_explain_plan_time_v3 adds SUMMARY option which behaves as:
>> SUMMARY when ON prints planning time. With ANALYZE ON, it also prints
>> execution time. When user explicitly uses SUMMARY OFF, it does not
>> print planning and execution time (even when ANALYZE is ON). By
>> default SUMMARY is ON when ANALYZE is ON, otherwise SUMMARY defaults
>> to OFF. Attached explain_summary_examples.out.txt shows examples.
>
> Right, this is how I had been thinking 'summary' would behave.
>
>> 2. explain_exec_timing adds support to print planning time in EXPLAIN
>> EXECUTE output with SUMMARY option. In this case, planning time
>> includes time required to fetch the plan from cache and plan the query
>> if necessary (i.e. after invalidation or the first time it's
>> executed.) E.g.
>
> Ok.
>
> diff --git a/doc/src/sgml/ref/explain.sgml b/doc/src/sgml/ref/explain.sgml
> [...]
> +  Include planning time and execution time. Execution time is included
> +  when ANALYZE is enabled.
> +  It defaults to TRUE when ANALYZE
> +  is enabled. Otherwise it defaults to FALSE.
>
> This seems to be part of the concern regarding the naming.  I would
> reword this along these lines:
>
> Include summary information (eg: totalled timing information) after the
> query plan.  Summary information is included by default when ANALYZE is
> used.  Without ANALYZE, summary information is not included by default
> but can be enabled using this option.
>
> (that's not great, but hopefully it gets the point across)

I think it's better than mine, which was "overfitting", if we allow
some machine-learning terminology here :).

>
> The code changes look alright on a cursory look, but we need to hammer
> down if we agree on this term or if we need to invent something else.

Agreed. Will wait for consensus.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] postgres_fdw bug in 9.6

2017-01-04 Thread Ashutosh Bapat
>
>> Hmm. If I understand the patch correctly, it does not return any path
>> when merge join is allowed and there are merge clauses but no hash
>> clauses. In this case we will not create a foreign join path, losing
>> some optimization. If we remove GetExistingLocalJoinPath, which
>> returns a path in those cases as well, we have a regression in
>> performance.
>
>
> Ok, will revise, but as I mentioned upthread, I'm not sure it's a good idea
> to search the pathlist to get a merge join even in this case.  I'd vote for
> creating a merge join path from the inner/outer paths in this case as well.
> I think that would simplify the code as well.

Creating a new path 1. requires memory, 2. spends CPU cycles in costing
and creating it, and 3. requires a search in the inner and outer
relations' pathlists (see my earlier reply). Searching for an existing
path just requires a search in one relation's pathlist, and the path
should be there. Why would we spend extra resources creating a new path
when an old one exists and searching for it is more efficient?

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] postgres_fdw bug in 9.6

2017-01-04 Thread Ashutosh Bapat
On Thu, Jan 5, 2017 at 8:20 AM, Etsuro Fujita
 wrote:
> On 2016/12/27 16:41, Etsuro Fujita wrote:
>>
>> On 2016/12/22 1:04, Ashutosh Bapat wrote:
>>>
>>> 2. We should try to look for other not-so-cheap paths if the cheapest
>>> one is
>>> parameterized. You might want to use get_cheapest_path_for_pathkeys()
>>> to find a
>>> suitable unparameterized path by passing NULL for required_outer and
>>> NIL for
>>> pathkeys, that's a very strange usage, but I think it will serve the
>>> purpose.
>
>
>>> +/* Give up if the cheapest-total-cost paths are parameterized. */
>>> +if (!bms_is_empty(PATH_REQ_OUTER(outer_path)) ||
>>> +!bms_is_empty(PATH_REQ_OUTER(inner_path)))
>>> +return NULL;
>
>
>> I did that because I think that would work well for postgres_fdw, but I
>> agree with you.  Will revise.
>
>
> While working on this, I noticed that in that case
> get_cheapest_path_for_pathkeys() would return NULL because if the
> cheapest-total-cost path is parameterized, then there are no unparameterized
> paths in the rel's pathlist (see set_cheapest).

I guess that happens when there are lateral references, and every
path for that relation is parameterized. Please correct me if I am
wrong. If that's true, we should be searching for the minimum
parameterization for that relation instead of no parameterization.
IIUC the bit about outer relations in the following comment in
add_path()

 *We also remove from the rel's pathlist any old paths that are dominated
 *by new_path --- that is, new_path is cheaper, at least as well ordered,
 *generates no more rows, requires no outer rels not required by the old
 *path, and is no less parallel-safe.

we do not discard an unparameterized path in favour of a cheaper
parameterized path. So, if the cheapest path is a parameterized one,
it's parameterized by the minimum required outer relations.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS]

2017-01-04 Thread Ashutosh Bapat
On Thu, Jan 5, 2017 at 9:40 AM, Ashutosh Bapat
 wrote:
>>
>>
>>> Hmm. If I understand the patch correctly, it does not return any path
>>> when merge join is allowed and there are merge clauses but no hash
>>> clauses. In this case we will not create a foreign join path, losing
>>> some optimization. If we remove GetExistingLocalJoinPath, which
>>> returns a path in those cases as well, we have a regression in
>>> performance.
>>
>>
>> Ok, will revise, but as I mentioned upthread, I'm not sure it's a good idea
>> to search the pathlist to get a merge join even in this case.  I'd vote for
>> creating a merge join path from the inner/outer paths in this case as well.
>> I think that would simplify the code as well.
>
> Creating a new path requires 1. memory, 2. a search in the inner
> and outer relations' pathlists (see my reply to your objection about
> unparameterized paths), and 3. CPU cycles for costing the path as
> well as creating it. Searching for an existing path requires a search
> in only one relation's pathlist. The path should be there, so we don't
> need to construct a new one.


The subject was removed from this reply for reasons unknown to me.
Will reply again on the relevant thread. Sorry for the inconvenience.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




[HACKERS]

2017-01-04 Thread Ashutosh Bapat
>
>
>> Hmm. If I understand the patch correctly, it does not return any path
>> when merge join is allowed and there are merge clauses but no hash
>> clauses. In this case we will not create a foreign join path, losing
>> some optimization. If we remove GetExistingLocalJoinPath, which
>> returns a path in those cases as well, we have a regression in
>> performance.
>
>
> Ok, will revise, but as I mentioned upthread, I'm not sure it's a good idea
> to search the pathlist to get a merge join even in this case.  I'd vote for
> creating a merge join path from the inner/outer paths in this case as well.
> I think that would simplify the code as well.

Creating a new path requires 1. memory, 2. a search in the inner
and outer relations' pathlists (see my reply to your objection about
unparameterized paths), and 3. CPU cycles for costing the path as
well as creating it. Searching for an existing path requires a search
in only one relation's pathlist. The path should be there, so we don't
need to construct a new one.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] increasing the default WAL segment size

2017-01-04 Thread David Rowley
On 4 January 2017 at 01:16, Michael Paquier  wrote:
> On Tue, Jan 3, 2017 at 6:23 AM, Jim Nasby  wrote:
>> +   /* Check if wal_segment_size is in the power of 2 */
>> +   for (i = 0;; i++, pow2 = pow(2, i))
>> +   if (pow2 >= wal_segment_size)
>> +   break;
>> +
>> +   if (wal_segment_size != 1 && pow2 > wal_segment_size)
>> +   {
>> +   fprintf(stderr, _("%s: WAL segment size must be in the power of 2\n"), progname);
>> +   exit(1);
>> +   }
>
> I recall that pow(x, 2) and x * x usually result in the same assembly
> code, but pow() can never be more optimal than a simple
> multiplication. So I'd think that it is wiser to avoid it in this code
> path. Documentation is missing for the new replication command
> SHOW_WAL_SEG. Actually, why not just have an equivalent of the SQL
> command and be able to query parameter values?

This would probably be nicer written using a bitwise trick to ensure
that no less-significant bits are set. If it's a power of 2, then
subtracting 1 should set all the less-significant bits to 1, so
binary ANDing with that should give 0, i.e. no common bits.

Something like:

/* ensure segment size is a power of 2 */
if ((wal_segment_size & (wal_segment_size - 1)) != 0)
{
    fprintf(stderr, _("%s: WAL segment size must be a power of 2\n"), progname);
    exit(1);
}

There's a similar trick in bitmapset.c for RIGHTMOST_ONE, so it looks
like we already have assumptions about two's complement arithmetic.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] ALTER TABLE parent SET WITHOUT OIDS and the oid column

2017-01-04 Thread Ashutosh Bapat
>
>> The code updating attinhcount and then updating the catalogs is same
>> for user defined attributes and OID. Should we separate it out into a
>> function and use that function instead of duplicating the code?
>
> Didn't really seem worth the trouble ... maybe if it gets any longer
> it'd be appropriate to do that.

Ok.

>
> Pushed with those corrections and some further fooling with the test case.
>

Thanks.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] postgres_fdw bug in 9.6

2017-01-04 Thread Etsuro Fujita

On 2016/12/28 17:34, Ashutosh Bapat wrote:

On Wed, Dec 28, 2016 at 1:29 PM, Etsuro Fujita
 wrote:

On 2016/12/28 15:54, Ashutosh Bapat wrote:

On Wed, Dec 28, 2016 at 9:20 AM, Etsuro Fujita
 wrote:

On 2016/12/27 22:03, Ashutosh Bapat wrote:

If mergejoin_allowed is true and mergeclauselist is non-NIL but
hashclauselist is NIL (that's rare, but there can be types that have merge
operators but not hash operators), we will end up returning NULL. I
think we want to create a merge join in that case. I think the order
of conditions should be 1. check hashclause_list then create hash join
2. else check if merge allowed, create merge join. It looks like that
would cover all the cases, if there aren't any hash clauses, and also
no merge clauses, we won't be able to implement a FULL join, so it
will get rejected during path creation itself.



Right, maybe we can do that by doing similar things as in
match_unsorted_outer
and/or sort_inner_and_outer.  But as you mentioned, the case is rare, so
the
problem would be whether it's worth complicating the code (and if it's
worth, whether we should do that at the first version of the function).



All I am requesting is changing the order of conditions. That doesn't
complicate the code.



I might have misunderstood your words, but you are saying we should consider
mergejoin paths with some mergeclauses in the case where hashclauses is NIL,
right?  To do so, we would need to consider the sort orders of outer/inner
paths, which would make the code complicated.



Hmm. If I understand the patch correctly, it does not return any path
when merge join is allowed and there are merge clauses but no hash
clauses. In this case we will not create a foreign join path, losing
some optimization. If we remove GetExistingLocalJoinPath, which
returns a path in those cases as well, we have a regression in
performance.


Ok, will revise, but as I mentioned upthread, I'm not sure it's a good 
idea to search the pathlist to get a merge join even in this case.  I'd 
vote for creating a merge join path from the inner/outer paths in this 
case as well.  I think that would simplify the code as well.


Best regards,
Etsuro Fujita






Re: [HACKERS] Supporting huge pages on Windows

2017-01-04 Thread Tsunakawa, Takayuki
From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Magnus Hagander
> For the pg_ctl changes, we're going from removing all privilieges from the
> token, to removing none. Are there any other privileges that we should be
> worried about? I think you may be correct in that it's overkill to do it,
> but I think we need some more specifics to decide that.

This page lists the privileges.  Is there anything you are concerned about?

https://msdn.microsoft.com/ja-jp/library/windows/desktop/bb530716(v=vs.85).aspx



> Also, what happens with privileges that were granted to the groups that
> were removed? Are they retained or lost?

Are you referring to the privileges of Administrators and PowerUsers that 
pg_ctl removes?  They are lost.  The Windows user account who actually runs 
PostgreSQL needs SeLockMemory privilege.


> Should we perhaps consider having pg_ctl instead *disable* all the
> privileges (rather than removing them), using AdjustTokenPrivileges? As
> a middle ground?

Sorry, I may misunderstand what you are suggesting, but AdjustTokenPrivileges() 
cannot enable a privilege which is not assigned to the user.  Anyway, I think 
it's the user's responsibility (and freedom) to assign desired privileges, and 
pg_ctl's disabling all privileges is overkill.


> +errdetail("Failed system call was %s, error code %lu",
> "LookupPrivilegeValue", GetLastError())));
> 
> this seems to be a new pattern of code -- for other similar cases it just
> writes LookupPrivilegeValue inside the patch itself. I'm guessing the idea
> was for translatability, but I think it's better we stick to the existing
> pattern.

OK, modified.

> When AdjustTokenPrivileges() returns, you explicitly check for
> ERROR_NOT_ALL_ASSIGNED, which is good. But we should probably also
> explicitly check for ERROR_SUCCESS for that case. Right now that's the only
> two possible options that can be returned, but in a future version other
> result codes could be added and we'd miss them. Basically, "there should
> be an else branch".

OK, modified.

> Is there a reason the error messages for AdjustTokenPrivileges() returning
> false and ERROR_NOT_ALL_ASSIGNED is different?

As mentioned in the following page, the error cause is clearly defined.  So, I 
thought it'd be better to give a specific hint message to help users 
troubleshoot the error.


https://msdn.microsoft.com/ja-jp/library/windows/desktop/aa375202(v=vs.85).aspx

ERROR_NOT_ALL_ASSIGNED 
The token does not have one or more of the privileges specified in the NewState 
parameter. The function may succeed with this error value even if no privileges 
were adjusted. The PreviousState parameter indicates the privileges that were 
adjusted.


> There are three repeated blocks of
> +   if (huge_pages == HUGE_PAGES_ON || huge_pages == HUGE_PAGES_TRY)
> 
> It threw me off in the initial reading, until I realized the upper levels
> of them can change the value of huge_pages.

OK, I like your code.


> I don't think changing the global variable huge_pages like that is a very
> good idea.
 
Yes, actually, I was afraid that it might be controversial to change the GUC 
value.  But I thought it may be better for "SHOW huge_pages" to reflect whether 
the huge_pages feature is effective.  Otherwise, users don't know about that.  
For example, wal_buffers behaves similarly; if it's set to -1 (default), "SHOW 
wal_buffers" displays the actual wal buffer size, not -1.  What do you think?  
Surely, the Linux code for huge_pages doesn't do that.  I'm OK with either.


From: Amit Kapila [mailto:amit.kapil...@gmail.com]
> Your version of the patch looks better than the previous one.  Don't you
> need to consider MEM_LARGE_PAGES in VirtualAllocEx call (refer
> pgwin32_ReserveSharedMemoryRegion)?  At least that is what is mentioned
> in MSDN [1].  Another point worth considering is that currently for
> VirtualAllocEx() we use PAGE_READWRITE as flProtect value, shouldn't it
> be same as used in CreateFileMapping() by patch.
> 
> 
> [1] -
> https://msdn.microsoft.com/en-us/library/windows/desktop/aa366720(v=vs
> .85).aspx

No, it's not necessary.  Please see PGSharedMemoryReAttach(), where 
VirtualFree() is called to free the reserved address space and then call 
MapViewOfFile() to allocate the already created shared memory to that area.

Regards
Takayuki Tsunakawa




win_large_pages_v4.patch
Description: win_large_pages_v4.patch



Re: [HACKERS] HASH_CHUNK_SIZE vs malloc rounding

2017-01-04 Thread Thomas Munro
On Tue, Nov 29, 2016 at 6:27 AM, Tom Lane  wrote:
> Thomas Munro  writes:
>> I bet other allocators also do badly with "32KB plus a smidgen".  To
>> minimise overhead we'd probably need to try to arrange for exactly
>> 32KB (or some other power of 2 or at least factor of common page/chunk
>> size?) to arrive into malloc, which means accounting for both
>> nodeHash.c's header and aset.c's headers in nodeHash.c, which seems a
>> bit horrible.  It may not be worth doing anything about.
>
> Yeah, the other problem is that without a lot more knowledge of the
> specific allocator, we shouldn't really assume that it's good or bad with
> an exact-power-of-2 request --- it might well have its own overhead.
> It is an issue though, and not only in nodeHash.c.  I'm pretty sure that
> StringInfo also makes exact-power-of-2 requests for no essential reason,
> and there are probably many other places.

Right, enlargeStringInfo doubles the size whenever it runs out, a
common technique.  Aside from the "plus a smidgen" thing mentioned
above, there is something else interesting about that:  Andrew Koenig
wrote a widely referenced comp.lang.c++.moderated post[1] about why
you should probably use a growth factor of 1.5, and the topic comes up
from time to time in standard library implementation discussions[2].
I have no idea whether it really matters in reality and how the space
vs time trade off works with whatever actual string growth patterns
someone might be optimising for, but it's fun to think about...

[1] 
https://groups.google.com/forum/#!msg/comp.lang.c++.moderated/asH_VojWKJw/Lo51JEmLVboJ
[2] https://gcc.gnu.org/ml/libstdc++/2013-03/msg00058.html

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] postgres_fdw bug in 9.6

2017-01-04 Thread Etsuro Fujita

On 2016/12/27 22:03, Ashutosh Bapat wrote:

You wrote:

3. Talking about saving some CPU cycles - if a clauseless full join can be
implemented only using merge join, probably that's the only path available
in
the list of paths for the given relation. Instead of building the same
path
anew, should we use the existing path like GetExistingLocalJoinPath() for
that
case?


I wrote:

Hm, that might be an idea, but my concern about that is: the existing path
wouldn't always be guaranteed to be unparameterized.



Why? We retain all the parameterizations (including no
parameterization) available in the pathlist, so if it's possible to
create an unparameterized path, there will be one.


OK, but another concern is: in cases when we consider parameterized 
paths, it might be inefficient to search the pathlist because the 
unparameterized path might be near the rear of a lengthy pathlist.  Note 
that parameterized paths would produce fewer rows and have reduced 
transfer and hence total cost, so they would be nearer the front of the 
pathlist, and the unparameterized path nearer the rear.  (Note that the 
pathlist is kept sorted by total cost.)


Best regards,
Etsuro Fujita






Re: [HACKERS] postgres_fdw bug in 9.6

2017-01-04 Thread Etsuro Fujita

On 2016/12/27 16:41, Etsuro Fujita wrote:

On 2016/12/22 1:04, Ashutosh Bapat wrote:

2. We should try to look for other not-so-cheap paths if the cheapest
one is
parameterized. You might want to use get_cheapest_path_for_pathkeys()
to find a
suitable unparameterized path by passing NULL for required_outer and
NIL for
pathkeys, that's a very strange usage, but I think it will serve the
purpose.



+/* Give up if the cheapest-total-cost paths are parameterized. */
+if (!bms_is_empty(PATH_REQ_OUTER(outer_path)) ||
+!bms_is_empty(PATH_REQ_OUTER(inner_path)))
+return NULL;



I did that because I think that would work well for postgres_fdw, but I
agree with you.  Will revise.


While working on this, I noticed that in that case 
get_cheapest_path_for_pathkeys() would return NULL because if the 
cheapest-total-cost path is parameterized, then there are no 
unparameterized paths in the rel's pathlist (see set_cheapest).


Best regards,
Etsuro Fujita






Re: [HACKERS] Logical decoding on standby

2017-01-04 Thread Craig Ringer
On 4 January 2017 at 16:19, Craig Ringer  wrote:
> On 4 January 2017 at 12:15, Craig Ringer  wrote:
>
>> That's particularly relevant to you Simon as you expressed a wish to
>> commit the new streaming rep tests.

Simon committed 1, 2, 3 and 5:

* Extra PostgresNode methods
* pg_recvlogical --endpos
* Tests for pg_recvlogical
* Expand streaming replication tests to cover hot standby

so here's a rebased series on top of master. No other changes.

The first patch to add a pg_recvlogical wrapper to PostgresNode is
really only needed to test the rest of the patches, so it can be
committed together with them.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Replication/backup defaults

2017-01-04 Thread Tomas Vondra

On 01/03/2017 11:56 PM, Tomas Vondra wrote:

Hi,


...

I'll push results for larger ones once those tests complete (possibly
tomorrow).



I just pushed additional results (from the additional scales) to the git 
repositories. On the larger (16/32-cores) machine with 2x e5-2620, the 
results look like this


   scale    minimal    replica    logical
  ---------------------------------------
   100        23968      24393      24393
   1000       23412      23656      23794
   10000       5283       5320       5197
and on the smaller one (i5-2500k with 4 cores) I got this:

   scale    minimal    replica    logical
  ---------------------------------------
   50          5884       5896       5873
   400         5324       5342       5478
   1000        5341       5439       5425

The scales were chosen so that the smallest one fits into shared 
buffers, the medium exceeds shared buffers but still fits into RAM, and 
the largest scale exceeds RAM.


The results seem to confirm that for this workload (regular pgbench), 
there's very little difference between the different WAL levels. 
Actually, 'replica' seems a tad faster than 'minimal', but the 
difference may easily be due to noise.


I've also looked at the amount of WAL actually produced, by doing 
pgbench runs throttled to the same throughput, and counting the number 
of archived WAL segments & running pg_xlogdump. Interestingly enough, 
those two metrics differ quite a bit - for example for scale 1000 (on 
the 32-core machine), the 2h runs produced these number of WAL segments:


   minimal: 5515 (88.2GB)
   replica: 5587 (89.4GB)
   logical: 6058 (96.9GB)

so 'replica' adds ~1.3% and 'logical' ~9.8%. But per pg_xlogdump, the 
WAL amounts are only 73.3GB, 73.9GB and 74.4GB - a difference of only 
~1.5% between minimal and logical. The values are also much lower than 
raw WAL size, so I assume it's because pg_xlogdump ignores some extra 
overhead, present in the segments. Moreover, the sequential nature of 
WAL writes means even the +10% is not a big deal (unless it results in 
saturating the bandwidth, but running at >90% is a bad idea anyway).


My conclusion from these results is that using 'wal_level=replica' by 
default seems fine. Perhaps even wal_level=logical would be OK, but 
that's probably a too big step for 10.0.


Any ideas how to construct a plausible workload where the differences 
are significantly larger? Running the tests on non-SSD storage might 
also be useful.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] increasing the default WAL segment size

2017-01-04 Thread Michael Paquier
On Thu, Jan 5, 2017 at 12:33 AM, Robert Haas  wrote:
> On Wed, Jan 4, 2017 at 9:47 AM, Simon Riggs  wrote:
>> On 4 January 2017 at 13:57, Robert Haas  wrote:
>>> On Wed, Jan 4, 2017 at 3:05 AM, Simon Riggs  wrote:
 Strange response. Nothing has been assumed. I asked for tests and you
 provided measurements.
>>>
>>> Sure, of zero-filling a file with dd.  But I also pointed out that in
>>> a real PostgreSQL cluster, the change could actually *reduce* latency.
>>
>> I think we are talking at cross purposes. We agree that the main
>> change is useful, but it causes another problem which I can't see how
>> you can characterize as reduced latency, based upon your own
>> measurements.
>
> Zero-filling files will take longer if the files are bigger.  That
> will increase latency.  But we will also have fewer forced
> end-of-segment syncs.  That will reduce latency.  Which effect is
> bigger?

It depends on whether the environment is CPU-bound or I/O-bound. If the
CPU is at its limit, zero-filling takes longer. If it's the I/O, fsync()
would take longer to complete.
-- 
Michael




Re: [HACKERS] PassDownLimitBound for ForeignScan/CustomScan [take-2]

2017-01-04 Thread Kouhei Kaigai



> -Original Message-
> From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Robert Haas
> Sent: Thursday, January 05, 2017 5:29 AM
> To: Kohei KaiGai 
> Cc: Kaigai Kouhei(海外 浩平) ; Jeevan Chalke
> ; pgsql-hackers@postgresql.org; Etsuro
> Fujita ; Andres Freund 
> Subject: Re: [HACKERS] PassDownLimitBound for ForeignScan/CustomScan
> [take-2]
> 
> On Mon, Jan 2, 2017 at 10:07 PM, Kohei KaiGai  wrote:
> > Oops, I overlooked that this patch was marked as "returned with feedback",
> > not "moved to the next CF".
> >
> > Its status has not been changed since the last update. (Code was
> > revised according to the last comment by Jeevan, but CF-Nov was time
> > up at that time.)
> >
> > How do I handle the patch?
> 
> Can you just change the status to "Moved to Next CF" and then make it "Needs
> Review"?
> 
Unfortunately, that was not possible; administrator privileges are
probably needed for that operation.
May I add it to the CF:2017-03?

PG-Strom Project / NEC OSS Promotion Center
KaiGai Kohei 




[HACKERS] Questionable tag usage

2017-01-04 Thread Tatsuo Ishii
In:
https://www.postgresql.org/docs/devel/static/runtime-config-file-locations.html

---
ident_file (string)

Specifies the configuration file for Section 20.2, “User Name Maps” user 
name mapping (customarily called pg_ident.conf). This parameter can only be set 
at server start.
---

"Specifies the configuration file for Section 20.2, “User Name Maps”
user name mapping" looks pretty strange to me because a raw section
name appears. This is due to the corresponding SGML coding:

   <para>
     Specifies the configuration file for
     <xref linkend="auth-username-maps"> user name mapping
     (customarily called <filename>pg_ident.conf</filename>).
     This parameter can only be set at server start.
   </para>

Shouldn't we use a link tag instead of the xref tag here? Attached is
a patch to fix this.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 30dd54c..1707d40 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -532,10 +532,10 @@ include_dir 'conf.d'
       </term>
       <listitem>
        <para>
-        Specifies the configuration file for
-        <xref linkend="auth-username-maps"> user name mapping
-        (customarily called <filename>pg_ident.conf</filename>).
-        This parameter can only be set at server start.
+        Specifies the configuration file
+        for <link linkend="auth-username-maps">user name mapping</link>
+        (customarily called <filename>pg_ident.conf</filename>).  This parameter can
+        only be set at server start.
        </para>
       </listitem>
      </varlistentry>



Re: [HACKERS] identity columns

2017-01-04 Thread Vitaly Burovoy
Hello, Peter,

I apologize for a silence since the last CF.
I've tested your last patch and have several nitpickings:


1. The fact that COPY ignores the GENERATED ALWAYS constraint (treating
it as GENERATED BY DEFAULT) should be documented, as should the
behavior with rules.

2. Usually error message for identical columns (with LIKE clause)
looks like this:
test=# CREATE TABLE def(i serial, n1 int NOT NULL, n2 int);
CREATE TABLE
test=# CREATE TABLE test(i serial, LIKE def INCLUDING ALL);
ERROR:  column "i" specified more than once

but for generated columns it is disorienting enough:
test=# CREATE TABLE idnt(i int GENERATED BY DEFAULT AS IDENTITY);
CREATE TABLE
test=# CREATE TABLE test(i int GENERATED BY DEFAULT AS IDENTITY, LIKE
idnt INCLUDING ALL);
ERROR:  relation "test_i_seq" already exists


3. Strange error (not about absence of a column; but see pp.5 and 8):
test=# ALTER TABLE idnt ALTER COLUMN o ADD GENERATED ALWAYS AS IDENTITY;
ERROR:  identity column type must be smallint, integer, or bigint


4. Altering type leads to an error:
test=# ALTER TABLE idnt ALTER COLUMN i TYPE bigint;
ERROR:  cannot alter data type of identity column "i"

it is understandable for non-integers. But why do you forbid changing
to the supported types?


5. Attempt to make a column be identity fails:
test=# ALTER TABLE idnt ALTER COLUMN n1 SET GENERATED BY DEFAULT;
ERROR:  column "n1" of relation "idnt" is not an identity column

As I understand it, the Spec, chapter 11.20 General Rule 2)b), speaks
about the final state of a column without mentioning the initial
state.
Therefore even if the column's initial state is "not generated",
after the command it changes to either "GENERATED ALWAYS" or
"GENERATED BY DEFAULT".


6. The docs mention a syntax:
ALTER [ COLUMN ] column_name { SET GENERATED { ALWAYS | BY DEFAULT } | SET sequence_option | RESET } [...]

but the command fails:
test=# ALTER TABLE idnt ALTER COLUMN i RESET;
ERROR:  syntax error at or near ";"
LINE 1: ALTER TABLE idnt ALTER COLUMN i RESET;


7. (the same line of the doc)
Usually an ellipsis appears inside parentheses when clauses can be
repeated; in other words, it should be written as:
ALTER  column_name ( { SET GENERATED { ALWAYS | BY DEFAULT } |  } [...] )


8. To avoid unnecessary syntax "ALTER COLUMN ... ADD GENERATED ..."
and use existing syntax "ALTER COLUMN ... SET GENERATED ..." you can
just make the "SET" word optional as a PG extension. I.e. instead
of:
test=# ALTER TABLE idnt ALTER i SET GENERATED ALWAYS SET INCREMENT BY 4;

you can write:
test=# ALTER TABLE idnt ALTER i SET GENERATED ALWAYS INCREMENT BY 4;
test=# ALTER TABLE idnt ALTER n1 SET GENERATED ALWAYS INCREMENT BY 4;
-- which sets identity constraint - see p.5

which is very similar to your extended syntax:
test=# ALTER TABLE idnt ALTER n1 ADD GENERATED ALWAYS AS IDENTITY
(INCREMENT BY 4);


9. The command fails:
test=# ALTER TABLE idnt ALTER n2 ADD GENERATED ALWAYS AS IDENTITY;
ERROR:  column "n2" of relation "idnt" must be declared NOT NULL
before identity can be added

whereas the Spec does not contain a requirement for a column to be
NOT NULL.
You can implicitly set a column as NOT NULL (as the "serial" macro
does), but not require it later.
Moreover you can get a NULLABLE identity column by:
test=# ALTER TABLE idnt ALTER COLUMN i DROP NOT NULL;
ALTER TABLE
test=# \d idnt
  Table "public.idnt"
 Column |  Type   | Collation | Nullable |           Default
--------+---------+-----------+----------+------------------------------
 i      | integer |           |          | generated always as identity
 n1     | integer |           | not null |
 n2     | integer |           |          |


10. Inherited tables are not allowed at all (even with explicit
identity columns):
test=# CREATE TABLE inh() INHERITS (idnt);
ERROR:  cannot inherit from table with identity column
test=# CREATE TABLE inh(i int GENERATED BY DEFAULT AS IDENTITY) INHERITS (idnt);
ERROR:  cannot inherit from table with identity column

What's the point of forbidding tables inherited from one with identity column?
It makes identity columns be useless for big tables.
Just declare the identity constraint as non-inherited, as are some
other constraints of the same kind (PK, FK, UNIQUE) - which identity
essentially is...
Also see p.11


11. The last CF added partitioned tables which are similar to
inherited, but slightly different.
Slightly changed example from [1] (identity column added):
test=# CREATE TABLE measurement (
test(# i   int GENERATED BY DEFAULT AS IDENTITY,
test(# logdate date not null,
test(# peaktemp    int,
test(# unitsales   int
test(# ) PARTITION BY RANGE (logdate);
CREATE TABLE
test=# CREATE TABLE measurement_y2016m07
test-# PARTITION OF measurement (
test(# unitsales DEFAULT 0
test(# ) FOR VALUES FROM ('2016-07-01') TO ('2016-08-01');
ERROR:  cannot inherit from table with identity column
test=# INSERT INTO measurement(logdate) VALUES('2016-07-10');
ERROR:  no 

Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Craig Ringer
On 5 January 2017 at 08:35, Craig Ringer  wrote:
> On 5 January 2017 at 01:49, Fabien COELHO  wrote:
>>
>>> ok understand
>>
>>
>> Good. So we seem to agree that GUCS are transactional?
>
> No. We don't agree. They aren't.

Uh. I take that back.

craig=> SET x.s = 'x';
SET
craig=> BEGIN;
BEGIN
craig=> SET x.s = 'y';
SET
craig=> ROLLBACK;
ROLLBACK
craig=> SHOW x.s;
 x.s
-----
 x
(1 row)


I'm surprised, I never knew this.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Craig Ringer
On 5 January 2017 at 01:49, Fabien COELHO  wrote:
>
>> ok understand
>
>
> Good. So we seem to agree that GUCS are transactional?

No. We don't agree. They aren't.

The effects of SET LOCAL are reverted whether you commit or rollback.

The effects of SET SESSION are never reverted, whether you commit or roll back.

craig=> SET x.s = 'x';
SET
craig=> BEGIN;
BEGIN
craig=> SET LOCAL x.s = 'y';
SET
craig=> COMMIT;
COMMIT
craig=> SHOW x.s;
 x.s
-----
 x
(1 row)

There are simply different scopes, one of which is the transaction scope.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] macaddr 64 bit (EUI-64) datatype support

2017-01-04 Thread Haribabu Kommi
On Tue, Nov 29, 2016 at 8:36 PM, Haribabu Kommi 
wrote:

>
>
> On Sat, Nov 26, 2016 at 4:48 PM, Tom Lane  wrote:
>
>> Haribabu Kommi  writes:
>> > Currently the casting is supported from macaddr to macaddr8, but not the
>> > other way. This is because, not all 8 byte MAC addresses can be
>> converted
>> > into 6 byte addresses.
>>
>> Well, yeah, so you'd throw an error if it can't be converted.  This is
>> no different from casting int8 to int4, for example.  We don't refuse
>> to provide that cast just because it will fail for some values.
>>
>
> Updated patch attached with added cast function from macaddr8 to
> macaddr.
>
> Currently there are no support for cross operators. Is this required
> to be this patch only or can be handled later if required?
>

Updated patch attached to address the duplicate OID problem.
There are no other functional changes to the previous patch.
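
As context for the cast semantics discussed above, the EUI-48 <-> EUI-64
mapping can be sketched roughly as follows (hypothetical helper names,
not taken from the patch): widening always succeeds by inserting FF:FE
in the middle, while narrowing must fail for values with no 6-byte
representation, much like casting int8 to int4.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Widen a 6-byte MAC (EUI-48) to EUI-64 by inserting FF:FE in the
 * middle; this direction always succeeds. */
void
mac6_to_mac8(const uint8_t in[6], uint8_t out[8])
{
	out[0] = in[0]; out[1] = in[1]; out[2] = in[2];
	out[3] = 0xFF;  out[4] = 0xFE;
	out[5] = in[3]; out[6] = in[4]; out[7] = in[5];
}

/* Narrow an 8-byte MAC back to 6 bytes; returns false (the backend
 * would raise an error) when the middle bytes are not FF:FE, i.e. the
 * value has no 6-byte representation. */
bool
mac8_to_mac6(const uint8_t in[8], uint8_t out[6])
{
	if (in[3] != 0xFF || in[4] != 0xFE)
		return false;
	out[0] = in[0]; out[1] = in[1]; out[2] = in[2];
	out[3] = in[5]; out[4] = in[6]; out[5] = in[7];
	return true;
}
```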

Regards,
Hari Babu
Fujitsu Australia


mac_eui64_support_4.patch
Description: Binary data



Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Joe Conway
On 01/04/2017 04:36 PM, Craig Ringer wrote:
> On 5 January 2017 at 08:35, Craig Ringer  wrote:
>> On 5 January 2017 at 01:49, Fabien COELHO  wrote:

>>> Good. So we seem to agree that GUCS are transactional?
>>
>> No. We don't agree. They aren't.
> 
> Uh. I take that back.
> 
> craig=> SET x.s = 'x';
> SET
> craig=> BEGIN;
> BEGIN
> craig=> SET x.s = 'y';
> SET
> craig=> ROLLBACK;
> ROLLBACK
> craig=> SHOW x.s;
>  x.s
> -----
>  x
> (1 row)
> 
> 
> I'm surprised, I never knew this.


(I have not been able to keep up with the sheer volume on this thread,
 but this caught my eye...)

Yeah -- I found it surprising when I first discovered it too. My opinion
is that the design for variables should not behave this way.

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [HACKERS] Replication slot xmin is not reset if HS feedback is turned off while standby is shut down

2017-01-04 Thread Craig Ringer
On 2 January 2017 at 22:24, Craig Ringer  wrote:
>
>
> On 2 Jan. 2017 20:20, "Simon Riggs"  wrote:
>
> On 21 December 2016 at 13:23, Simon Riggs  wrote:
>
>> Fix it up and I'll commit. Thanks for the report.
>
> I was hoping for some more effort from Ants to correct this.
>
> I'll commit Craig's new tests for hs feedback before this, so it can
> go in with a Tap test, so I imagine we're about a week away from
> commit on this.
>
>
> I posted a revised version of Ants's patch that passes the shell script
> test.
>
> A TAP test would likely make sense though, I agree.


Ants, do you think you'll have a chance to convert your shell script
test into a TAP test in src/test/recovery?

Simon has said he would like to commit this fix. I'd personally be
happier if it had a test to go with it.

You could probably just add to src/test/recovery/t/001 which now
contains my additions for hot standby.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Odd behavior with PG_TRY

2017-01-04 Thread Tom Lane
Jim Nasby  writes:
> I do think it's interesting that the data appeared to be completely fine 
> until I actually ran whatever the first assembly instruction of the for 
> loop is, so presumably it was fine after the sigsetjmp() call (which I'm 
> assuming is what causes all the fuss to begin with...) From my 
> understanding of what volatile does, I can understand why it might be 
> necessary for something in the CATCH block to need it, but not in the TRY.

While I was at Salesforce we saw some *extremely* odd behaviors in PG_TRY
blocks that we eventually tracked down to the compiler effectively trying
to store two different local variables in the same register.  I don't
recall all the details anymore, but it had something to do with register
spills to stack locations and subsequent reloads happening at unfortunate
places with respect to the sigsetjmp call.  I figured it was a bug in the
specific compiler we were using, because nobody in the PG community was
reporting any problems with identical code.  Anyway, "volatile" was an
effective way of suppressing the issue.  I wonder whether you're seeing
something similar.  You'd have to grovel through the assembly code for
the function with and without "volatile" to be sure.
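
To make the rule concrete, here is a minimal standalone sketch (using
plain setjmp rather than PostgreSQL's sigsetjmp-based PG_TRY, with
illustrative names) of where the volatile qualifier has to go:

```c
#include <assert.h>
#include <setjmp.h>
#include <string.h>

static jmp_buf env;

const char *
sigsetjmp_demo(void)
{
	/* The volatile goes on the pointer variable itself, not on what it
	 * points at: it forces the compiler to keep "msg" in memory rather
	 * than caching it in a register across the longjmp. */
	const char *volatile msg = "before";

	if (setjmp(env) == 0)
	{
		msg = "after";		/* modified between setjmp and longjmp */
		longjmp(env, 1);	/* stands in for elog(ERROR) */
	}

	/* Without volatile, reading msg here would be undefined: a stale
	 * register copy still holding "before" could legally be used. */
	return msg;
}
```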

regards, tom lane




Re: [HACKERS] Cluster wide option to control symbol case folding

2017-01-04 Thread Lewis, Ian (Microstar Laboratories)
Robert Haas [mailto:robertmh...@gmail.com] wrote:
>> Where you get into trouble there is that you might run CREATE
EXTENSION 
>> from that session

Yes. I can see this problem. And, while I can imagine resolving it with
context belonging to the extension, separate from the current session's
context, any resolution gets to be pretty complex. Probably complex
enough that the resolution is worse than living with the problem as part
of the cost of the feature. Which means, it remains one of the arguments
against it.

>> Again, I'm not trying to rain down fire and brimstone
>> on your idea here and I clearly see the utility of it.

I do not feel this at all (though, since this is e-mail, it is helpful
that you state it explicitly). And, I have not felt like this was the
case at any point in these discussions. I have consistently received
thoughtful and remarkably good responses containing solid points. 

While I did not recognize all the impacts, and I probably still do not,
I never thought this was a trivial issue with no arguments against it
(not least, that what I was asking for is not standard compliant). Every
modal behavior of any sort in any software adds pain. At the very least
it increases the regression testing burden. And, this is a mode with
pretty fundamental impact. It has to be worth a lot to somebody to be
worth having. And, of course, it also has to work.

Ian Lewis (www.mstarlabs.com)





Re: [HACKERS] ALTER TABLE parent SET WITHOUT OIDS and the oid column

2017-01-04 Thread Tom Lane
Ashutosh Bapat  writes:
> Right. But I think it's better to use attribute id, in case the code
> raising this error changes for any reason in future.

I agree.  The parent's "tdhasoid" flag is definitely based on the
existence of an ObjectIdAttributeNumber system column, not on whether the
column's name is "oid".  So doing a lookup by name to find the matching
child column is just weird, and cannot possibly lead to anything good.

> The code updating attinhcount and then updating the catalogs is same
> for user defined attributes and OID. Should we separate it out into a
> function and use that function instead of duplicating the code?

Didn't really seem worth the trouble ... maybe if it gets any longer
it'd be appropriate to do that.

> Your test uses tablenames starting with "_". I have not seen that
> style in the testcases. Is it intentional?

Yeah, I did not like that either.

Pushed with those corrections and some further fooling with the test case.

regards, tom lane




Re: [HACKERS] Odd behavior with PG_TRY

2017-01-04 Thread Jim Nasby

On 1/3/17 9:20 PM, Amit Kapila wrote:

On Wed, Jan 4, 2017 at 3:47 AM, Jim Nasby  wrote:

On 1/2/17 9:47 PM, Tom Lane wrote:

Correct coding would be

volatile TupleDesc  desc = slot->tts_tupleDescriptor;
CallbackState * volatile myState = (CallbackState *) self;
PLyTypeInfo * volatile args = myState->args;

because what needs to be marked volatile is the pointer variable,
not what it points at.  I'm a bit surprised you're not getting
"cast away volatile" warnings from the code as you have it.



Unfortunately, that didn't make a difference. Amit's suggestion of isolating
the single statement in a PG_TRY() didn't work either, but assigning
args->in.r.atts[i] to a pointer did.



Good to know that it worked, but what is the theory?  From your
experiment, it appears that in some cases accessing local pointer
variables is okay and in other cases, it is not okay.


I can run some other experiments if you have any to suggest.

I do think it's interesting that the data appeared to be completely fine 
until I actually ran whatever the first assembly instruction of the for 
loop is, so presumably it was fine after the sigsetjmp() call (which I'm 
assuming is what causes all the fuss to begin with...) From my 
understanding of what volatile does, I can understand why it might be 
necessary for something in the CATCH block to need it, but not in the TRY.


Two other things of note that might possibly make a difference here:

- This is happening inside a function used as a DestReceiver
- The original call is a plpython function, calling a plpython function, 
calling a plpython function (specifically, nested_call_one() in the 
plpython regression test).


That does mean that the call stack looks something like this:

plpython
SPI_execute_callback
(my custom DestReceiver setup function (PLy_CSSetup) is called 
somewhere in here, which is what populates myState)

plpython
SPI_execute_callback
(PLy_CSSetup gets called again)
plpython (this just returns a value)
After that plpython call, the executor is going to call PLy_CSreceive, 
which is the function with this problematic code. So by the time this 
error happens, there are two nested levels of plpython+SPI going on. I 
originally thought the re-entrant calls were causing the problem, but 
after monitoring what PLy_CSSetup was doing and what PLy_CSreceive was 
getting that's not the case, or at least not the only reason for this.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)




Re: [HACKERS] [sqlsmith] Crash reading pg_stat_activity

2017-01-04 Thread Thomas Munro
On Thu, Jan 5, 2017 at 10:23 AM, Robert Haas  wrote:
> On Tue, Dec 27, 2016 at 9:28 PM, Thomas Munro
>  wrote:
>> On Wed, Dec 28, 2016 at 11:57 AM, Thomas Munro
>>  wrote:
>>> But I'm starting to think that the best way might be to do BOTH of the
>>> things I said in my previous message: make dsa.c register on
>>> create/attach and also unregister before detaching iff the name was
>>> supplied at creation time for the benefit of extension writers, but
>>> make it not do anything at all about tranche name
>>> registration/unregistration if NULL was passed in at create time.
>>> Then register this particular tranche (LWTRANCHE_PARALLEL_QUERY_DSA)
>>> in every process in RegisterLWLockTranches.  That way, you'd get a
>>> useful name in pg_stat_activity for other backends that are running
>>> parallel queries if they are ever waiting for these locks (unlikely
>> but interesting to know about if it happens).
>>
>> Maybe something like the attached.
>
> Now that array_base and array_stride are gone, I don't see any reason
> why the DSA machinery needs to be aware of tranche names at all.  So I
> propose to rip all that out, as in the attached.

The way I proposed makes it a lot easier to work with dynamic names so
you can differentiate variable numbers of areas; the names would have
exactly the right extent and they'd get unregistered in each backend
at just the right time.  On the other hand, I don't really like seeing
lock tranche stuff leaking into other APIs like this, and using
tranche IDs in any way other than allocating a small number of them at
startup isn't really supported anyway, so +1 for doing it your way.

At one stage I had an idea that that argument was naming the DSA area,
not the lock tranche, and then the lock tranche happened to use a name
that was built from that name, but that doesn't make any sense if it's
optional depending on whether you already registered the lock
tranche...

- char lwlock_tranche_name[DSA_MAXLEN];

You can remove the now-unused DSA_MAXLEN macro.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] multivariate statistics (v19)

2017-01-04 Thread Tomas Vondra

On 01/04/2017 03:21 PM, Dilip Kumar wrote:

On Wed, Jan 4, 2017 at 8:05 AM, Tomas Vondra
 wrote:

Attached is v22 of the patch series, rebased to current master and fixing
the reported bug. I haven't made any other changes - the issues reported by
Petr are mostly minor, so I've decided to wait a bit more for (hopefully)
other reviews.


v22 fixes the problem I reported.  In my test, I observed that group
by estimation is much better with ndistinct stat.

Here is one example:

postgres=# explain analyze select p_brand, p_type, p_size from part
group by p_brand, p_type, p_size;
  QUERY PLAN
---------------------------------------------------------------------------
 HashAggregate  (cost=37992.00..38992.00 rows=10 width=36) (actual
time=953.359..1011.302 rows=186607 loops=1)
   Group Key: p_brand, p_type, p_size
   ->  Seq Scan on part  (cost=0.00..30492.00 rows=100 width=36)
(actual time=0.013..163.672 rows=100 loops=1)
 Planning time: 0.194 ms
 Execution time: 1020.776 ms
(5 rows)

postgres=# CREATE STATISTICS s2  WITH (ndistinct) on (p_brand, p_type,
p_size) from part;
CREATE STATISTICS
postgres=# analyze part;
ANALYZE
postgres=# explain analyze select p_brand, p_type, p_size from part
group by p_brand, p_type, p_size;
  QUERY PLAN
---------------------------------------------------------------------------
 HashAggregate  (cost=37992.00..39622.46 rows=163046 width=36) (actual
time=935.162..992.944 rows=186607 loops=1)
   Group Key: p_brand, p_type, p_size
   ->  Seq Scan on part  (cost=0.00..30492.00 rows=100 width=36)
(actual time=0.013..156.746 rows=100 loops=1)
 Planning time: 0.308 ms
 Execution time: 1001.889 ms

In above example,
Without MVStat-> estimated: 10 Actual: 186607
With MVStat-> estimated: 163046 Actual: 186607



Thanks. Those plans match my experiments with the TPC-H data set, 
although I've been playing with the smallest scale (1GB).


It's not very difficult to make the estimation error arbitrary large, 
e.g. by using perfectly correlated (identical) columns.
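
As a back-of-the-envelope illustration (a toy model, not planner code):
with two identical columns of 100 distinct values each, multiplying the
per-column ndistincts under the independence assumption estimates 10000
groups where only 100 actually exist.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NDISTINCT 100	/* per-column distinct values in the toy data */

/* Count the true number of distinct (x, y) groups when column y is a
 * perfect copy of column x. */
size_t
true_group_count(size_t nrows)
{
	bool	seen[NDISTINCT] = {false};
	size_t	groups = 0;

	for (size_t i = 0; i < nrows; i++)
	{
		size_t	x = i % NDISTINCT;	/* y == x, so the pair is (x, x) */

		if (!seen[x])
		{
			seen[x] = true;
			groups++;
		}
	}
	return groups;
}

/* The naive independence-based estimate: ndistinct(x) * ndistinct(y). */
size_t
independence_estimate(void)
{
	return (size_t) NDISTINCT * NDISTINCT;	/* 10000 vs. the true 100 */
}
```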


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] An isolation test for SERIALIZABLE READ ONLY DEFERRABLE

2017-01-04 Thread Thomas Munro
On Thu, Jan 5, 2017 at 7:41 AM, Robert Haas  wrote:
> On Wed, Jan 4, 2017 at 2:17 AM, Michael Paquier
>  wrote:
>> On Wed, Jan 4, 2017 at 12:48 AM, Robert Haas  wrote:
>>> On Sun, Jan 1, 2017 at 4:38 AM, Thomas Munro
>>>  wrote:
 To be able to do this, the patch modifies the isolation tester so that
 it recognises wait_event SafeSnapshot.
>>>
>>> I'm not going to say that's unacceptable, but it's certainly not beautiful.
>>
>> Perhaps being able to define in an isolation spec a step called
>> 'wait_event' with a value defined to the wait event to look for would
>> make more sense?
>
> That'd be a much bigger change, since currently waiting is entirely implicit.

It's a bit of a strange case: we're indirectly waiting for other
backends' transactions to end, but it's not exactly a lock or even a
predicate lock: it's waiting for a time when we could run safely with
predicate locking disabled.  So it's not at all clear that
pg_blocking_pids should try to get its hands on the appropriate pids
(or if it even could).  Hence my attempt to do this using our
wonderful wait introspection.

I don't think putting the particular wait_event name into the test
spec would be very useful.  It's really there as a whitelist to be
conservative about excluding random waits caused by concurrent
autovacuum etc.  It just happens to have only one thing in the
whitelist for now, and I'm not sure what else would ever go in it...

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Dynamic shared memory areas

2017-01-04 Thread Peter Geoghegan
On Tue, Nov 15, 2016 at 5:31 PM, Robert Haas  wrote:
> I think we should develop versions of this that (1) allocate from the
> main shared memory segment and (2) allocate from backend-private
> memory.  Per my previous benchmarking results, allocating from
> backend-private memory would be a substantial win for tuplesort.c
> because this allocator is substantially more memory-efficient for
> large memory contexts than aset.c, and Tomas Vondra tested it out and
> found that it is also faster for logical decoding than the approach he
> proposed.

The approach that I'd prefer to take with tuplesort.c is to have a
buffer for caller tuples that is written to sequentially, and
repalloc()'d as needed, much like the memtuples array. It would be
slightly tricky to make this work when memtuples needs to be a heap
(I'm mostly thinking of top-N heapsorts here). That has perhaps
unbeatable efficiency, while also helping cases with significant
physical/logical correlation in their input, which is pretty common.
Creating an index on a serial PK within pg_restore would probably get
notably faster if we went this way.
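
A rough sketch of that idea, using plain realloc in place of repalloc
and illustrative names that are not from tuplesort.c:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct
{
	char   *data;		/* single sequentially-written buffer */
	size_t	used;		/* bytes written so far */
	size_t	allocated;	/* current allocation size */
} TupleBuffer;

/* Append nbytes of caller tuple data, doubling the allocation as
 * needed (repalloc-style growth), and return the tuple's starting
 * offset so a memtuples-style array can point back into the buffer. */
size_t
tuplebuffer_append(TupleBuffer *buf, const void *src, size_t nbytes)
{
	size_t	offset = buf->used;

	if (buf->used + nbytes > buf->allocated)
	{
		size_t	newsize = buf->allocated ? buf->allocated : 1024;

		while (buf->used + nbytes > newsize)
			newsize *= 2;
		buf->data = realloc(buf->data, newsize);	/* assume success */
		buf->allocated = newsize;
	}
	memcpy(buf->data + offset, src, nbytes);
	buf->used += nbytes;
	return offset;
}
```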

-- 
Peter Geoghegan




Re: [HACKERS] Logical Replication WIP

2017-01-04 Thread Peter Eisentraut
0003-Define-logical-replication-protocol-and-output-plugi-v16.patch.gz
looks good now, documentation is clear now.

Another fixup patch to remove excessive includes. ;-)

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 85537d88ec1f79403d9db28b0bdfede28e60e975 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Wed, 4 Jan 2017 12:00:00 -0500
Subject: [PATCH] fixup! Define logical replication protocol and output plugin

Remove unused includes
---
 src/backend/replication/logical/proto.c | 32 +
 src/backend/replication/pgoutput/pgoutput.c |  8 +---
 2 files changed, 2 insertions(+), 38 deletions(-)

diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index 9a2a168b40..6f61357ea3 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -1,7 +1,7 @@
 /*-
  *
  * proto.c
- * 		logical replication protocol functions
+ *		logical replication protocol functions
  *
  * Copyright (c) 2015, PostgreSQL Global Development Group
  *
@@ -12,44 +12,14 @@
  */
 #include "postgres.h"
 
-#include "miscadmin.h"
-
-#include "access/htup_details.h"
-#include "access/heapam.h"
-
 #include "access/sysattr.h"
-#include "access/tuptoaster.h"
-#include "access/xact.h"
-
-#include "catalog/catversion.h"
-#include "catalog/index.h"
-
-#include "catalog/namespace.h"
-#include "catalog/pg_class.h"
-#include "catalog/pg_database.h"
 #include "catalog/pg_namespace.h"
 #include "catalog/pg_type.h"
-
-#include "commands/dbcommands.h"
-
-#include "executor/spi.h"
-
 #include "libpq/pqformat.h"
-
-#include "mb/pg_wchar.h"
-
-#include "nodes/makefuncs.h"
-
 #include "replication/logicalproto.h"
-#include "replication/reorderbuffer.h"
-
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
-#include "utils/memutils.h"
-#include "utils/rel.h"
 #include "utils/syscache.h"
-#include "utils/timestamp.h"
-#include "utils/typcache.h"
 
 /*
  * Protocol message flags.
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 5fc48ac312..04dde5d494 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -1,7 +1,7 @@
 /*-
  *
  * pgoutput.c
- *		  Logical Replication output plugin
+ *		Logical Replication output plugin
  *
  * Copyright (c) 2012-2015, PostgreSQL Global Development Group
  *
@@ -12,13 +12,8 @@
  */
 #include "postgres.h"
 
-#include "access/xact.h"
-
-#include "catalog/catalog.h"
 #include "catalog/pg_publication.h"
 
-#include "mb/pg_wchar.h"
-
 #include "replication/logical.h"
 #include "replication/logicalproto.h"
 #include "replication/origin.h"
@@ -27,7 +22,6 @@
 #include "utils/builtins.h"
 #include "utils/inval.h"
 #include "utils/int8.h"
-#include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/syscache.h"
 
-- 
2.11.0




Re: [HACKERS] [PATCH] Fix for documentation of timestamp type

2017-01-04 Thread Robert Haas
On Tue, Dec 13, 2016 at 10:46 AM, Robert Haas  wrote:
> On Tue, Dec 13, 2016 at 10:41 AM, Tom Lane  wrote:
>> Robert Haas  writes:
>>> I find this a bit unclear, because the revised text kind of jumps back
>>> and forth between the floating-point and integer formats.  Perhaps
>>> something like this:
>>
>> Your wording seems OK to me, although I'd drop the "instead".
>
> Good idea.

Committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [sqlsmith] Crash reading pg_stat_activity

2017-01-04 Thread Robert Haas
On Tue, Dec 27, 2016 at 9:28 PM, Thomas Munro
 wrote:
> On Wed, Dec 28, 2016 at 11:57 AM, Thomas Munro
>  wrote:
>> But I'm starting to think that the best way might be to do BOTH of the
>> things I said in my previous message: make dsa.c register on
>> create/attach and also unregister before detaching iff the name was
>> supplied at creation time for the benefit of extension writers, but
>> make it not do anything at all about tranche name
>> registration/unregistration if NULL was passed in at create time.
>> Then register this particular tranche (LWTRANCHE_PARALLEL_QUERY_DSA)
>> in every process in RegisterLWLockTranches.  That way, you'd get a
>> useful name in pg_stat_activity for other backends that are running
>> parallel queries if they are ever waiting for these locks (unlikely
>> but interesting to know about if it happens).
>
> Maybe something like the attached.

Now that array_base and array_stride are gone, I don't see any reason
why the DSA machinery needs to be aware of tranche names at all.  So I
propose to rip all that out, as in the attached.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c
index 821f3e9..86d9fb5 100644
--- a/src/backend/executor/execParallel.c
+++ b/src/backend/executor/execParallel.c
@@ -486,7 +486,6 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers)
 		shm_toc_insert(pcxt->toc, PARALLEL_KEY_DSA, area_space);
 		pei->area = dsa_create_in_place(area_space, dsa_minsize,
 		LWTRANCHE_PARALLEL_QUERY_DSA,
-		"parallel_query_dsa",
 		pcxt->seg);
 	}
 
diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
index f4be4b4..1cf0684 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -508,6 +508,8 @@ RegisterLWLockTranches(void)
 	LWLockRegisterTranche(LWTRANCHE_LOCK_MANAGER, "lock_manager");
 	LWLockRegisterTranche(LWTRANCHE_PREDICATE_LOCK_MANAGER,
 		  "predicate_lock_manager");
+	LWLockRegisterTranche(LWTRANCHE_PARALLEL_QUERY_DSA,
+		  "parallel_query_dsa");
 
 	/* Register named tranches. */
 	for (i = 0; i < NamedLWLockTrancheRequests; i++)
diff --git a/src/backend/utils/mmgr/dsa.c b/src/backend/utils/mmgr/dsa.c
index 200749f..6069b0c 100644
--- a/src/backend/utils/mmgr/dsa.c
+++ b/src/backend/utils/mmgr/dsa.c
@@ -326,7 +326,6 @@ typedef struct
 	Size		freed_segment_counter;
 	/* The LWLock tranche ID. */
 	int			lwlock_tranche_id;
-	char		lwlock_tranche_name[DSA_MAXLEN];
 	/* The general lock (protects everything except object pools). */
 	LWLock		lock;
 } dsa_area_control;
@@ -405,7 +404,7 @@ static void unlink_segment(dsa_area *area, dsa_segment_map *segment_map);
 static dsa_segment_map *get_best_segment(dsa_area *area, Size npages);
 static dsa_segment_map *make_new_segment(dsa_area *area, Size requested_pages);
 static dsa_area *create_internal(void *place, size_t size,
-int tranche_id, const char *tranche_name,
+int tranche_id,
 dsm_handle control_handle,
 dsm_segment *control_segment);
 static dsa_area *attach_internal(void *place, dsm_segment *segment,
@@ -419,12 +418,10 @@ static void check_for_freed_segments(dsa_area *area);
  * We can't allocate a LWLock tranche_id within this function, because tranche
  * IDs are a scarce resource; there are only 64k available, using low numbers
  * when possible matters, and we have no provision for recycling them.  So,
- * we require the caller to provide one.  The caller must also provide the
- * tranche name, so that we can distinguish LWLocks belonging to different
- * DSAs.
+ * we require the caller to provide one.
  */
 dsa_area *
-dsa_create(int tranche_id, const char *tranche_name)
+dsa_create(int tranche_id)
 {
 	dsm_segment *segment;
 	dsa_area   *area;
@@ -446,7 +443,7 @@ dsa_create(int tranche_id, const char *tranche_name)
 	/* Create a new DSA area with the control objet in this segment. */
 	area = create_internal(dsm_segment_address(segment),
 		   DSA_INITIAL_SEGMENT_SIZE,
-		   tranche_id, tranche_name,
+		   tranche_id,
 		   dsm_segment_handle(segment), segment);
 
 	/* Clean up when the control segment detaches. */
@@ -474,12 +471,11 @@ dsa_create(int tranche_id, const char *tranche_name)
  */
 dsa_area *
 dsa_create_in_place(void *place, size_t size,
-	int tranche_id, const char *tranche_name,
-	dsm_segment *segment)
+	int tranche_id, dsm_segment *segment)
 {
 	dsa_area   *area;
 
-	area = create_internal(place, size, tranche_id, tranche_name,
+	area = create_internal(place, size, tranche_id,
 		   DSM_HANDLE_INVALID, NULL);
 
 	/*
@@ -1139,7 +1135,7 @@ dsa_minimum_size(void)
  */
 static dsa_area *
 create_internal(void *place, size_t size,
-int tranche_id, const char *tranche_name,
+int tranche_id,
 dsm_handle control_handle,
 

Re: [HACKERS] logical decoding of two-phase transactions

2017-01-04 Thread Simon Riggs
On 31 December 2016 at 08:36, Stas Kelvich  wrote:
> Here is resubmission of patch to implement logical decoding of two-phase 
> transactions (instead of treating them
> as usual transaction when commit) [1] I’ve slightly polished things and used 
> test_decoding output plugin as client.

Sounds good.

> General idea quite simple here:
>
> * Write gid along with commit/prepare records in case of 2pc

GID is now variable sized. You seem to have added this to every
commit, not just 2PC

> * Add several routines to decode prepare records in the same way as it 
> already happens in logical decoding.
>
> I’ve also added explicit LOCK statement in test_decoding regression suit to 
> check that it doesn’t break thing.

Please explain that in comments in the patch.

>  If
> somebody can create scenario that will block decoding because of existing 
> dummy backend lock that will be great
> help. Right now all my tests passing (including TAP tests to check recovery 
> of twophase tx in case of failures from
> adjacent mail thread).
>
> If we will agree about current approach than I’m ready to add this stuff to 
> proposed in-core logical replication.
>
> [1] 
> https://www.postgresql.org/message-id/EE7452CA-3C39-4A0E-97EC-17A414972884%40postgrespro.ru

We'll need some measurements about additional WAL space or mem usage
from these approaches. Thanks.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] generating fmgr prototypes automatically

2017-01-04 Thread Peter Eisentraut
On 1/4/17 3:35 PM, Pavel Stehule wrote:
> On 1/3/17 2:16 PM, Pavel Stehule wrote:
> > patch 0001 .. trivial cleaning
> > patch 0002 .. renaming lo_* to be_lo_* -- the prefix "be" is not what I
> > expected - maybe "pg" instead, mostly because be-fsstubs.h will hold
> > only lo_read and lo_write in the end
> 
> It's the naming that the OpenSSL functions use, e.g., be_tls_init.
> 
> 
> lo_* functions are used by OpenSSL?

There are tls functions that exist in similar ways in the backend and in
libpq, and the backend versions are called be_*.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pgbench - allow to store select results into variables

2017-01-04 Thread Tom Lane
Fabien COELHO  writes:
> Indeed. Here is the rebased version, which still gets through my various 
> tests.

I looked through this again, and I still think that the syntactic design
of the new command is seriously misguided, leading to an ugly and
unmaintainable implementation that may well block further innovation
in pgbench.  I will not commit it in this form.

Possibly you can find some other committer whom you can convince this
is a good design --- but since the patch has gone untouched for two full
commitfest cycles, I rather imagine that whoever else has looked at it
has likewise decided they didn't want to be responsible for it.

Please look at changing \into to be a SQL-command-ending backslash
command as we previously discussed.  I think you will find that the
implementation is a great deal simpler that way and doesn't require
weird hackery on the shared lexer.

(BTW, said hackery is not just weird but broken.  You can't simply
remove comments.  Consider something like "SELECT foo/*as*/bar".
This code reduces that to "SELECT foobar" which is wrong.)
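Tom's example is worth spelling out: a comment stripper must emit a space in place of each comment, otherwise adjacent tokens fuse. A minimal sketch of the correct behavior (a hypothetical helper, not pgbench's actual shared lexer):

```c
#include <assert.h>
#include <string.h>

/*
 * Replace each C-style comment with a single space rather than
 * deleting it, so tokens separated only by a comment stay separate.
 */
static void
strip_comments(const char *src, char *dst)
{
	while (*src)
	{
		if (src[0] == '/' && src[1] == '*')
		{
			src += 2;
			while (*src && !(src[0] == '*' && src[1] == '/'))
				src++;
			if (*src)
				src += 2;		/* skip the comment terminator */
			*dst++ = ' ';		/* the crucial replacement space */
		}
		else
			*dst++ = *src++;
	}
	*dst = '\0';
}
```

With this, "SELECT foo/*as*/bar" becomes "SELECT foo bar"; deleting the comment outright would yield "SELECT foobar", silently changing the query.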

If you won't do that, and you can't find another committer who will
accept responsibility for this patch before the end of the current
commitfest, I think we should mark it Rejected.

regards, tom lane




Re: [HACKERS] merging some features from plpgsql2 project

2017-01-04 Thread Merlin Moncure
On Wed, Jan 4, 2017 at 1:49 PM, Pavel Stehule  wrote:
>
>>
>> so some possible design can be:
>>
>> DECLARE
>>   PRAGMA UNCACHED_PLANS;
>> BEGIN
>>   SELECT ... INTO <var>;
>>   SELECT ... INTO <var>;
>> END;
>>
>> This respects Ada and PL/SQL style - probably easy implementation
>>
>> Regards
>>
>> Pavel
>
>
> some examples based on Ada doc
>
> FUNCTION xxx RETURN int AS
>   PRAGMA yyy -- pragma has function scope
> BEGIN
>
> FUNCTION xxx RETURN int AS
> BEGIN
>   DECLARE
> PRAGMA yyy -- pragma has block scope

ok, sub-block makes sense over statement level IMO.

merlin




Re: [HACKERS] Declarative partitioning - another take

2017-01-04 Thread Robert Haas
On Tue, Dec 27, 2016 at 3:59 AM, Amit Langote
 wrote:
> Patches 0001 to 0006 unchanged.

Committed 0001 earlier, as mentioned in a separate email.  Committed
0002 and part of 0003.  But I'm skeptical that the as-patched-by-0003
logic in generate_partition_qual() makes sense.  You do this:

result = list_concat(generate_partition_qual(parent),
 copyObject(rel->rd_partcheck));

/* Mark Vars with correct attnos */
result = map_partition_varattnos(result, rel, parent);

But that has the effect of applying map_partition_varattnos to
everything in rel->rd_partcheck in addition to applying it to
everything returned by generate_partition_qual() on the parent, which
doesn't seem right.

Also, don't we want to do map_partition_varattnos() just ONCE, rather
than on every call to this function?  I think maybe your concern is
that the parent might be changed without a relcache flush on the
child, but I don't quite see how that could happen.  If the parent's
tuple descriptor changes, surely the child's tuple descriptor would
have to be altered at the same time...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] ALTER SYSTEM for pg_hba.conf

2017-01-04 Thread Simon Riggs
On 4 January 2017 at 20:30, Tom Lane  wrote:
> Simon Riggs  writes:
>> My next thought is ALTER SYSTEM support for pg_hba.conf, especially
>> since that would make it easier to do a formal test of Haribabu's
>> pg_hba view patch by adding each of the options one by one and then
>> juggling them.
>
> It's quite unclear from this spec what you have in mind to control the
> entry order.  Also, I'd personally be -1 on inventing a pile of new SQL
> keywords for this.  Why not do it with a function, instead?  Or for extra
> credit, finish the pg_hba view work first and then make it an updatable
> view.
>
>> and we can then have a nice simple
>> ALTER SYSTEM ENABLE REMOTE ACCESS FOR REPLICATION USING md5;
>
> I am minus a lot more than 1 on inventing a new SQL statement every time
> somebody thinks of a new way in which they'd like to frob pg_hba.conf.

Oh believe me, I'm with you on that thought!
...but I had thought the ALTER SYSTEM discussion was over and SQL was
now the agreed way.

I was chasing the "let's make replication as simple as possible"
direction, but this really doesn't add much apart from syntax.

Definitely have better things to do though, so happy to leave it there.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] generating fmgr prototypes automatically

2017-01-04 Thread Pavel Stehule
2017-01-04 21:09 GMT+01:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> On 1/3/17 2:16 PM, Pavel Stehule wrote:
> > patch 0001 .. trivial cleaning
> > patch 0002 .. renaming lo_* to be_lo_* -- the prefix "be" is not what I
> > expected - maybe "pg" instead, mostly because be-fsstubs.h will hold
> > only lo_read and lo_write in the end
>
> It's the naming that the OpenSSL functions use, e.g., be_tls_init.
>

lo_* functions are used by OpenSSL?


>
> > patch 0003 .. trivial, but doesn't it mean we have no regression tests
> for these functions?
>
> OK, added tests.
>
> > patch 0004 .. bigger part - I miss a comments when there are a
> exceptions:
> >
> > extern Datum numeric_float8_no_overflow(PG_FUNCTION_ARGS);
> > extern Datum nextval(PG_FUNCTION_ARGS);
> > extern Datum fmgr_sql(PG_FUNCTION_ARGS);
> > extern Datum aggregate_dummy(PG_FUNCTION_ARGS);
>
> These functions and their special purpose are commented in the .c files,
> so I think that is covered OK.  I added an additional comment to the
> numeric_float8_no_overflow().
>
> > I found some obsolete definitions in c files
>
> OK, fixed those as well.  I think I initially only looked in .h files.
> That just underlines how inconsistent this is, e.g.,
>
> > pgstatfuncs.c
> >
> > /* bogus ... these externs should be in a header file */
>
> > namespace.c
> >
> > /* These don't really need to appear in any header file */
>
> New patch set attached.
>
>
Thank you

I'll check it tomorrow

Regards

Pavel

> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [HACKERS] ALTER SYSTEM for pg_hba.conf

2017-01-04 Thread Tom Lane
Simon Riggs  writes:
> My next thought is ALTER SYSTEM support for pg_hba.conf, especially
> since that would make it easier to do a formal test of Haribabu's
> pg_hba view patch by adding each of the options one by one and then
> juggling them.

It's quite unclear from this spec what you have in mind to control the
entry order.  Also, I'd personally be -1 on inventing a pile of new SQL
keywords for this.  Why not do it with a function, instead?  Or for extra
credit, finish the pg_hba view work first and then make it an updatable
view.

> and we can then have a nice simple
> ALTER SYSTEM ENABLE REMOTE ACCESS FOR REPLICATION USING md5;

I am minus a lot more than 1 on inventing a new SQL statement every time
somebody thinks of a new way in which they'd like to frob pg_hba.conf.

regards, tom lane




Re: [HACKERS] PassDownLimitBound for ForeignScan/CustomScan [take-2]

2017-01-04 Thread Robert Haas
On Mon, Jan 2, 2017 at 10:07 PM, Kohei KaiGai  wrote:
> Oops, I overlooked that this patch was marked as "returned with feedback",
> not "moved to the next CF".
>
> Its status has not been changed since the last update. (The code was
> revised according to the last comment by Jeevan, but the November CF
> had already closed at that time.)
>
> How do I handle the patch?

Can you just change the status to "Moved to Next CF" and then make it
"Needs Review"?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Replication/backup defaults

2017-01-04 Thread Simon Riggs
On 3 January 2017 at 12:34, Michael Paquier  wrote:
> On Mon, Jan 2, 2017 at 10:55 PM, Simon Riggs  wrote:
>> In the hope of making things better in 10.0, I remove my objection. If
>> people want to use wal_level = minimal they can restart their server
>> and they can find that out in the release notes.
>>
>> Should we set wal_level = replica or wal_level = logical as the
>> default for 10.0?
>
> replica sounds like a better default to me as most users use at least
> archiving. Logical decoding is still fresh though, and its use is not
> that wide. Have there been any studies of its performance impact
> compared to replica, by the way?

Magnus' arguments should also be applied to wal_level = logical since
users will be surprised if they cannot use the logical replication
features we are adding as a main feature of 10.0. Why go through the
same pain again?

And if preventing their use is acceptable for the user, we should
treat it as a performance feature to reduce the wal_level.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] ALTER SYSTEM for pg_hba.conf

2017-01-04 Thread Simon Riggs
My next thought is ALTER SYSTEM support for pg_hba.conf, especially
since that would make it easier to do a formal test of Haribabu's
pg_hba view patch by adding each of the options one by one and then
juggling them.

ALTER SYSTEM
ENABLE [LOCAL | REMOTE] ACCESS
FOR
[DATABASE <database> | REPLICATION]
[USER <user>]
[NETWORK {ADDRESS <address> [NETMASK <netmask>] | HOSTNAME <hostname>}]
[<options>]
USING <auth-method>
[WITH [[NO] ENCRYPTION] [PRIORITY n]]

and we can then have a nice simple

ALTER SYSTEM ENABLE REMOTE ACCESS FOR REPLICATION USING md5;

into which we can tie any required parameters (i.e. one command to
enable replication)

Note that the keyword HOST would be replaced by REMOTE and SSL by
ENCRYPTION to make it clearer.
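To make the shorthand concrete, the intent is that the one-liner would be equivalent to appending a pg_hba.conf entry; a sketch of the assumed mapping (the command does not exist, and the exact generated line is illustrative only):

```
ALTER SYSTEM ENABLE REMOTE ACCESS FOR REPLICATION USING md5;

-- roughly equivalent to appending this pg_hba.conf line:
--   host    replication    all    0.0.0.0/0    md5
```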

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pg_hba_file_settings view patch

2017-01-04 Thread Simon Riggs
On 4 January 2017 at 03:54, Haribabu Kommi  wrote:

> Latest patch is attached.

The "method" column should be called "auth" or "auth_method" or "authentication"

I think we should have some tests, but I'll hear your views on that.
Perhaps we can include a test/sample pg_hba.conf for use in tests.

Since we've had crashes, I suggest running the test many times and
checking for leaks and crashes.

If it's safe we can move towards commit. Thanks

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] merging some features from plpgsql2 project

2017-01-04 Thread Pavel Stehule
>
> so some possible design can be:
>
> DECLARE
>   PRAGMA UNCACHED_PLANS;
> BEGIN
>   SELECT ... INTO <var>;
>   SELECT ... INTO <var>;
> END;
>
> This respects Ada and PL/SQL style - probably easy implementation
>
> Regards
>
> Pavel
>

some examples based on Ada doc

FUNCTION xxx RETURN int AS
  PRAGMA yyy -- pragma has function scope
BEGIN

FUNCTION xxx RETURN int AS
BEGIN
  DECLARE
PRAGMA yyy -- pragma has block scope

Regards

Pavel



>
>
>>
>> Regards
>>
>>
>>
>>
>>
>>>
>>> merlin
>>>
>>
>>
>


Re: [HACKERS] Replication/backup defaults

2017-01-04 Thread Peter Eisentraut
On 1/4/17 9:46 AM, Magnus Hagander wrote:
> How about we default max_replication_slots to -1, which means to use the
> same value as max_wal_senders?

> But you don't necessarily want to adjust them together, do you? They are
> both capped by max_connections, but I don't think they have any other
> direct relation between each other? 

I think the most usual case is that you use approximately one
replication slot per wal sender slot.  So it would be a good default to
make them equal.
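For concreteness, the fallback Magnus describes could be resolved like this (a sketch of the assumed semantics, not committed code; the analogy is autovacuum_work_mem, where -1 means "use maintenance_work_mem"):

```c
#include <assert.h>

/*
 * Hypothetical resolution of max_replication_slots = -1:
 * a negative setting falls back to the max_wal_senders value.
 */
static int
effective_replication_slots(int max_replication_slots, int max_wal_senders)
{
	return (max_replication_slots < 0) ? max_wal_senders
		: max_replication_slots;
}
```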

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pg_recvlogical --endpos

2017-01-04 Thread Simon Riggs
On 4 January 2017 at 13:37, Craig Ringer  wrote:
>> Moved to next CF with "needs review" state.
>
> Here's an updated series. It's on top of the entry
> https://commitfest.postgresql.org/12/883/ for PostgresNode TAP test
> enhancements.
>
> It corresponds exactly to patches [2,3,4] in the logical decoding on
> standby post at
> https://www.postgresql.org/message-id/CAMsr+YEzC=-+eV09A=ra150fjtkmtqt5q70piqbwytbor3c...@mail.gmail.com
>
> If this is committed, the remaining decoding on standby patches will
> just be the meat of the feature.

Patches 2 and 3 committed for now. Please do everything else on the
logical decoding on standby posts. Thanks.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Pavel Stehule
2017-01-04 19:56 GMT+01:00 Fabien COELHO :

>
> [...] It is on the critical path, so every check increases the compute time at
>> transaction end.
>>
>
> Hmmm... Everything executed is on the critical path...
>
> It is a very good thing that GUCs are transactional, and this should not
>>> be changed, it is a useful feature! Much more useful than non
>>> transactional.
>>>
>>
>> Personally, I never used it - although I often use nesting
>>
>
> Your position is contradictory:
>
> First you put forward a variable-with-permissions for a special use case,
> you insist that correctness is key and must be checked with static analysis
> tools that audit codes, that dynamic variables are too ugly for the
> purpose. Fine, even if I disagree with some details, there is some logic in
> that: security, audit, checks... why not.
>
> Then when one shows that correctness requires that the variable is
> transactional, this is not so important anymore based on the fact that some
> big companies do not do it like that, and suddenly it is enough that it
> probably works sometimes. And when the fact that pg already supports
> transactional variables is pointed out, just what the use case needs...
> then you suggest to remove the property.
>

The GUCs were designed for a different purpose - I don't know why somebody
added transaction support there - I understand and remember the nesting
support.

Look at the code - the GUC-related code is about 10K lines, of which
probably 5K lines matter for this purpose.

There are more reasons why I would not use GUCs:

0. they are not designed to be secure - there is a different security model -
read-only, superuser, others
1. they are dynamic - not persistent - so they cannot simply be used as
package variables
2. the placing is different - custom GUCs require a prefix - I prefer using
our schemas, because schemas are used in pg like packages in Oracle
3. a large number of GUCs decreases the performance of transaction and
subtransaction end
4. other RDBMSs use non-transactional variables - it should be the
default, optimized behavior

Regards

Pavel


>
> What can I say? You've lost me, really.
>
> --
> Fabien.
>


Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Fabien COELHO


[...] It is on the critical path, so every check increases the compute time at
transaction end.


Hmmm... Everything executed is on the critical path...


It is a very good thing that GUCs are transactional, and this should not
be changed, it is a useful feature! Much more useful than non transactional.


Personally, I never used it - although I often use nesting


Your position is contradictory:

First you put forward a variable-with-permissions for a special use case, 
you insist that correctness is key and must be checked with static 
analysis tools that audit codes, that dynamic variables are too ugly for 
the purpose. Fine, even if I disagree with some details, there is some 
logic in that: security, audit, checks... why not.


Then when one shows that correctness requires that the variable is 
transactional, this is not so important anymore based on the fact that 
some big companies do not do it like that, and suddenly it is enough that 
it probably works sometimes. And when the fact that pg already supports 
transactional variables is pointed out, just what the use case needs... 
then you suggest to remove the property.


What can I say? You've lost me, really.

--
Fabien.




Re: [HACKERS] merging some features from plpgsql2 project

2017-01-04 Thread Pavel Stehule
>
>
>>
>> >>
>> >> >> *) Some user visible mechanic other than forcing SQL through EXECUTE
>> >> >> to be able to control plan caching would be useful.
>> >> >
>> >> > fully agree.
>> >> >
>> >> > Have you some ideas?
>> >> >
>> >> > What about plpgsql option (function scope) -- WITHOUT-PLAN-CACHE -
>> any
>> >> > non
>> >> > trivial plans will not be cached - and evaluated as parametrized
>> query
>> >> > only.
>> >>
>> >> I have slight preference for syntax marker for each query, similar to
>> >> INTO.  Maybe 'UNCACHED'?
>> >
>> >
>> > I don't have a clear opinion - the statement level is nice, but what
>> about readability?
>> >
>> > SELECT UNCACHED t.a, t.b FROM <tab> INTO a,b;
>>
>> Yeah -- this is pretty ugly admittedly.  Maybe control directive is
>> ok, as long as you can set it mid function?
>>
>
> Ada uses the PRAGMA keyword for this purpose - it is used for everything in
> Ada - loop iteration optimization, ... The scope can be a statement, block,
> or procedure.
>
> so something like
>
> BEGIN
>   PRAGMA uncached_plans;
>   SELECT ...
>   ..
> END;
>
> But it should be verified by some PL/SQL or Ada experts
>

A little bit better - if PRAGMA is used in the DECLARE part, then it has
block scope

so some possible design can be:

DECLARE
  PRAGMA UNCACHED_PLANS;
BEGIN
  SELECT ... INTO <var>;
  SELECT ... INTO <var>;
END;

This respects Ada and PL/SQL style - probably easy implementation

Regards

Pavel


>
> Regards
>
>
>
>
>
>>
>> merlin
>>
>
>


Re: [HACKERS] An isolation test for SERIALIZABLE READ ONLY DEFERRABLE

2017-01-04 Thread Robert Haas
On Wed, Jan 4, 2017 at 2:17 AM, Michael Paquier
 wrote:
> On Wed, Jan 4, 2017 at 12:48 AM, Robert Haas  wrote:
>> On Sun, Jan 1, 2017 at 4:38 AM, Thomas Munro
>>  wrote:
>>> To be able to do this, the patch modifies the isolation tester so that
>>> it recognises wait_event SafeSnapshot.
>>
>> I'm not going to say that's unacceptable, but it's certainly not beautiful.
>
> Perhaps being able to define in an isolation spec a step called
> 'wait_event' with a value defined to the wait event to look for would
> make more sense?

That'd be a much bigger change, since currently waiting is entirely implicit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] rewrite HeapSatisfiesHOTAndKey

2017-01-04 Thread Alvaro Herrera
Pavan Deolasee wrote:

> A transaction then updates the second column in the table. So the
> refactored patch will do heap_getattr() on more columns than the master
> while checking if HOT update is possible and before giving up.

Thanks.

> 1-client
>   Master Refactored
> Run1 8774.089935 8979.068604
> Run2 8509.2661 8943.613575
> Run3 8879.484019 8950.994425
> 
> 
> 8-clients
>   Master Refactored
> Run1 22520.422448 22672.798871
> Run2 21967.812303 22022.969747
> Run3 22305.073223 21909.945623

Wow, this is very surprising.  I was expecting a slight performance
decrease, not this.  I will try to reproduce these numbers.

One thing worth mentioning is that the current implementation is not
very good -- I just kept the attribute check loop as it was in the
original, which uses heap_getattr.  If we used incremental extraction
such as what we do in slot_getattr, it could be made a bit faster too.
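The incremental extraction Álvaro refers to keeps a cursor into the tuple, so that fetching attributes in ascending attno order deforms each field once instead of rescanning from the start of the tuple on every call. A toy model of the idea (a simplified layout for illustration, not the real heap tuple format):

```c
#include <assert.h>
#include <stddef.h>

/* Toy "tuple": fields packed back to back, each prefixed by a length byte. */
typedef struct ToyTuple
{
	const unsigned char *data;	/* packed fields */
	int			natts;			/* number of attributes */
	/* incremental-extraction state, analogous to slot_getattr's cache */
	int			cached_att;		/* attributes located so far */
	size_t		cached_off;		/* offset just past the last located field */
	size_t		off[16];		/* start offset of each located field */
} ToyTuple;

/*
 * Return the offset of attribute attno (0-based).  Resumes the walk
 * from the cached position instead of starting over on each call.
 */
static size_t
toy_getattr_offset(ToyTuple *tup, int attno)
{
	while (tup->cached_att <= attno)
	{
		tup->off[tup->cached_att] = tup->cached_off + 1;	/* skip length byte */
		tup->cached_off += 1 + tup->data[tup->cached_off];
		tup->cached_att++;
	}
	return tup->off[attno];
}
```

Asking for attno 5 right after attno 4 deforms only one more field, whereas a naive per-column walk would revisit all preceding variable-length fields for every column checked.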

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Declarative partitioning - another take

2017-01-04 Thread Robert Haas
On Tue, Dec 27, 2016 at 8:41 PM, Amit Langote
 wrote:
> On 2016/12/27 19:07, Amit Langote wrote:
>> Attached should fix that.
>
> Here are the last two patches with additional information like other
> patches.  Forgot to do that yesterday.

0001 has the disadvantage that get_partition_for_tuple() acquires a
side effect.  That seems undesirable.  At the least, it needs to be
documented in the function's header comment.

It's unclear to me why we need to do 0002.  It doesn't seem like it
should be necessary, it doesn't seem like a good idea, and the commit
message you proposed is uninformative.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Declarative partitioning - another take

2017-01-04 Thread Robert Haas
On Mon, Dec 26, 2016 at 5:46 AM, Amit Langote
 wrote:
> On 2016/12/23 8:08, Robert Haas wrote:
>> On Thu, Dec 22, 2016 at 3:35 AM, Amit Langote
>>  wrote:
>>> While working on that, I discovered yet-another-bug having to do with the
>>> tuple descriptor that's used as we route a tuple down a partition tree. If
>>> attnums of given key attribute(s) are different on different levels, it
>>> would be incorrect to use the original slot's (one passed by ExecInsert())
>>> tuple descriptor to inspect the original slot's heap tuple, as we go down
>>> the tree.  It might cause spurious "partition not found" at some level due
>>> to looking at incorrect field in the input tuple because of using the
>>> wrong tuple descriptor (root table's attnums not always same as other
>>> partitioned tables in the tree).  Patch 0001 fixes that including a test.
>>
>> I committed this, but I'm a bit uncomfortable with it: should the
>> TupleTableSlot be part of the ModifyTableState rather than the EState?
>
> Done that way in 0001 of the attached patches.  So, instead of making the
> standalone partition_tuple_slot a field of EState (with the actual
> TupleTableSlot in its tupleTable), it is now allocated within
> ModifyTableState and CopyState, and released when ModifyTable node or
> CopyFrom finishes, respectively.

I dropped some comments from this and committed it.  They were
formatted in a way that wouldn't survive pgindent.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] rewrite HeapSatisfiesHOTAndKey

2017-01-04 Thread Pavan Deolasee
On Tue, Jan 3, 2017 at 9:33 PM, Robert Haas  wrote:
>
> On Mon, Jan 2, 2017 at 1:36 AM, Amit Kapila 
wrote:
> > Okay, but I think if we know how much is the additional cost in
> > average and worst case, then we can take a better call.
>
> Yeah.  We shouldn't just rip out optimizations that are inconvenient
> without doing some test of what the impact is on the cases where those
> optimizations are likely to matter.  I don't think it needs to be
> anything incredibly laborious and if there's no discernable impact,
> great.


So I performed some tests to measure if this causes any noticeable
regression. I used the following simple schema:

DROP TABLE IF EXISTS testtab;
CREATE UNLOGGED TABLE testtab (
col1 integer,
col2 text,
col3 float,
col4 text,
col5 text,
col6 char(30),
col7 text,
col8 date,
col9 text,
col10 text
);
INSERT INTO testtab
SELECT generate_series(1,10),
md5(random()::text),
random(),
md5(random()::text),
md5(random()::text),
md5(random()::text)::char(30),
md5(random()::text),
now(),
md5(random()::text),
md5(random()::text);
CREATE INDEX testindx ON testtab (col1, col2, col3, col4, col5, col6, col7,
col8, col9);

I used a rather wide UNLOGGED table with an index on the first 9 columns, as
suggested by Amit. Also, the table has a reasonable number of rows, but not
more than what shared buffers (set to 512MB for these tests) can hold. This
should make the test mostly CPU bound.

A transaction then updates the second column in the table. So the
refactored patch will do heap_getattr() on more columns than the master
while checking if HOT update is possible and before giving up. I believe we
are probably testing a somewhat worst case with this setup, though maybe I
could have tuned some other configuration parameters.

\set value random(1, 10)
UPDATE testtab SET col2 = md5(random()::text) WHERE col1 = :value;

I tested with -c1 and -c8 -j4 and the results are:

1-client
  Master Refactored
Run1 8774.089935 8979.068604
Run2 8509.2661 8943.613575
Run3 8879.484019 8950.994425


8-clients
  Master Refactored
Run1 22520.422448 22672.798871
Run2 21967.812303 22022.969747
Run3 22305.073223 21909.945623


So at best there is some improvement with the patch, though I don't see any
reason why it should positively affect the performance. The results with
more number of clients look almost identical, probably because the
bottleneck shifts somewhere else. For all these tests, the table was dropped
and recreated in every iteration, so I don't think there was any error in
testing. It might be a good idea for someone else to repeat the tests to
confirm the improvement that I noticed.

Apart from this, I also ran "make check" multiple times and couldn't find
any significant difference in the average time.

I will leave it to Alvaro's judgement to decide whether it's worth
committing the patch now, or later when he or another committer looks at
committing WARM/indirect indexes, because without either of those patches
this change probably does not bring much value, if we ignore the slight
improvement we see here.

Thanks,
Pavan

--
 Pavan Deolasee   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Pavel Stehule
2017-01-04 18:49 GMT+01:00 Fabien COELHO :

>
> ok understand
>>
>
> Good. So we seem to agree that GUCS are transactional?
>
> The logic depends on transactions and on nesting level (nesting doesn't
>> depend on transactions only)
>>
>
> Yep, it probably also happens with LOCAL which hides the previous value
> and restores the initial one when exiting.
>
> void AtEOXact_GUC(bool isCommit, int nestLevel)
>>
>> Probably we should use CallXactCallbacks instead - then there is no
>> performance impact when there are no transactional variables.
>>
>
> I do not understand your point.
>

It is on the critical path, so every check increases the compute time at
transaction end.

Regards

Pavel


>
> It is a very good thing that GUCs are transactional, and this should not
> be changed, it is a useful feature! Much more useful than non transactional.
>

Personally, I never used it - although I often use nesting

regards

Pavel


>
> Moreover I think that transactional is expensive when writing things to
> disk, but in memory the overhead is reduced, and if you need it then you
> need it.
>
> --
> Fabien.
>


Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Fabien COELHO



ok understand


Good. So we seem to agree that GUCS are transactional?


The logic depends on transactions and on nesting level (nesting doesn't
depend on transactions only)


Yep, it probably also happens with LOCAL which hides the previous value 
and restores the initial one when exiting.



void AtEOXact_GUC(bool isCommit, int nestLevel)

Probably we should use CallXactCallbacks instead - then there is no
performance impact when there are no transactional variables.


I do not understand your point.

It is a very good thing that GUCs are transactional, and this should not 
be changed, it is a useful feature! Much more useful than non 
transactional.


Moreover I think that transactional is expensive when writing things to 
disk, but in memory the overhead is reduced, and if you need it then you 
need it.


--
Fabien.




Re: [HACKERS] Logical Replication WIP

2017-01-04 Thread Peter Eisentraut
Some small patches for 0002-Add-SUBSCRIPTION-catalog-and-DDL-v16.patch.gz:

- Add a get_subscription_name() function

- Remove call for ApplyLauncherWakeupAtCommit() (rebasing error?)

- Remove some unused include files (same as before)

- Rename pg_dump --no-create-subscription-slot to
--no-create-subscription-slots (plural), add documentation.

In CreateSubscription(), I don't think we should connect to the remote
if no slot creation is requested.  Arguably, the point of that option is
to not make network connections.  (That is what my documentation patch
above claims, in any case.)

I don't know why we need to check the PostgreSQL version number of the
remote.  We should rely on the protocol version number, and we should
just make it work.  When PG 11 comes around, subscribing from PG 10 to a
publisher on PG 11 should just work without any warnings, IMO.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 5fd71a1eb99761cca3cd53990b8302fb58e5a01e Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Wed, 4 Jan 2017 12:00:00 -0500
Subject: [PATCH 1/4] fixup! Add SUBSCRIPTION catalog and DDL

Add get_subscription_name()
---
 src/backend/catalog/objectaddress.c   | 18 +-
 src/backend/catalog/pg_subscription.c | 23 +++
 src/include/catalog/pg_subscription.h |  1 +
 3 files changed, 29 insertions(+), 13 deletions(-)

diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index 936adfc42d..b77fbf341d 100644
--- a/src/backend/catalog/objectaddress.c
+++ b/src/backend/catalog/objectaddress.c
@@ -3341,16 +3341,8 @@ getObjectDescription(const ObjectAddress *object)
 
 		case OCLASS_SUBSCRIPTION:
 			{
-HeapTuple	tup;
-
-tup = SearchSysCache1(SUBSCRIPTIONOID,
-	  ObjectIdGetDatum(object->objectId));
-if (!HeapTupleIsValid(tup))
-	elog(ERROR, "cache lookup failed for subscription %u",
-		 object->objectId);
 appendStringInfo(&buffer, _("subscription %s"),
-   NameStr(((Form_pg_subscription) GETSTRUCT(tup))->subname));
-ReleaseSysCache(tup);
+ get_subscription_name(object->objectId));
 break;
 			}
 
@@ -4865,13 +4857,13 @@ getObjectIdentityParts(const ObjectAddress *object,
 
 		case OCLASS_SUBSCRIPTION:
 			{
-Subscription *sub;
+char	   *subname;
 
-sub = GetSubscription(object->objectId, false);
+subname = get_subscription_name(object->objectId);
 appendStringInfoString(&buffer,
-	   quote_identifier(sub->name));
+	   quote_identifier(subname));
 if (objname)
-	*objname = list_make1(pstrdup(sub->name));
+	*objname = list_make1(subname);
 break;
 			}
 
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 2c58cc653e..c15e9b5d74 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -173,6 +173,29 @@ get_subscription_oid(const char *subname, bool missing_ok)
 }
 
 /*
+ * get_subscription_name - given a subscription OID, look up the name
+ */
+char *
+get_subscription_name(Oid subid)
+{
+	HeapTuple		tup;
+	char		   *subname;
+	Form_pg_subscription subform;
+
+	tup = SearchSysCache1(SUBSCRIPTIONOID, ObjectIdGetDatum(subid));
+
+	if (!HeapTupleIsValid(tup))
+		elog(ERROR, "cache lookup failed for subscription %u", subid);
+
+	subform = (Form_pg_subscription) GETSTRUCT(tup);
+	subname = pstrdup(NameStr(subform->subname));
+
+	ReleaseSysCache(tup);
+
+	return subname;
+}
+
+/*
  * Convert text array to list of strings.
  *
  * Note: the resulting list of strings is pallocated here.
diff --git a/src/include/catalog/pg_subscription.h b/src/include/catalog/pg_subscription.h
index 855ed360fb..645541c9b4 100644
--- a/src/include/catalog/pg_subscription.h
+++ b/src/include/catalog/pg_subscription.h
@@ -70,6 +70,7 @@ typedef struct Subscription
 extern Subscription *GetSubscription(Oid subid, bool missing_ok);
 extern void FreeSubscription(Subscription *sub);
 extern Oid get_subscription_oid(const char *subname, bool missing_ok);
+extern char *get_subscription_name(Oid subid);
 
 extern int CountDBSubscriptions(Oid dbid);
 
-- 
2.11.0

From f5f5d6ae0fc627f0fbb3ca7497339c4cd77ab8b7 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Wed, 4 Jan 2017 12:00:00 -0500
Subject: [PATCH 2/4] fixup! Add SUBSCRIPTION catalog and DDL

Remove ApplyLauncherWakeupAtCommit() call for now.
---
 src/backend/commands/subscriptioncmds.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index a6679879db..9dbd6d4c59 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -359,8 +359,6 @@ CreateSubscription(CreateSubscriptionStmt *stmt)
 
 	InvokeObjectPostCreateHook(SubscriptionRelationId, subid, 0);
 
-	ApplyLauncherWakeupAtCommit();
-
 	return myself;
 }
 

Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Pavel Stehule
2017-01-04 17:58 GMT+01:00 Fabien COELHO :

>
>  See attached scripts for instance.
>>>
>>
>> Your test shows so SET SESSION has not transactional behaviour - the
>> transactions fails, but the value is not reverted to NULL.
>>
>
> There are *two* function calls, the first fails and the second succeeds.
> Here is the trace with some comments:
>
>  [...]
>
>  ## SET SESSION x.x = 'null';
>  SET
>  -- previous has set x.x = 'null'
>
>  ## SELECT setupSecurityContext(3);
>  -- first setup... function call
>  NOTICE:  SET secured = FALSE
>  NOTICE:  SET secured = TRUE
>  -- there is a SET to 'ok' just after this print
>  -- at the end the transaction fails:
>  ERROR:  insert or update on table "log" violates foreign key constraint
> "log_sid_fkey"
>  DETAIL:  Key (sid)=(3) is not present in table "stuff".
>  -- no result is displayed from the SELECT
>
>  ## SHOW x.x;
>  null
>  -- the value is the initial value, it has been reverted
>
>  ## SELECT setupSecurityContext(2);
>  -- second setup... function call
>  NOTICE:  SET secured = FALSE
>  NOTICE:  SET secured = TRUE
>  -- trues is displayed, the function succeeded
>   t
>
>  ## SHOW x.x;
>  ok
>  -- the new value is shown


OK, understood.

The logic depends on transactions and on nesting level (nesting doesn't
depend on transactions alone)

/*
 * Do GUC processing at transaction or subtransaction commit or abort, or
 * when exiting a function that has proconfig settings, or when undoing a
 * transient assignment to some GUC variables.  (The name is thus a bit of
 * a misnomer; perhaps it should be ExitGUCNestLevel or some such.)
 * During abort, we discard all GUC settings that were applied at nesting
 * levels >= nestLevel.  nestLevel == 1 corresponds to the main transaction.
 */
void
AtEOXact_GUC(bool isCommit, int nestLevel)

Probably we should use CallXactCallbacks instead - then there is no
performance impact when there are no transactional variables.

Regards

Pavel



>
>
> --
> Fabien.
>


Re: [HACKERS] Cluster wide option to control symbol case folding

2017-01-04 Thread Robert Haas
On Tue, Jan 3, 2017 at 6:45 PM, Lewis, Ian (Microstar Laboratories)
 wrote:
> One idea, which would likely be harder to implement on the server, but
> that would have less impact on third party tools and libraries, would be
> to configure case folding on a session basis. There would have to be
> some means to configure a session for the case folding your application
> wants to see. And, the general default would have to be the current
> PostgreSQL behavior so that an application that was designed for current
> behavior would never see a change.

Where you get into trouble there is that you might run CREATE
EXTENSION from that session, or call an SQL function defined in a
session with different settings (perhaps a function created by an
extension).  This is not unlike various problems we've had over the
years with search_path, which really ought to be lexically scoped but
is in fact dynamically scoped.
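
The search_path point can be illustrated: because the setting is
dynamically scoped, a function resolves unqualified names against whatever
path the calling session has, and pinning a setting onto the function is
the usual mitigation. A sketch with invented names (schema myschema,
table t):

```sql
-- f() resolves the unqualified name 't' using the *caller's* search_path,
-- so its behaviour depends on the session that invokes it:
CREATE FUNCTION f() RETURNS bigint
    LANGUAGE sql AS 'SELECT count(*) FROM t';

-- Attaching the setting to the function approximates lexical scoping:
ALTER FUNCTION f() SET search_path = myschema, pg_catalog;
```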

Again, I'm not trying to rain down fire and brimstone on your idea
here and I clearly see the utility of it.  I also think it's great
that you've engaged in the discussion in the way that you have.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Pavel Stehule
2017-01-04 17:30 GMT+01:00 Fabien COELHO :

>
>
>> Now we can emulate this feature with dblink, and there are patches in the
>> commitfest based on background workers, and the emulation will be cheaper.
>>
>
> I had not noticed that "background session" proposal. That's definitely an
> interesting feature to have for some use cases. Dblink implies a new
> connection I think, pretty expensive. I wish that the proposal would be
> language independent, like DB2's simple AUTONOMOUS declaration on a function.
> It seems quite heavily linked to PL/pgSQL right now.


Maybe a year ago there was a discussion here about autonomous transaction
design - Robert proposed transaction scope - something like 'BEGIN
AUTONOMOUS' - I proposed function level. The syntax is not that important -
this functionality is interesting - mainly for logging to tables - but
there are risks - it again sits on the border between transactional and
untransactional - autonomous transactions are "untransactional" from the
outer transaction's perspective - so some unwanted artefacts or risks are
possible there - and application design should respect that.

Regards

Pavel





>
>
> --
> Fabien.
>


Re: [HACKERS] [PATCH] PostgresNode.pm enhancements, pg_lsn helper, and some more recovery tests

2017-01-04 Thread Simon Riggs
On 4 January 2017 at 08:16, Craig Ringer  wrote:
> On 4 January 2017 at 12:39, Craig Ringer  wrote:
>
>> To keep things together, I've followed up on the logical decoding on
>> standby thread that now incorporates all these patches.
>
> Attached are the two patches discussed upthread, in their
> proposed-for-commit form, as requested by Simon.
>
> These correspond to patches 0001 and 0004 of the logical decoding on
> standby series at
> https://www.postgresql.org/message-id/CAMsr+YEzC=-+eV09A=ra150fjtkmtqt5q70piqbwytbor3c...@mail.gmail.com
> .
>
> This subset is tracked as https://commitfest.postgresql.org/12/883/ .
>
> When committed I will update the decoding on standby series to omit
> these pre-requisite patches.

Committed, thanks.

-- 
Simon Riggs   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Fabien COELHO



 See attached scripts for instance.


Your test shows that SET SESSION does not have transactional behaviour - the
transaction fails, but the value is not reverted to NULL.


There are *two* function calls, the first fails and the second succeeds. 
Here is the trace with some comments:


 [...]

 ## SET SESSION x.x = 'null';
 SET
 -- previous has set x.x = 'null'

 ## SELECT setupSecurityContext(3);
 -- first setup... function call
 NOTICE:  SET secured = FALSE
 NOTICE:  SET secured = TRUE
 -- there is a SET to 'ok' just after this print
 -- at the end the transaction fails:
 ERROR:  insert or update on table "log" violates foreign key constraint 
"log_sid_fkey"
 DETAIL:  Key (sid)=(3) is not present in table "stuff".
 -- no result is displayed from the SELECT

 ## SHOW x.x;
 null
 -- the value is the initial value, it has been reverted

 ## SELECT setupSecurityContext(2);
 -- second setup... function call
 NOTICE:  SET secured = FALSE
 NOTICE:  SET secured = TRUE
 -- trues is displayed, the function succeeded
  t

 ## SHOW x.x;
 ok
 -- the new value is shown

--
Fabien.




Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP

2017-01-04 Thread Stephen Frost
* Peter Eisentraut (peter.eisentr...@2ndquadrant.com) wrote:
> On 1/4/17 10:57 AM, Tom Lane wrote:
> > I still maintain that the existing solution for passphrases is useless,
> > but in the interest of removing objections to the current patch, I'll
> > go make that happen.
> 
> Sounds good.

Agreed, thanks.

> Looking around briefly (e.g., Apache, nginx), the standard approach
> appears to be a configuration setting that gets the password from an
> external program or file.  (Although the default still appears to be to
> get from tty.)

Right, the MIT Kerberos daemon will definitely prompt for the passphrase
for the master key on the terminal also.  They might also have a way to
get it from a program now, not sure, it's been a while, but it was a
requirement from NIST 800-53 to not have unencrypted keys on the
filesystem and I had to address that for the MIT Kerberos master key and
the private keys for various SSL-using services.

> systemd has support for getting passwords to services without tty.

Oh, that's interesting, I wasn't aware of that.

> So if someone is interested, there is some room for enhancement here.

Agreed.

Thanks!

Stephen




Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP

2017-01-04 Thread Peter Eisentraut
On 1/4/17 10:57 AM, Tom Lane wrote:
> I still maintain that the existing solution for passphrases is useless,
> but in the interest of removing objections to the current patch, I'll
> go make that happen.

Sounds good.

Looking around briefly (e.g., Apache, nginx), the standard approach
appears to be a configuration setting that gets the password from an
external program or file.  (Although the default still appears to be to
get from tty.)

systemd has support for getting passwords to services without tty.

So if someone is interested, there is some room for enhancement here.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Pavel Stehule
>
>> Um, what? No, not at all.
>>
>> GUCs are scoped, but not transactional, [...]
>>
>
> The documentation is very scarce, so I have tested it.
>
> All tests I have done with commit & rollback on session variables (SET
> SESSION) have shown a clean transactional behavior, with the value reverted
> on ROLLBACK, whether intentional or automatic, or the new value set on
> COMMIT. See attached scripts for instance.
>

Your test shows that SET SESSION does not have transactional behaviour - the
transaction fails, but the value is not reverted to NULL.

It is a good example of an antipattern for this routine type :)

Pavel



>
>
> Fabien.


Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Fabien COELHO




Now we can emulate this feature with dblink, and there are patches in the
commitfest based on background workers, and the emulation will be cheaper.


I had not noticed that "background session" proposal. That's definitely an 
interesting feature to have for some use cases. Dblink implies a new 
connection I think, pretty expensive. I wish that the proposal would be 
language independent, like DB2's simple AUTONOMOUS declaration on a 
function. It seems quite heavily linked to PL/pgSQL right now.


--
Fabien.




Re: [HACKERS] proposal: session server side variables

2017-01-04 Thread Fabien COELHO


Hello,

The security-related use-case you have presented stores the status of 
the verification in a variable. If the variable is untransactional, 
then it has been shown that the variable status may say ok while the 
verification has actually failed.


That's only a concern if the setting xact performs writes.


Sure. However I do not see the point of proposing a feature which works 
only sometimes, on the condition that the security setup does NOT involve 
storing data in the database (!), otherwise it may be insecure in some 
cases, sorry mates.


That does not look too serious, especially if the use-case concern is all 
about security.



If it's a read-only lookup, all it has to do is set the variable last.


Yep, I guess it would probably work for read-only transactions.


I agree that transactional variables whose value assignments come into
effect on commit would be useful. Like we have for NOTIFY. I do not
agree that they are _necessary_ such that a feature is not useful
without them. Nor do I agree that they are necessary for security
related use.


The feature would be clearly misleading without transactional support, 
because people would use it with false expectation that it works securely, 
which is not the case.


Moreover, there is no special cost in implementing transactional 
behaviour for session variables, as it is already done by pg. It can 
probably be reused.


Um, what? No, not at all.

GUCs are scoped, but not transactional, [...]


The documentation is very scarce, so I have tested it.

All tests I have done with commit & rollback on session variables (SET 
SESSION) have shown a clean transactional behavior, with the value 
reverted on ROLLBACK, whether intentional or automatic, or the new value 
set on COMMIT. See attached scripts for instance.


LOCAL variables are a different thing, they just disappear at the end of 
the session, it is more a scoping thing.



We'd _definitely_ need to be able to declare such variables, so we
could specify their ON COMMIT behaviour (which GUCs don't have)


Hmmm. We do not have to declare any ON COMMIT behaviour of TABLES, they
are just transactional.


and define their scope (like we do for GUCs).


I'm fine with defining scopes.

An alternative is to implement sub (nested) transactions, like Oracle 
and MS SQL Server... but that would be quite some work.


What? We have those already, see SAVEPOINT and ROLLBACK TO SAVEPOINT.


No, that is not what I meant.


Unless you mean autonomous transactions, which are not really nested,


Yes, that is why I wrote "nested" above.


they're closer to having the outer xact suspend while another xact
works, then resuming the outer xact.


Yep. The point is that you can test the success of the nested transaction 
before setting the status.
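
The pattern of testing the nested work before setting the status can be
written today in PL/pgSQL, where a BEGIN ... EXCEPTION block runs under an
implicit savepoint; all names below are illustrative, not taken from the
attached scripts:

```sql
CREATE FUNCTION setup_context(uid int) RETURNS boolean
LANGUAGE plpgsql AS $$
BEGIN
    BEGIN
        -- the write that may fail, run under an implicit savepoint
        INSERT INTO log(sid) VALUES (uid);
    EXCEPTION WHEN OTHERS THEN
        -- nested work was rolled back; only now record the status
        PERFORM set_config('x.secured', 'false', false);
        RETURN false;
    END;
    PERFORM set_config('x.secured', 'ok', false);
    RETURN true;
END $$;
```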


--
Fabien.

deferred.sql
Description: application/sql



Re: [HACKERS] merging some features from plpgsql2 project

2017-01-04 Thread Pavel Stehule
>> > SELECT UNCACHED t.a, t.b FROM INTO a,b;
>>
>> Yeah -- this is pretty ugly admittedly.  Maybe control directive is
>> ok, as long as you can set it mid function?
>>
>
> Ada uses the PRAGMA keyword for this purpose - it is used for everything
> in Ada - cycle iteration optimization, ... The scope can be a statement,
> block, or procedure.
>

Pragmas are used for changing (enforcing) behaviour. There are pragmas
ada_05, ada_2012, ...

>
> so something like
>
> BEGIN
>   PRAGMA uncached_plans;
>   SELECT ...
>   ..
> END;
>
> But it should be verified by some PL/SQL or Ada experts
>
> Regards
>
>
>
>
>
>>
>> merlin
>>
>
>


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-04 Thread Pavel Stehule
2017-01-04 16:49 GMT+01:00 Merlin Moncure :

> On Tue, Jan 3, 2017 at 2:15 PM, Pavel Stehule 
> wrote:
> >
> >
> > 2017-01-03 20:54 GMT+01:00 Merlin Moncure :
> >>
> >> On Tue, Jan 3, 2017 at 9:58 AM, Pavel Stehule 
> >> wrote:
> >> > 2017-01-03 16:23 GMT+01:00 Merlin Moncure :
> >> >> *) Would also like to have a FINALLY block
> >> >
> >> > What you can do there?
> >>
> >> This is syntax sugar so you don't need second begin/end/exception
> >> block or duplicated code.  It separates error handling from cleanup.
> >>
> >> BEGIN
> >>   PERFORM dblink_connect(...
> >>   
> >> EXCEPTION WHEN OTHERS THEN
> >>   
> >> FINALLY
> >>   PERFORM dblink_disconnect(...
> >> END;
> >
> >
> > Does know somebody this pattern from Ada or PL/SQL?
>
> I guess probably not.  It's a standard pattern in modern EH languages
> (for example, https://msdn.microsoft.com/en-us/library/dszsf989.aspx).
>
> >>
> >> >> *) Some user visible mechanic other than forcing SQL through EXECUTE
> >> >> to be able to control plan caching would be useful.
> >> >
> >> > fully agree.
> >> >
> >> > Have you some ideas?
> >> >
> >> > What about plpgsql option (function scope) -- WITHOUT-PLAN-CACHE - any
> >> > non
> >> > trivial plans will not be cached - and evaluated as parametrized query
> >> > only.
> >>
> >> I have slight preference for syntax marker for each query, similar to
> >> INTO.  Maybe 'UNCACHED'?
> >
> >
> I don't have a clear opinion - the statement level is nice, but what
> about readability?
> >
> > SELECT UNCACHED t.a, t.b FROM INTO a,b;
>
> Yeah -- this is pretty ugly admittedly.  Maybe control directive is
> ok, as long as you can set it mid function?
>

Ada uses the PRAGMA keyword for this purpose - it is used for everything in
Ada - cycle iteration optimization, ... The scope can be a statement, block,
or procedure.

so something like

BEGIN
  PRAGMA uncached_plans;
  SELECT ...
  ..
END;

But it should be verified by some PL/SQL or Ada experts
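
For reference, the only statement-level control available today is routing
the query through EXECUTE, which replans on every call; t, a, and the
function name are placeholder names:

```sql
CREATE FUNCTION lookup(v int) RETURNS bigint
LANGUAGE plpgsql AS $$
DECLARE n bigint;
BEGIN
    -- EXECUTE bypasses plpgsql's plan cache entirely; a PRAGMA (or an
    -- UNCACHED marker) would express the same intent declaratively.
    EXECUTE 'SELECT count(*) FROM t WHERE a = $1' INTO n USING v;
    RETURN n;
END $$;
```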

Regards





>
> merlin
>


Re: [HACKERS] Proposal : Parallel Merge Join

2017-01-04 Thread Dilip Kumar
On Wed, Jan 4, 2017 at 12:02 PM, Amit Kapila  wrote:
> Review comments:
> 1.
> + bool is_partial);
> +
>
> Seems additional new line is not required.
Fixed
>
> 2.
> + * try_partial_mergejoin_path
> + *  Consider a partial merge join path; if it appears useful,
> push it into
> + *  the joinrel's pathlist via add_path().
> + */
>
> I think in above comment, you should say ".. joinrel's
> partial_pathlist via add_partial_path()"
Fixed
>
> 3.
> + /*
> + * See comments in try_partial_nestloop_path().
> + */
>
> This same code exists in try_partial_nestloop_path() and
> try_partial_hashjoin_path() with minor difference of code in
> try_partial_hashjoin_path().  Can't we write a separate inline
> function for this code and call from all the three places.

It's a small check; is it OK to make it a separate function? I have
also observed a similar kind of duplicate code in try_mergejoin_path and
try_hashjoin_path. However, if you think that it would be better to
move that check to an inline function, I can submit a separate patch in
this thread as a refactoring patch.

I observed one more issue: in the case of a partial merge join,
inner_paramrels should be empty, not a subset of the outer rel, so I fixed
that in the attached version.  The code in the previous patch would also
not create any problem, because if the inner path is parameterized by the
outer rel it will not create any merge join path.

>
> 4.
> + /*
> + * See comments in try_nestloop_path().
> + */
> + initial_cost_mergejoin(root, &workspace, jointype, mergeclauses,
> +   outer_path, inner_path,
> +   outersortkeys, innersortkeys,
> +   extra->sjinfo);
>
> I think in above comments, you want to say try_partial_nestloop_path().

Fixed
> 5.
> - if (joinrel->consider_parallel && nestjoinOK &&
> - save_jointype != JOIN_UNIQUE_OUTER &&
> - bms_is_empty(joinrel->lateral_relids))
> + if (!joinrel->consider_parallel ||
> + save_jointype == JOIN_UNIQUE_OUTER ||
> + !bms_is_empty(joinrel->lateral_relids) ||
> + jointype == JOIN_FULL ||
> + jointype == JOIN_RIGHT)
> + return;
> +
> + if (nestjoinOK)
>
> Here, we only want to create partial paths when
> outerrel->partial_pathlist is not NILL, so I think you can include
> that check as well.
Done.

> Another point to note is that in HEAD, we are not
> checking for jointype as JOIN_RIGHT or JOIN_FULL for considering
> parallel nestloop paths, whereas with your patch, it will do those
> checks, is it a bug of HEAD or is there any other reason for including
> those checks for parallel nestloop paths?

Actually, we cannot create a parallel join path in the case of JOIN_RIGHT
or JOIN_FULL: nestjoinOK is marked false for those join types, so they were
not considered in the base code either. Now we have merge join as well, so
it's better to keep a common check.
>
> 6.
> + /* Can't generate mergejoin path if inner rel is parameterized by outer */
> + if (inner_cheapest_total != NULL)
> + {
> + ListCell   *lc1;
> +
> + /* generate merge join path for each partial outer path */
> + foreach(lc1, outerrel->partial_pathlist)
> + {
> + Path   *outerpath = (Path *) lfirst(lc1);
> + List   *merge_pathkeys;
> +
> + /*
> + * Figure out what useful ordering any paths we create
> + * will have.
> + */
> + merge_pathkeys = build_join_pathkeys(root, joinrel, jointype,
> +   outerpath->pathkeys);
> +
> + generate_mergejoin_paths(root, joinrel, innerrel, outerpath,
> + save_jointype, extra, false,
> + inner_cheapest_total, merge_pathkeys,
> + true);
> + }
> +
> + }
>
> Won't it be better if you encapsulate the above chunk of code in
> function consider_parallel_merjejoin() similar to
> consider_parallel_nestloop()?  I think that way code will look
> cleaner.

Done
>
> 7.
> + /*
> + * Figure out what useful ordering any paths we create
> + * will have.
> + */
> + merge_pathkeys = build_join_pathkeys(root, joinrel, jointype,
> +   outerpath->pathkeys);
>
> indentation problem with variable outerpath->pathkeys.

Fixed
>
> 8.
> - try_mergejoin_path(root,
> -   joinrel,
> -   outerpath,
> -   inner_cheapest_total,
> -   merge_pathkeys,
> -   mergeclauses,
> -   NIL,
> -   innersortkeys,
> -   jointype,
> -   extra);
> + if (!is_partial)
> + try_mergejoin_path(root,
> + joinrel,
> + outerpath,
> + inner_cheapest_total,
> + merge_pathkeys,
> + mergeclauses,
> + NIL,
> + innersortkeys,
> + jointype,
> + extra);
> +
> + /* Generate partial path if inner is parallel safe. */
> + else if (inner_cheapest_total->parallel_safe)
> + try_partial_mergejoin_path(root,
> + joinrel,
> + outerpath,
> + inner_cheapest_total,
> + merge_pathkeys,
> + mergeclauses,
> + NIL,
> + innersortkeys,
> + jointype,
> + extra);
>
> In above code indentation is broken, similarly, there is another code
> in a patch where it is broken, try using pgindent.

Fixed with pgindent.


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


parallel_mergejoin_v4.patch
Description: Binary data


Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP

2017-01-04 Thread Tom Lane
Peter Eisentraut  writes:
> On 1/4/17 10:26 AM, Tom Lane wrote:
>> How will you know whether there's a pass phrase?

> One could register a password callback that remembers whether it was called.

Hmm ... actually, we don't even need to work that hard.  If we simply
use the callback that's there now, but only during reloads not server
start, then we get the desired behavior.  Reloads will fail because
the wrong passphrase was returned by the callback, and we'll keep the
current SSL state.  It would probably be worth tweaking things to minimize
the amount of log spam that you get from that; but it would work, for
values of "work" similar to what was there before.

I still maintain that the existing solution for passphrases is useless,
but in the interest of removing objections to the current patch, I'll
go make that happen.

regards, tom lane




Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP

2017-01-04 Thread Peter Eisentraut
On 1/4/17 10:26 AM, Tom Lane wrote:
> Andreas Karlsson  writes:
>> Sorry, I was very unclear. I meant refusing the reload the SSL context 
>> if there is a pass phrase, but that the rest of the config will be 
>> reloaded just fine. This will lead to some log spam on every SIGHUP for 
>> people with a pass phrase but should otherwise work as before.
> 
> How will you know whether there's a pass phrase?

One could register a password callback that remembers whether it was called.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] merging some features from plpgsql2 project

2017-01-04 Thread Merlin Moncure
On Tue, Jan 3, 2017 at 2:15 PM, Pavel Stehule  wrote:
>
>
> 2017-01-03 20:54 GMT+01:00 Merlin Moncure :
>>
>> On Tue, Jan 3, 2017 at 9:58 AM, Pavel Stehule 
>> wrote:
>> > 2017-01-03 16:23 GMT+01:00 Merlin Moncure :
>> >> *) Would also like to have a FINALLY block
>> >
>> > What you can do there?
>>
>> This is syntax sugar so you don't need second begin/end/exception
>> block or duplicated code.  It separates error handling from cleanup.
>>
>> BEGIN
>>   PERFORM dblink_connect(...
>>   
>> EXCEPTION WHEN OTHERS THEN
>>   
>> FINALLY
>>   PERFORM dblink_disconnect(...
>> END;
>
>
> Does know somebody this pattern from Ada or PL/SQL?

I guess probably not.  It's a standard pattern in modern EH languages
(for example, https://msdn.microsoft.com/en-us/library/dszsf989.aspx).
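
Without FINALLY, the cleanup call has to appear twice (or in a second
wrapper block), which is exactly the duplication the proposed syntax would
remove; a sketch of the present-day workaround ('conn' and the connection
string are illustrative):

```sql
BEGIN
    PERFORM dblink_connect('conn', 'dbname=postgres');
    -- ... work ...
    PERFORM dblink_disconnect('conn');      -- cleanup on success
EXCEPTION WHEN OTHERS THEN
    PERFORM dblink_disconnect('conn');      -- cleanup duplicated for errors
    RAISE;
END;
```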

>>
>> >> *) Some user visible mechanic other than forcing SQL through EXECUTE
>> >> to be able to control plan caching would be useful.
>> >
>> > fully agree.
>> >
>> > Have you some ideas?
>> >
>> > What about plpgsql option (function scope) -- WITHOUT-PLAN-CACHE - any
>> > non
>> > trivial plans will not be cached - and evaluated as parametrized query
>> > only.
>>
>> I have slight preference for syntax marker for each query, similar to
>> INTO.  Maybe 'UNCACHED'?
>
>
> I don't have a clear opinion - the statement level is nice, but what about readability?
>
> SELECT UNCACHED t.a, t.b FROM INTO a,b;

Yeah -- this is pretty ugly admittedly.  Maybe control directive is
ok, as long as you can set it mid function?

merlin




Re: [HACKERS] emergency outage requiring database restart

2017-01-04 Thread Merlin Moncure
On Tue, Jan 3, 2017 at 1:05 PM, Peter Eisentraut
 wrote:
> On 11/7/16 5:31 PM, Merlin Moncure wrote:
>> Regardless, it seems like you might be on to something, and I'm
>> inclined to patch your change, test it, and roll it out to production.
>> If it helps or at least narrows the problem down, we ought to give it
>> consideration for inclusion (unless someone else can think of a good
>> reason not to do that, heh!).
>
> Any results yet?

Not yet unfortunately.  I compiled the server with the change, but was
not able to get $libdir working so that I could just do a binary swap
over my pgdg-compiled package.  If anyone has some pointers on how to
do that, I'd appreciate it.

Still getting checksum failures.   Over the last 30 days, I see the
following.  Since enabling checksums FWICT none of the damage is
permanent and rolls back with the transaction.   So creepy!

[root@rcdylsdbmpf001 pg_log]# cat *.log | grep "page verification failed"
2016-12-05 10:17:48 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 61797 but expected 61798
2016-12-05 11:15:31 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 37750 but expected 37749
2016-12-05 11:15:58 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 44483 but expected 44482
2016-12-05 11:16:33 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 58926 but expected 58925
2016-12-05 11:17:08 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 38527 but expected 38528
2016-12-05 11:18:34 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 61932 but expected 61933
2016-12-05 11:18:55 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 23757 but expected 23758
2016-12-05 12:13:48 CST [rms@mpf2]: WARNING:  page verification
failed, calculated checksum 44192 but expected 44225 at character 417
2016-12-08 14:18:37 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 36083 but expected 36082
2016-12-08 15:52:31 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 63414 but expected 63415 at
character 1096
2016-12-09 09:12:21 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 25781 but expected 25780
2016-12-09 09:13:20 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 63043 but expected 63044 at
character 4230
2016-12-12 08:57:45 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 31775 but expected 31771
2016-12-13 09:47:11 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 40802 but expected 40806
2016-12-15 12:49:04 CST [rms@mpf2]: WARNING:  page verification
failed, calculated checksum 11625 but expected 11592 at character 417
2016-12-15 12:51:08 CST [rms@mpf2]: WARNING:  page verification
failed, calculated checksum 51017 but expected 51018
2016-12-15 12:52:36 CST [rms@mpf2]: WARNING:  page verification
failed, calculated checksum 51017 but expected 51018 at character 417
2016-12-16 12:16:31 CST [rms@mpf2]: WARNING:  page verification
failed, calculated checksum 23580 but expected 23576
2016-12-20 13:59:33 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 45273 but expected 45285
2016-12-20 14:00:22 CST [postgres@castaging]: WARNING:  page
verification failed, calculated checksum 10524 but expected 10525
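Parsing those WARNING lines mechanically makes the pattern easier to see: most calculated/expected pairs differ by only a few units. A quick hypothetical helper (the function name and log sample are mine):

```python
import re

warning = re.compile(r"calculated checksum (\d+) but expected (\d+)")

def checksum_diffs(lines):
    """Pull (calculated, expected) pairs out of page-verification WARNINGs."""
    return [(int(c), int(e)) for c, e in
            (m.groups() for m in map(warning.search, lines) if m)]

log = [
    "2016-12-05 10:17:48 CST [postgres@castaging]: WARNING:  page "
    "verification failed, calculated checksum 61797 but expected 61798",
    "2016-12-12 08:57:45 CST [postgres@castaging]: WARNING:  page "
    "verification failed, calculated checksum 31775 but expected 31771",
]
diffs = [c - e for c, e in checksum_diffs(log)]
print(diffs)   # [-1, 4] -- the mismatches above are all off by only a few
```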

Note the second database, 'mpf2'.  This is a new development.  An example
of a query that is causing this:
2016-12-15 12:51:08 CST [rms@mpf2]: WARNING:  page verification
failed, calculated checksum 51017 but expected 51018
2016-12-15 12:51:08 CST [rms@mpf2]: CONTEXT:  SQL statement "
  COPY (
SELECT 'DELETE FROM tblNAptCommonSample WHERE ReportPeriod = 201612;'
UNION ALL
SELECT format(
'INSERT INTO tblNAptCommonSample('
  'ReportPeriod, Period, AdjustmentType, PlanType, MSA, MSASubMkt, '
  'Sample, Occupancy, OccupancyChange, AverageRent,
AverageRentChange, RentSF, '
  'RentSFChange)'
'VALUES('
  '%s, %s, ''%s'', ''%s'', ''%s'', %s,'
  '%s, %s, %s, %s, %s, %s,'
  '%s)',
  ReportPeriod, Period, AdjustmentType, PlanType, MSA, MSASubMkt,
  c(Sample), c(Occupancy), c(OccupancyChange), c(AverageRent),
c(AverageRentChange), c(RentSF),
  c(RentSFChange))
FROM tblNAptCommonSample
  WHERE Period = 201612
  AND MSA != '5610'
UNION ALL
  SELECT 'go'
  ) TO '/tmp/tblnaptcommonsample.sql';
"
PL/pgSQL function writempf1history(integer) line 75 at EXECUTE


or this:
2016-12-15 12:52:36 CST [rms@mpf2]: WARNING:  page verification
failed, calculated checksum 51017 but expected 51018 at character 417
2016-12-15 12:52:36 CST [rms@mpf2]: QUERY:
  COPY (
SELECT 'DELETE FROM tbltwrexistingunits WHERE ReportPeriod = 

Re: [HACKERS] increasing the default WAL segment size

2017-01-04 Thread Robert Haas
On Wed, Jan 4, 2017 at 9:47 AM, Simon Riggs  wrote:
> On 4 January 2017 at 13:57, Robert Haas  wrote:
>> On Wed, Jan 4, 2017 at 3:05 AM, Simon Riggs  wrote:
>>> Strange response. Nothing has been assumed. I asked for tests and you
>>> provided measurements.
>>
>> Sure, of zero-filling a file with dd.  But I also pointed out that in
>> a real PostgreSQL cluster, the change could actually *reduce* latency.
>
> I think we are talking at cross purposes. We agree that the main
> change is useful, but it causes another problem which I can't see how
> you can characterize as reduced latency, based upon your own
> measurements.

Zero-filling files will take longer if the files are bigger.  That
will increase latency.  But we will also have fewer forced
end-of-segment syncs.  That will reduce latency.  Which effect is
bigger?
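That tradeoff can be sketched with a quick timing harness. This is not a benchmark of PostgreSQL itself, just a rough model in which every new segment is zero-filled and fsynced at creation time; the sizes and function names are assumptions of the sketch:

```python
import os
import tempfile
import time

def zero_fill(path, size, chunk=8192):
    """Zero-fill a file, then fsync it, roughly as WAL segment creation does."""
    with open(path, "wb") as f:
        buf = b"\0" * chunk
        for _ in range(size // chunk):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())

def simulate(total, segment):
    """Write `total` bytes of 'WAL' split into `segment`-sized files.

    Bigger segments mean a longer stall each time a new file is created
    (worse worst-case latency) but fewer end-of-segment syncs overall.
    Returns (number_of_segments, elapsed_seconds).
    """
    with tempfile.TemporaryDirectory() as d:
        start = time.perf_counter()
        nsegs = total // segment
        for i in range(nsegs):
            zero_fill(os.path.join(d, f"seg{i:04d}"), segment)
        return nsegs, time.perf_counter() - start

# Same total volume, two segment sizes: compare file count vs. total time.
for seg in (1 << 20, 4 << 20):
    nsegs, elapsed = simulate(total=8 << 20, segment=seg)
    print(f"{seg >> 20}MB segments: {nsegs} files, {elapsed:.3f}s total")
```

The per-file creation stall grows with the segment size, while the number of fsync-on-close events shrinks; which effect dominates depends on the storage.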

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP

2017-01-04 Thread Tom Lane
Andreas Karlsson  writes:
> Sorry, I was very unclear. I meant refusing the reload the SSL context 
> if there is a pass phrase, but that the rest of the config will be 
> reloaded just fine. This will lead to some log spam on every SIGHUP for 
> people with a pass phrase but should otherwise work as before.

How will you know whether there's a pass phrase?

regards, tom lane




Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP

2017-01-04 Thread Tom Lane
Stephen Frost  writes:
> Indeed, this is important functionality that people are using.

Who exactly are these people, and why haven't they complained about how
crappy the support is now?  I'm *completely* unconvinced by the argument
that the way it has worked in the past is an important feature that we
have to preserve.  It's an accident that it worked at all, and as far as
I can tell it didn't work very well.  Have you tried it?  On my machine,
it could not accept a passphrase at all unless I didn't detach the
postmaster from the terminal, which is entirely silly as a production
solution.

In short, I reject the above argument 100%.  I'm all for inventing
a solution in which passphrases work usefully, but don't tell me
that what we had was such a solution.

regards, tom lane




Re: [HACKERS] [PROPOSAL] Temporal query processing with range types

2017-01-04 Thread Peter Eisentraut
So I'm looking at this patch in the commit fest.  I have only a general
understanding of temporal query processing.

What this patch does is to add two new clauses for FROM-list items,
NORMALIZE and ALIGN, which reshuffle a set of ranges into a new list
that can then be aggregated more easily.  From the original message:

> For NORMALIZE the tuples' ranges need to be split into all sub-ranges
> according to all matching ranges of the second relation. For this we
> create a subquery that first joins one relation with the range
> boundaries of the other and then sorts the result. The executor
> function splits the ranges in a sweep-line based manner.
>
> For ALIGN the tuples' ranges must be split into all intersections and
> differences with the other relation according to the join condition.
> For this we create a subquery that first joins the two relations and
> then sorts the result. The executor function splits the ranges
> accordingly in a sweep-line based manner.
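The range-splitting step described above can be sketched in a few lines. This is a toy model over half-open integer ranges, with names of my own choosing, not code from the patch:

```python
def normalize(r, boundaries):
    """Split the half-open range r = (lo, hi) at every boundary point
    that falls strictly inside it, sweep-line style."""
    lo, hi = r
    points = [lo] + sorted(p for p in set(boundaries) if lo < p < hi) + [hi]
    # Adjacent points pair up into the sub-ranges of the normalized tuple.
    return list(zip(points, points[1:]))

# A tuple valid over [1, 10), split against matching ranges from the
# other relation whose boundaries fall at 3 and 7:
print(normalize((1, 10), {3, 7}))   # [(1, 3), (3, 7), (7, 10)]
```

Boundaries outside the tuple's own range are ignored, so a tuple with no overlapping counterparts passes through unchanged.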

So there isn't really temporal query processing as such here, only some
helpers that can make it easier.

I can see how those operations can be useful, but it would help if there
were a more formal definition to be able to check that further.

What I'm missing here is some references: existing implementations,
standards, documentation, research papers, alternative ideas, rejected
alternatives, etc.

Also, the submission is missing documentation and test cases.  There are
technical terms used in the code that I don't understand.

I think there are probably many interesting applications for normalizing
or otherwise adjusting ranges.  I'd like to see an overview and
consideration of other applications.

Ideally, I'd like to see these things implemented as some kind of
user-space construct, like an operator or function.  I think we'd need a
clearer definition of what it is they do before we can evaluate that.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP

2017-01-04 Thread Stephen Frost
* Andreas Karlsson (andr...@proxel.se) wrote:
> On 01/04/2017 04:14 PM, Stephen Frost wrote:
> >* Andreas Karlsson (andr...@proxel.se) wrote:
> >>A possible solution might be to only add the error throwing hook
> >>when loading certificates during SIGHUP (and at Windows) and to work
> >>as before on startup. Would that be an acceptable solution? I could
> >>write a patch for this if people are interested.
> >
> >I'm not sure I see how that's a solution..?  Wouldn't that mean that a
> >SIGHUP with an encrypted key would result in a failure?
> >
> >The solution, at least in my view, seems to be to say "sorry, we can't
> >reload the SSL stuff if you used a passphrase to unlock the key on
> >startup, you will have to perform a restart if you want the SSL bits to
> >be changed."
> 
> Sorry, I was very unclear. I meant refusing to reload the SSL
> context if there is a pass phrase, but that the rest of the config
> will be reloaded just fine. This will lead to some log spam on every
> SIGHUP for people with a pass phrase but should otherwise work as
> before.

Right, that sounds like it'd work for me, at least.

Thanks!

Stephen




Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP

2017-01-04 Thread Andreas Karlsson

On 01/04/2017 04:14 PM, Stephen Frost wrote:

* Andreas Karlsson (andr...@proxel.se) wrote:

A possible solution might be to only add the error throwing hook
when loading certificates during SIGHUP (and at Windows) and to work
as before on startup. Would that be an acceptable solution? I could
write a patch for this if people are interested.


I'm not sure I see how that's a solution..?  Wouldn't that mean that a
SIGHUP with an encrypted key would result in a failure?

The solution, at least in my view, seems to be to say "sorry, we can't
reload the SSL stuff if you used a passphrase to unlock the key on
startup, you will have to perform a restart if you want the SSL bits to
be changed."


Sorry, I was very unclear. I meant refusing to reload the SSL context 
if there is a pass phrase, but that the rest of the config will be 
reloaded just fine. This will lead to some log spam on every SIGHUP for 
people with a pass phrase but should otherwise work as before.


Andreas




Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP

2017-01-04 Thread Stephen Frost
* Andreas Karlsson (andr...@proxel.se) wrote:
> On 01/04/2017 03:48 PM, Magnus Hagander wrote:
> >On Wed, Jan 4, 2017 at 3:47 PM, Tom Lane wrote:
> >>It does not; what would be the point, if the key would be lost at
> >>SIGHUP?
> >
> >If we lost it, yes. But we could keep the old key around if it hasn't
> >changed, thus behave just like we did in <= 9.6.
> 
> That means storing the pass phrase in the memory of the postmaster,
> which does not sound like a terribly good idea to me, but I have
> never used keys with pass phrases for daemons so it might be a
> common solution which is acceptable by many.

I'm not sure I see that as a big deal - if you can access the
postmaster's memory then you can just pull out the actual private key
itself, no?  There's a bigger issue here though: what happens if the
user actually does change the passphrase on the key and we then reload?
What do we do then?

> >If any of those number of people want to step up and design/implement
> >a non-broken solution for passphrases, that'd be fine with me.  But
> >I would want to see something that's actually a credible solution,
> >allowing the postmaster to be started as a normal daemon.  And working
> >on Windows.
> >
> >Well, for all those people 9.6 worked significantly better... Because
> >they could reload *other* config parameters without failure.
> 
> A possible solution might be to only add the error throwing hook
> when loading certificates during SIGHUP (and at Windows) and to work
> as before on startup. Would that be an acceptable solution? I could
> write a patch for this if people are interested.

I'm not sure I see how that's a solution..?  Wouldn't that mean that a
SIGHUP with an encrypted key would result in a failure?

The solution, at least in my view, seems to be to say "sorry, we can't
reload the SSL stuff if you used a passphrase to unlock the key on
startup, you will have to perform a restart if you want the SSL bits to
be changed."

No, I've not looked at what that would require in terms of code.

Thanks!

Stephen




Re: [HACKERS] UNDO and in-place update

2017-01-04 Thread Dilip Kumar
On Wed, Jan 4, 2017 at 4:35 PM, Amit Kapila  wrote:
> In this new system, I
> think we can't remove undo entries of heap page till we clear
> corresponding index entries. I think we need to somehow collect the
> old values from undo corresponding to index and then scan the index
> remove the index entry and after that corresponding undo entry can be
> removed.

Do we really need to keep undo for the heap until the index entry is
removed? IIUC, we anyway need to revalidate the index key against the
heap tuple. What I am trying to say is that if we no longer need the
UNDO for the heap page (e.g. because of a rollback) then we can apply
the UNDO and remove it. I agree that there will be multiple index
entries pointing to this tuple, but only one of them can pass the key
revalidation against the heap, right?
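A toy model of that argument, with entirely hypothetical structures just to illustrate key revalidation:

```python
# Heap tuple updated in place: it currently holds key 'b', but a stale
# index entry for its old key 'a' still points at the same TID.
heap = {("blk1", 0): {"key": "b"}}          # tid -> current tuple
index = [("a", ("blk1", 0)),                # stale entry from before the update
         ("b", ("blk1", 0))]                # entry matching the current value

def revalidate(entries, heap):
    """Keep only index entries whose key still matches the heap tuple."""
    return [(k, tid) for k, tid in entries if heap[tid]["key"] == k]

print(revalidate(index, heap))   # [('b', ('blk1', 0))] -- only one survives
```

Both entries point at the same tuple, but only the one whose key matches the current heap value passes revalidation, which is the premise of the question above.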


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP

2017-01-04 Thread Andreas Karlsson

On 01/04/2017 03:48 PM, Magnus Hagander wrote:

On Wed, Jan 4, 2017 at 3:47 PM, Tom Lane 
