Re: [HACKERS] PSQL commands: \quit_if, \quit_unless

2016-12-03 Thread Pavel Stehule
2016-12-03 8:16 GMT+01:00 Fabien COELHO :

>
> Hello,
>
> My guess is that something comparable to where pgbench is would be a
>> reasonable target --- not least because I think we should strive to
>> reduce unnecessary differences between psql and pgbench metalanguages.
>>
>> I'm not sure that I'm ready to propose that they should share the same
>> expression engine, but perhaps it's not a totally wacky idea.
>>
>
> I'm trying to summarize a proposal for a conditional structure:
>
>  - existing psql ":"-variables can be used, allowing anything from SQL
>   (eg querying about available objects, features, extensions,
>current settings...)
>
>  - target psql conditional syntax could be:
>
> \if 
>   ...
> \elif <...>
>   ...
> \else
>   ...
> \endif
>
>  - possible incremental implementation steps on this path:
>
>   (1) minimal condition and expression, compatible with
>   a possible future full-blown expression syntax
>
>  \if :variable
>  \if not :variable -- maybe \if ! :variable
>...
>  \endif
>
>   (2) add "\else"
>
>   (3) add "\elif ..." (or maybe "\elsif ..."?)
>
>   (4) add greater but limited expressions, compatible with a full blown
>   expression syntax (eg \if :var/const 
> :var/const)
>
>   (5) add full-blown  support for \if, which suggests that
>   it would also be available for \set
>
>
> Does this look okay, or does it need to be amended?
>
> A few comments:
>
> Given the experience with pgbench and the psql context, I do not think
> that it would really need to go beyond step 2 above, but I agree that I may
> be wrong and it is best to be prepared for that from the start. Given the
> complexity and effort involved with (5), it seems wise to wait for a
> clearer motivation with actual use-cases before going that far.
>
> In the benchmarking context, the point is to test performance for a
> client-server scenario, in which case the client has to do some things,
> thus needs minimal computation capabilities which were available early in
> pgbench (\setrandom, \set with one arithmetic operation...) because they
> were necessary. Probably \if ... would make sense in pgbench, so I'll think
> about it.


> In psql interactive context, conditions and expressions do not make sense
> as the user typing the command knows what they want, thus will do
> evaluations in their head to avoid typing stuff...
>
> In psql scripting context, conditions are likely to be about what to do
> with the database, and what I understand of the use-case which started this
> discussion is that it is about versions, settings, available objects,
> typically "if this is already installed, skip this part" or "if version
> beyond YYY, cannot install because of missing features" when installing and
> initializing an application. For this purpose, ISTM that the query is
> necessarily performed server-side, thus the actual need for a full-blown
> client-side expression is limited or void, although as already said being
> prepared is a good thing.
>

Some possibilities from pgbench could make sense in psql too - for example,
generating some random numbers from a range.  In the end we would use one
parser for both psql and pgbench.

I agree, so step 2 should be enough, and I accept that this leaves the door
open for any future enhancement.

We could implement some client-side boolean functions (similar to the pgbench
functions) that cover common tasks: version_less, version_greater,
user_exists, table_exists, index_exists, variable_exists, schema_exists,

Regards

Pavel

>
> --
> Fabien.
>
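As a concrete sketch of how the proposed structure might look in an install script (hypothetical syntax: \if/\endif do not exist yet, while \gset does; the extension and variable names are placeholders), combining a server-side query with a client-side condition as described above:

```sql
-- hypothetical sketch: "if this is already installed, skip this part"
SELECT count(*) = 0 AS need_install
  FROM pg_extension WHERE extname = 'my_ext' \gset
\if :need_install
  CREATE EXTENSION my_ext;
\endif
```

The point, per the discussion, is that the test itself is evaluated server-side; the client-side condition only consumes a boolean psql variable.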


Re: [HACKERS] Document how to set up TAP tests for Perl 5.8.8

2016-12-03 Thread Noah Misch
On Mon, Nov 21, 2016 at 11:58:34AM +0800, Craig Ringer wrote:
> Updated docs patch. Since configure checks for 5.8.0 that's what's
> specified. Anyone who wants to argue about the actual version we
> _should_ target can do so elsewhere, all I'm interested in is what we
> _do_ officially target so I can document this.

That's a proper separation of efforts; thanks.

On Mon, Nov 21, 2016 at 05:37:50PM +0900, Kyotaro HORIGUCHI wrote:
> At Mon, 21 Nov 2016 16:45:10 +0900, Michael Paquier wrote:
> > No objections to this version. It's a good idea to document that.
> 
> +1

Committed after re-filling paragraphs.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2016-12-03 Thread Tomas Vondra
On Sat, 2016-12-03 at 18:37 -0800, Peter Geoghegan wrote:
On Sat, Dec 3, 2016 at 5:45 PM, Alvaro Herrera wrote:
> > 
> > I don't think a patch must necessarily consider all possible uses
> > that
> > the new feature may have.  If we introduce parallel index creation,
> > that's great; if pg_restore doesn't start using it right away,
> > that's
> > okay.  You, or somebody else, can still patch it later.  The patch
> > is
> > still a step forward.
> While I agree, right now pg_restore will tend to use or not use
> parallelism for CREATE INDEX more or less by accident, based on
> whether or not pg_class.reltuples has already been set by something
> else (e.g., an earlier CREATE INDEX against the same table in the
> restoration). That seems unacceptable. I haven't just suppressed the
> use of parallel CREATE INDEX within pg_restore because that would be
> taking a position on something I have a hard time defending any
> particular position on. And so, I am slightly concerned about the
> entire ecosystem of tools that could implicitly use parallel CREATE
> INDEX, with undesirable consequences. Especially pg_restore.
> 
> It's not so much a hard question as it is an awkward one. I want to
> handle any possible objection about there being future compatibility
> issues with going one way or the other ("This paints us into a corner
> with..."). And, there is no existing, simple way for pg_restore and
> other tools to disable the use of parallelism due to the cost model
> automatically kicking in, while still allowing the proposed new index
> storage parameter ("parallel_workers") to force the use of
> parallelism, which seems like something that should happen. (I might
> have to add a new GUC like "enable_maintenance_parallelism", since
> "max_parallel_workers_maintenance = 0" disables parallelism no matter
> how it might be invoked).

I do share your concerns about unpredictable behavior - that's
particularly worrying for pg_restore, which may be used for time-
sensitive use cases (DR, migrations between versions), so unpredictable
changes in behavior / duration are unwelcome.

But isn't this more a deficiency in pg_restore than in CREATE INDEX?
The issue seems to be that the reltuples value may or may not get
updated, so maybe forcing ANALYZE (even very low statistics_target
values would do the trick, I think) would be a more appropriate solution?
Or maybe it's time to add at least some rudimentary statistics into the
dumps (the reltuples field seems like a good candidate).
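A minimal sketch of the ANALYZE idea (the table name is a placeholder; even the lowest statistics target samples enough to populate pg_class.reltuples):

```sql
-- after restoring a table, make reltuples predictable before CREATE INDEX
SET default_statistics_target = 1;  -- cheapest possible sampling
ANALYZE restored_table;             -- sets pg_class.reltuples
RESET default_statistics_target;
```

This would make the parallel CREATE INDEX cost model see a populated reltuples regardless of whether an earlier index build happened to set it.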

Trying to fix this by adding more GUCs seems a bit strange to me.

> 
> In general, I have a positive outlook on this patch, since it appears
> to compete well with similar implementations in other systems
> scalability-wise. It does what it's supposed to do.
> 

+1 to that

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2016-12-03 Thread Peter Geoghegan
On Sat, Dec 3, 2016 at 5:45 PM, Alvaro Herrera  wrote:
> I don't think a patch must necessarily consider all possible uses that
> the new feature may have.  If we introduce parallel index creation,
> that's great; if pg_restore doesn't start using it right away, that's
> okay.  You, or somebody else, can still patch it later.  The patch is
> still a step forward.

While I agree, right now pg_restore will tend to use or not use
parallelism for CREATE INDEX more or less by accident, based on
whether or not pg_class.reltuples has already been set by something
else (e.g., an earlier CREATE INDEX against the same table in the
restoration). That seems unacceptable. I haven't just suppressed the
use of parallel CREATE INDEX within pg_restore because that would be
taking a position on something I have a hard time defending any
particular position on. And so, I am slightly concerned about the
entire ecosystem of tools that could implicitly use parallel CREATE
INDEX, with undesirable consequences. Especially pg_restore.

It's not so much a hard question as it is an awkward one. I want to
handle any possible objection about there being future compatibility
issues with going one way or the other ("This paints us into a corner
with..."). And, there is no existing, simple way for pg_restore and
other tools to disable the use of parallelism due to the cost model
automatically kicking in, while still allowing the proposed new index
storage parameter ("parallel_workers") to force the use of
parallelism, which seems like something that should happen. (I might
have to add a new GUC like "enable_maintenance_parallelism", since
"max_parallel_workers_maintenance = 0" disables parallelism no matter
how it might be invoked).

In general, I have a positive outlook on this patch, since it appears
to compete well with similar implementations in other systems
scalability-wise. It does what it's supposed to do.

-- 
Peter Geoghegan




Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2016-12-03 Thread Alvaro Herrera
Peter Geoghegan wrote:
> On Mon, Nov 7, 2016 at 8:28 PM, Peter Geoghegan  wrote:
> > What do we need to teach pg_restore about parallel CREATE INDEX, if
> > anything at all? Could this be as simple as a blanket disabling of
> > parallelism for CREATE INDEX from pg_restore? Or, does it need to be
> > more sophisticated than that? I suppose that tools like reindexdb and
> > pgbench must be considered in a similar way.
> 
> I still haven't resolved this question, which seems like the most
> important outstanding question,

I don't think a patch must necessarily consider all possible uses that
the new feature may have.  If we introduce parallel index creation,
that's great; if pg_restore doesn't start using it right away, that's
okay.  You, or somebody else, can still patch it later.  The patch is
still a step forward.

-- 
Álvaro Herrera        https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2016-12-03 Thread Peter Geoghegan
On Mon, Nov 7, 2016 at 8:28 PM, Peter Geoghegan  wrote:
> What do we need to teach pg_restore about parallel CREATE INDEX, if
> anything at all? Could this be as simple as a blanket disabling of
> parallelism for CREATE INDEX from pg_restore? Or, does it need to be
> more sophisticated than that? I suppose that tools like reindexdb and
> pgbench must be considered in a similar way.

I still haven't resolved this question, which seems like the most
important outstanding question, but I attach V6. Changes:

* tuplesort.c was adapted to use the recently committed condition
variables stuff. This made things cleaner. No more ad-hoc WaitLatch()
looping.

* Adapted docs to mention the newly committed max_parallel_workers GUC
in the context of discussing the proposed max_parallel_workers_maintenance
GUC.

* Fixed a trivial assertion failure bug that could be tripped when a
conventional sort uses very little memory.

-- 
Peter Geoghegan


0002-Add-temporary-testing-tools.patch.gz
Description: GNU Zip compressed data


0001-Add-parallel-B-tree-index-build-sorting.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] [COMMITTERS] pgsql: Use latch instead of select() in walreceiver

2016-12-03 Thread Peter Eisentraut
On 12/1/16 8:50 PM, Michael Paquier wrote:
> +   if (rc & WL_POSTMASTER_DEATH)
> +   exit(1);
> Hmm. We have always been very careful about not leaving immediately
> from libpqwalreceiver.c which is an independent shared library so as
> the WAL receiver can take cleanup actions. See here for more details:
> https://www.postgresql.org/message-id/CAEepm=0hg_fx7kduhostpj_kpsuzw6k-7nuqny-dgaoaetn...@mail.gmail.com

walreceiver.c does the same thing.  The message you pointed out does not
make it exactly clear why that would be wrong.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Logical Replication WIP

2016-12-03 Thread Peter Eisentraut
I massaged the temporary replication slot patch a bit.  I changed the
column name in pg_stat_replication_slots from "persistent" to
"temporary" and flipped the logical sense, so that it is consistent with
the creation commands.  I also adjusted some comments and removed some
changes in ReplicationSlotCreate() that didn't seem to do anything
useful (might have been from a previous patch).

The attached patch looks good to me.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>From e4a814f7abb86e78daa63bbbd37cb00f2b7d9180 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Sat, 3 Dec 2016 12:00:00 -0500
Subject: [PATCH] Add support for temporary replication slots

This allows creating temporary replication slots that are removed
automatically at the end of the session or on error.
---
 contrib/test_decoding/Makefile  |  2 +-
 contrib/test_decoding/expected/ddl.out  |  4 +--
 contrib/test_decoding/expected/slot.out | 35 ++
 contrib/test_decoding/sql/slot.sql  | 13 +++
 doc/src/sgml/func.sgml  | 16 ++---
 doc/src/sgml/protocol.sgml  | 13 ++-
 src/backend/catalog/system_views.sql| 11 ++
 src/backend/replication/repl_gram.y | 22 
 src/backend/replication/repl_scanner.l  |  1 +
 src/backend/replication/slot.c  | 63 +++--
 src/backend/replication/slotfuncs.c | 24 +
 src/backend/replication/walsender.c | 28 +--
 src/backend/storage/lmgr/proc.c |  3 ++
 src/backend/tcop/postgres.c |  3 ++
 src/include/catalog/pg_proc.h   |  6 ++--
 src/include/nodes/replnodes.h   |  1 +
 src/include/replication/slot.h  |  4 ++-
 src/test/regress/expected/rules.out |  3 +-
 18 files changed, 204 insertions(+), 48 deletions(-)
 create mode 100644 contrib/test_decoding/expected/slot.out
 create mode 100644 contrib/test_decoding/sql/slot.sql

diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile
index a6641f5..d2bc8b8 100644
--- a/contrib/test_decoding/Makefile
+++ b/contrib/test_decoding/Makefile
@@ -39,7 +39,7 @@ submake-test_decoding:
 
 REGRESSCHECKS=ddl xact rewrite toast permissions decoding_in_xact \
 	decoding_into_rel binary prepared replorigin time messages \
-	spill
+	spill slot
 
 regresscheck: | submake-regress submake-test_decoding temp-install
 	$(MKDIR_P) regression_output
diff --git a/contrib/test_decoding/expected/ddl.out b/contrib/test_decoding/expected/ddl.out
index a9ba615..c104c48 100644
--- a/contrib/test_decoding/expected/ddl.out
+++ b/contrib/test_decoding/expected/ddl.out
@@ -702,7 +702,7 @@ SELECT pg_drop_replication_slot('regression_slot');
 
 /* check that the slot is gone */
 SELECT * FROM pg_replication_slots;
- slot_name | plugin | slot_type | datoid | database | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn 
++---++--+++--+--+-+-
+ slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn 
+---++---++--+---+++--+--+-+-
 (0 rows)
 
diff --git a/contrib/test_decoding/expected/slot.out b/contrib/test_decoding/expected/slot.out
new file mode 100644
index 000..1a8c8df
--- /dev/null
+++ b/contrib/test_decoding/expected/slot.out
@@ -0,0 +1,35 @@
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot_p', 'test_decoding');
+ ?column? 
+--
+ init
+(1 row)
+
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot_t', 'test_decoding', true);
+ ?column? 
+--
+ init
+(1 row)
+
+SELECT pg_drop_replication_slot('regression_slot_p');
+ pg_drop_replication_slot 
+--
+ 
+(1 row)
+
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot_p', 'test_decoding', false);
+ ?column? 
+--
+ init
+(1 row)
+
+-- reconnect to clean temp slots
+\c
+SELECT pg_drop_replication_slot('regression_slot_p');
+ pg_drop_replication_slot 
+--
+ 
+(1 row)
+
+-- should fail because the temporary slot was dropped automatically
+SELECT pg_drop_replication_slot('regression_slot_t');
+ERROR:  replication slot "regression_slot_t" does not exist
diff --git a/contrib/test_decoding/sql/slot.sql b/contrib/test_decoding/sql/slot.sql
new file mode 100644
index 000..8728db5
--- /dev/null
+++ b/contrib/test_decoding/sql/slot.sql
@@ -0,0 +1,13 @@
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot_p', 'test_decoding');
+SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot_t', 'test_decoding', true);
+
+SELECT 

Re: [HACKERS] pgsql: Add putenv support for msvcrt from Visual Studio 2013

2016-12-03 Thread Noah Misch
On Wed, Nov 16, 2016 at 08:45:20PM +, Christian Ullrich wrote:
> * Noah Misch wrote:
> > I prefer the simplicity of abandoning the cache (patch 4), if it
> > performs decently.  Would you compare the performance of patch 1,
> > patches 1+2+3, and patches 1+2+4?  This should measure the right 
> > thing (after substituting @libdir@ for your environment):
> 
> Tested with release builds; Core i7-6700K (quad/HT; 4000 MHz).
> I did three runs each, and they were always within 0.5 % of each
> other's run time.
> 
> - patch 1 (now 1-3): 24 μs/iteration (24 s for 1e6)
> - patch 1+2+3 (now 1-5 + 8): 29 μs/iteration
> - patch 1+2+4 (now 1-7): 30 μs/iteration

Thanks for measuring; 6μs*9=54μs is a negligible addition to Windows backend
startup time.

On Wed, Nov 30, 2016 at 04:24:34PM +, Christian Ullrich wrote:
> * From: Michael Paquier
> > It would be nice to mention in a code comment that this is what Noah has
> > mentioned upthread: if a CRT loads while pgwin32_putenv() is
> > executing, the newly-loaded CRT will always have the latest value.
> 
> I fixed the existing comment by removing the last sentence that is in the 
> wrong place now, but I don't see the point in suddenly starting to explain 
> why it is done this way and not the other.
> 
> > Based on that 0006 will need a rebase, nothing huge though.
> 
> Done, new revisions attached.

I committed patches 1-7 with some comment changes, a pgindent, and other
cosmetic trivia.  (The file was pgindent-clean before this work.  Patch 6
still achieved the more-compact formatting you sought.)




Re: [HACKERS] Parallel execution and prepared statements

2016-12-03 Thread Tobias Bussmann

> I think if we don't see any impact then we should backpatch this
> patch. However, if we decide to go some other way, then I can provide
> a separate patch for back branches.  BTW, what is your opinion?

I could not find anything on backporting guidelines in the wiki, but my opinion 
would be to backpatch the patch in total. With a different behaviour between 
the simple and extended query protocol it would be hard to debug query 
performance issues in user applications that use PQprepare. If the user tries 
to replicate a query with a PREPARE in psql and tries to EXPLAIN EXECUTE it, 
the results would be different from what happens within the application. That 
behaviour could be confusing, like the differences between EXPLAIN SELECT and 
EXPLAIN EXECUTE can be to less experienced users.
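For instance, such a user might try to reproduce the application's prepared statement like this (table and parameter are hypothetical, pgbench-style names):

```sql
PREPARE q(int) AS SELECT count(*) FROM accounts WHERE aid = $1;
EXPLAIN EXECUTE q(42);
-- Without backpatching, the plan shown here could use parallelism
-- while the application's extended-protocol execution of the very
-- same statement could not, which is exactly the confusion described.
```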

Best regards
Tobias






Re: [HACKERS] [COMMITTERS] pgsql: Add max_parallel_workers GUC.

2016-12-03 Thread Tom Lane
Robert Haas  writes:
> On Dec 2, 2016, at 5:45 PM, Tom Lane  wrote:
>> Might work.  We've had very bad luck with GUC variables with
>> interdependent defaults, but maybe the user-visible knob could be a
>> percentage of max_connections or something like that.

> Seems like overkill. Let's just reduce the values a bit.

Agreed.  How about max_worker_processes = 8 as before, with
max_parallel_workers of maybe 6?  Or just set them both to 8.
I'm not sure that the out-of-the-box configuration needs to
leave backend slots locked down for non-parallel worker processes.
Any such process would require manual configuration anyway, no?
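In postgresql.conf terms, the out-of-the-box settings under discussion would be something like (values not settled in this thread):

```
max_worker_processes = 8    # as before
max_parallel_workers = 8    # or perhaps 6, to leave slack for
                            # non-parallel background workers
```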

regards, tom lane




Re: [HACKERS] Proposal: scan key push down to heap [WIP]

2016-12-03 Thread Dilip Kumar
On Fri, Dec 2, 2016 at 3:11 AM, Andres Freund  wrote:
> Hm. I'm more than a bit doubful about this approach. Shouldn't we just
> *always* do this as part of expression evaluation, instead of
> special-casing for seqscans?

That makes sense; we can actually do this as part of expression
evaluation and cover more cases.
>
> I.e. during planning recognize that an OpExpr can be evaluated as a
> scankey and then emit different qual evaluation instructions?  Because
> then the benefit can be everywhere, instead of just seqscans.

I will experiment with this..
>
> I'll post my new expression evaluation stuff - which doesn't do this
> atm, but makes ExecQual faster in other ways - later this week.  If we
> could get the planner (or parse-analysis?) to set an OpExpr flag that
> signals that the expression can be evaluated as a scankey, that'd be
> easy.

Isn't it better to directly make two separate lists during planning
itself, one for regular quals and another for those that can be converted
to scankeys, instead of keeping the flag in OpExpr?

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] patch: function xmltable

2016-12-03 Thread Alvaro Herrera
Pavel Stehule wrote:

> 2016-12-02 23:25 GMT+01:00 Alvaro Herrera :

> > This is looking much better now, but it still needs at least the
> > following changes.
> >
> > First, what we need to fix is the per_rowset_memcxt thingy.  I think the way
> > it's currently being used is rather ugly; it looks to me like the memory
> > context does not belong into the XmlTableContext struct at all.
> > Instead, the executor code should keep the memcxt pointer in a state
> > struct of its own, and it should be the executor's responsibility to
> > change to the appropriate context before calling the table builder
> > functions.  In particular, this means that the table context can no
> > longer be a void * pointer; it needs to be a struct that's defined by
> > the executor (probably a primnodes.h one).  The void * pointer is
> > stashed inside that struct.  Also, the "routine" pointer should not be
> > part of the void * struct, but of the executor's struct.  So the
> > execQual code can switch to the memory context, and destroy it
> > appropriately.
> >
> > Second, we should make gram.y set a new "function type" value in the
> > TableExpr it creates, so that the downstream code (transformTableExpr,
> > ExecInitExpr, ruleutils.c) really knows that the given function is
> > XmlTableExpr, instead of guessing just because it's the only implemented
> > case.  Probably this "function type" is an enum (currently with a single
> > value TableExprTypeXml or something like that) in primnodes.
> 
> That makes sense.  I was not sure about it because, as you mentioned,
> there is currently only one value.

True.  This is a minor point.

Are you able to do the memory context change I describe?

-- 
Álvaro Herrera        https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Better support for symlinks on Windows...

2016-12-03 Thread Michael Paquier
Hi all.

I just bumped into the following post:
https://blogs.windows.com/buildingapps/2016/12/02/symlinks-windows-10/#IAHXCu04bDmrrhxi.97

But... That's still Windows-specific as it uses a non-POSIX routine
CreateSymbolicLink():
https://msdn.microsoft.com/en-us/library/windows/desktop/aa363866(v=vs.85).aspx

I am wondering why they continue making the life of devs more
complicated as any application is going to need to have a wrapper
calling this function just for WIN32 (readlink of course does not
exist there, we have pgreadlink that uses junction points).
-- 
Michael




Re: [HACKERS] [sqlsmith] Failed assertion in _hash_splitbucket_guts

2016-12-03 Thread Amit Kapila
On Sat, Dec 3, 2016 at 3:44 PM, Andreas Seltenreich  wrote:
> Amit Kapila writes:
>
>> How should I connect to this database?  If I use the user fdw
>> mentioned in pg_hba.conf (changed authentication method to trust in
>> pg_hba.conf), it says the user doesn't exist.  Can you create a user
>> in the database which I can use?
>
> There is also a superuser "postgres" and an unprivileged user "smith"
> you should be able to login with.  You could also start postgres in
> single-user mode to bypass the authentication altogether.
>

Thanks.  I have checked and found that my above speculation seems to
be right, which means that the old bucket contains tuples from a previous
split.  At the location of the Assert, I have printed the values of the old
bucket, the new bucket and the actual bucket to which the tuple belongs, and
below is the result.

regression=# update public.hash_i4_heap set seqno = public.hash_i4_heap.random;
ERROR:  wrong bucket, old bucket:37, new bucket:549, actual bucket:293

So what the above means is that the tuple should belong to either bucket 37
or 549, but it actually belongs to 293.  Both 293 and 549 are buckets that
were split from bucket 37 (you can find that by using the calculation used
in _hash_expandtable).  I have again checked the code and couldn't find any
other reason except the one I mentioned in my previous mail.  So, let us
wait for the results of your new test run.
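The relationship between the three bucket numbers can be seen by masking off the highest set bit; this is only a simplified sketch of the actual calculation in _hash_expandtable:

```sql
-- 293 = 37 + 256 and 549 = 37 + 512, so both descend from bucket 37:
SELECT 293 & ~256 AS parent_of_293,  -- 37
       549 & ~512 AS parent_of_549;  -- 37
```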

> Amit Kapila writes:
>
>> Please find attached patch to fix above code.  Now, if this is the
>> reason of the problem you are seeing, it won't fix your existing
>> database as it already contains some tuples in the wrong bucket.  Can
>> you please re-run the test to see if you can reproduce the problem?
>
> Ok, I'll do testing with the patch applied.
>
> Btw, I also find entries like following in the logging database:
>
> ERROR:  could not read block 2638 in file "base/16384/17256": read only 0 of 
> 8192 bytes
>
> …with the relfilenode being a hash index.  I usually ignore these as they
> naturally start occurring after a recovery because of an unrelated crash.
> But since 11003eb, they also occur when the cluster has not yet suffered
> a crash.
>

Hmm, I am not sure if this is related to the previous problem, but it
could be.  Is it possible to get the operation and/or call stack for the
above failure?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Add support for restrictive RLS policies

2016-12-03 Thread Dean Rasheed
Stephen,

I looked through this in a little more detail and I found one other
issue: the documentation for the system catalog table pg_policy and
the view pg_policies needs to be updated to include the new columns
that have been added.

Other than that, it all looks good to me, subject to the previous comments.

Regards,
Dean


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Fix checkpoint skip logic on idle systems by tracking LSN progress

2016-12-03 Thread Amit Kapila
On Fri, Dec 2, 2016 at 9:50 AM, Michael Paquier
 wrote:
> On Wed, Nov 30, 2016 at 7:53 PM, Amit Kapila  wrote:
>>> + * Switch segment only when WAL has done some progress since the
>>> + * last time a segment has switched because of a timeout.
>>
>>> + if (GetProgressRecPtr() > last_switch_lsn)
>>
>> Either the above comment is wrong or the code after it has a problem.
>> last_switch_lsn aka XLogCtl->lastSegSwitchLSN is updated not only for
>> a timeout but also when there is a lot of WAL activity which makes WAL
>> Write to cross a segment boundary.
>
> Right, this should be reworded a bit better to mention both. Done as attached.
>

+ * Switch segment only when WAL has done some progress since the
+ * last time a segment has switched because of a timeout or because
+ * of some WAL activity.

I think it could be better written as below, but it is up to you to
retain your version or use the one below.

Switch segment only when WAL has done some progress since the last
time a segment has switched due to timeout or WAL activity.  Apart
from that, the patch looks good to me.

Note to committer:  As discussed above [1], this patch skips logging
for LogAccessExclusiveLocks, which can be called from multiple places,
so for clarity we should either document it or skip it only
when absolutely necessary.


[1] - 
https://www.postgresql.org/message-id/CAA4eK1KJAXA3PdxH4T1QJKBNOvyUK8UKm_GCvTuT%2BFC5jpjmjg%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] [sqlsmith] Failed assertion in _hash_splitbucket_guts

2016-12-03 Thread Andreas Seltenreich
Amit Kapila writes:

> How should I connect to this database?  If I use the user fdw
> mentioned in pg_hba.conf (changed authentication method to trust in
> pg_hba.conf), it says the user doesn't exist.  Can you create a user
> in the database which I can use?

There is also a superuser "postgres" and an unprivileged user "smith"
you should be able to login with.  You could also start postgres in
single-user mode to bypass the authentication altogether.

Amit Kapila writes:

> Please find attached patch to fix above code.  Now, if this is the
> reason of the problem you are seeing, it won't fix your existing
> database as it already contains some tuples in the wrong bucket.  Can
> you please re-run the test to see if you can reproduce the problem?

Ok, I'll do testing with the patch applied.

Btw, I also find entries like following in the logging database:

ERROR:  could not read block 2638 in file "base/16384/17256": read only 0 of 
8192 bytes

…with the relfilenode being a hash index.  I usually ignore these as they
naturally start occurring after a recovery because of an unrelated crash.
But since 11003eb, they also occur when the cluster has not yet suffered
a crash.

regards,
Andreas




Re: [HACKERS] pgbench - allow backslash continuations in \set expressions

2016-12-03 Thread Fabien COELHO



> FWIW, I looked a bit further and concluded that probably psqlscan.l
> doesn't need to be modified; so likely you could do this across all of
> pgbench's commands without touching psql.  That might be an acceptable
> compromise for now, though I still think that as soon as we have this
> for pgbench, users will start wanting it in psql.


The attached patch adds backslash-return (well newline really) 
continuations to all pgbench backslash-commands.


The attached test uses continuations on all such commands (sleep, set, 
setshell and shell).


I think that adding continuations to psql should be a distinct patch.

--
Fabien.

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 3a65729..b155db5 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -814,6 +814,7 @@ pgbench  options  dbname
   
Script file meta commands begin with a backslash (\) and
extend to the end of the line.
+   They can spread over several lines with backslash-return continuations.
Arguments to a meta command are separated by white space.
These meta commands are supported:
   
@@ -842,7 +843,8 @@ pgbench  options  dbname
   Examples:
 
 \set ntellers 10 * :scale
-\set aid (1021 * random(1, 10 * :scale)) % (10 * :scale) + 1
+\set aid (1021 * random(1, 10 * :scale)) % \
+   (10 * :scale) + 1
 
 

diff --git a/src/bin/pgbench/exprscan.l b/src/bin/pgbench/exprscan.l
index 20891a3..3c428de 100644
--- a/src/bin/pgbench/exprscan.l
+++ b/src/bin/pgbench/exprscan.l
@@ -65,6 +65,7 @@ alnum			[a-zA-Z0-9_]
 space			[ \t\r\f\v]
 nonspace		[^ \t\r\f\v\n]
 newline			[\n]
+continuation	\\{newline}
 
 /* Exclusive states */
 %x EXPR
@@ -104,6 +105,8 @@ newline			[\n]
 	return 0;
 }
 
+{continuation}	{ /* ignore */ }
+
 	/* EXPR state */
 
 {
@@ -138,6 +141,8 @@ newline			[\n]
 	return FUNCTION;
 }
 
+{continuation}	{ /* ignore */ }
+
 {newline}		{
 	/* report end of command */
 	last_was_newline = true;


cont.sql
Description: application/sql
