Re: [HACKERS] [sqlsmith] Crash in apply_projection_to_path

2016-04-29 Thread Amit Kapila
On Fri, Apr 29, 2016 at 7:33 PM, Tom Lane  wrote:
>
> Amit Kapila  writes:
> >> On Thu, Apr 28, 2016 at 10:06 PM, Tom Lane  wrote:
> >>> I'd be inclined to think that it's silly to build GatherPaths in advance
> >>> of having finalized the list of partial paths for a rel.
>
> > What's happening here is that to form the joinrel, we need to call
> > add_paths_to_joinrel() with both outer and inner relation twice, once with
> > rel1 as outer relation and rel2 as inner relation and vice versa.  So now
> > the second call to add_paths_to_joinrel() can replace a partial path which
> > is being referenced by a GatherPath generated in the first call.  I think
> > we should generate gather paths for the join rel after both calls
> > to add_paths_to_joinrel(), aka in make_join_rel().  Attached patch on the
> > above lines fixes the problem for me.
>
> make_join_rel is certainly not far enough down the call stack to solve
> this problem.  It can, and typically will, be invoked multiple times
> for the same target join relation.
>
> One possible answer is to do it in standard_join_search, just before
> the set_cheapest call for each join relation.
>

Yes, that makes sense to me.

>  You'd need to account
> for the issue in GEQO search as well.
>

How about doing it in merge_clump() before calling set_cheapest()?

Yet another idea could be to create a copy of the partial path before
passing it to create_gather_path()?
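For concreteness, a rough sketch of the first approach (loop shape and
names recalled from the 9.6 planner sources; treat the details as
assumptions, not the final patch):

    /* In standard_join_search(), once all joins at this level are built: */
    foreach(lc, root->join_rel_level[lev])
    {
        RelOptInfo *rel = (RelOptInfo *) lfirst(lc);

        /*
         * Build GatherPaths only now, when rel->partial_pathlist can no
         * longer change, so no GatherPath ends up referencing a partial
         * path that a later add_paths_to_joinrel() call has replaced.
         */
        generate_gather_paths(root, rel);

        /* Find cheapest paths and save them in the rel. */
        set_cheapest(rel);
    }

The equivalent call would go at the end of merge_clump() for the GEQO
case.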


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


[HACKERS] psql: tab completion for \l

2016-04-29 Thread Ian Barwick
Hi

Evidently over the past 15 or so years I've never needed to type "\l ",
but when isolating a single database entry in a cluster with a lot more
databases than most I've encountered, I was surprised to notice it didn't work.

Trivial patch attached, will add to next commitfest.
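With the patch, for example (a hypothetical session):

postgres=# \l te<TAB>
template0  template1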


Regards

Ian Barwick

-- 
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index a62ffe6..a87b483 100644
*** a/src/bin/psql/tab-complete.c
--- b/src/bin/psql/tab-complete.c
*************** psql_completion(const char *text, int st
*** 2994,2999 ****
--- 2994,3001 ----
COMPLETE_WITH_QUERY(Query_for_list_of_encodings);
else if (TailMatchesCS1("\\h") || TailMatchesCS1("\\help"))
COMPLETE_WITH_LIST(sql_commands);
+   else if (TailMatchesCS1("\\l*") && !TailMatchesCS1("\\lo_*"))
+   COMPLETE_WITH_QUERY(Query_for_list_of_databases);
else if (TailMatchesCS1("\\password"))
COMPLETE_WITH_QUERY(Query_for_list_of_roles);
else if (TailMatchesCS1("\\pset"))

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Failed assertion in BecomeLockGroupLeader

2016-04-29 Thread Andres Freund
On 2016-04-30 02:28:22 +0200, Andreas Seltenreich wrote:
> This sounds like it should work to capture more context when the
> Assertion fails the next time.  I have to purge the catalogs a bit
> though to avoid stopping early on boring core dumps.  Most of them are
> currently caused by acl.c using text for syscache lookups and triggering
> a NAMEDATALEN assertion.
> 
> E.g.: select
> has_language_privilege('smithsmithsmithsmithsmithsmithsmithsmithsmithsmithsmithsmithsmith',
> 'smith');

Yuck. We've got to fix those.   Does anybody remember how these functions
came to use text instead of name for things that pretty clearly should
have accepted name (i.e., the objects, not the priv string)?
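FWIW, the practical difference is that name input is silently truncated
at NAMEDATALEN - 1 (63) bytes, while text is passed through whole, which
is what lets an over-length literal reach the syscache lookup:

    select length(repeat('smith', 13)::name::text);  -- 63
    select length(repeat('smith', 13));              -- 65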

Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Failed assertion in BecomeLockGroupLeader

2016-04-29 Thread Andres Freund
On 2016-04-30 02:34:35 +0200, Andreas Seltenreich wrote:
> I didn't think the effort of creating this kind of clean-room testing
> was worth it.  If reports of failed assertions with backtrace without a
> recipe to reproduce them are a nuisance, I'll avoid them in the future.

It's obviously better to have a recipe for reproducing the issue, but I
think the reports are quite useful even without that.

Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Failed assertion in BecomeLockGroupLeader

2016-04-29 Thread Andreas Seltenreich
Simon Riggs writes:

> It's good that the input is fuzzed, but there needs to be a way to re-run
> identical fuzzing or a way to backtrack to find what broke. Not much point
> finding bugs we can't identify later.

sqlsmith is deterministic and allows re-generating a sequence of random
queries with the --seed argument.  Finding a testing methodology that
ensures a repeatable server-side state is a harder problem, though.

One would have to disable autovacuum and autoanalyze and invoke explicit
analyzes/vacuums in concert with query generation.  Further, one would
have to avoid any kind of concurrency while testing.  Even then, 1% of
the queries run into a
statement_timeout due to randomly generated excessive cross joins.  If a
timeout just barely happens, it might not do so on the repeated run and
the deterministic state is gone from then on.  I'm afraid this list is
not complete yet.
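For reference, the server-side part of such a clean-room setup would be
something along these lines (standard GUCs; whether this set is
sufficient is exactly the open question):

    alter system set autovacuum = off;       -- no background vacuum/analyze
    alter system set statement_timeout = 0;  -- let excessive cross joins finish
    select pg_reload_conf();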

I didn't think the effort of creating this kind of clean-room testing
was worth it.  If reports of failed assertions with backtrace without a
recipe to reproduce them are a nuisance, I'll avoid them in the future.

regards,
Andreas


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Failed assertion in BecomeLockGroupLeader

2016-04-29 Thread Andreas Seltenreich
Alvaro Herrera writes:
> Amit Kapila wrote:
>> It will be helpful if you can find the offending query or plan
>> corresponding to it?
>
> So I suppose the PID of the process starting the workers should be in
> the stack somewhere.

Yes, it's right at the top, but long gone by now…

> With that one should be able to attach to that process and get another
> stack trace.  I'm curious whether you would need to have started
> the server with "postgres -T"

This sounds like it should work to capture more context when the
Assertion fails the next time.  I have to purge the catalogs a bit
though to avoid stopping early on boring core dumps.  Most of them are
currently caused by acl.c using text for syscache lookups and triggering
a NAMEDATALEN assertion.

E.g.: select
has_language_privilege('smithsmithsmithsmithsmithsmithsmithsmithsmithsmithsmithsmithsmith',
'smith');

thanks,
andreas


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] atomic pin/unpin causing errors

2016-04-29 Thread Andres Freund
Hi,

On 2016-04-29 10:38:55 -0700, Jeff Janes wrote:
> I've bisected the errors I was seeing, discussed in
> http://www.postgresql.org/message-id/CAMkU=1xqehc0ok4d+tkjfq1nvuho37pyrkhjp6q8oxifmx7...@mail.gmail.com
> 
> It looks like they first appear in:
> 
> commit 48354581a49c30f5757c203415aa8412d85b0f70
> Author: Andres Freund 
> Date:   Sun Apr 10 20:12:32 2016 -0700
> 
> Allow Pin/UnpinBuffer to operate in a lockfree manner.
> 
> 
> I get the errors:
> 
> ERROR:  attempted to delete invisible tuple
> STATEMENT:  update foo set count=count+1,text_array=$1 where text_array @> $2
> 
> And also:
> 
> ERROR:  unexpected chunk number 1 (expected 2) for toast value
> 85223889 in pg_toast_16424
> STATEMENT:  update foo set count=count+1 where text_array @> $1
> 
> Once these errors start occurring, they happen often.  Usually the
> "attempted to delete invisible tuple" happens first.

That kind of seems to implicate clog/vacuuming or something like that
being involved.


> These errors show up after about 9 hours of run time.  The timing is
> predictable enough that I don't think it is a purely stochastic race
> condition.

Hm. I've a bit of a hard time believing that such a timing could be
caused by the above patch. How confident are you that it's that patch, and
not just changed timing due to performance changes?  And can you
definitely only reproduce the problem with the regular crash cycles?


> It seems like some counter variable is overflowing.  But
> it is not the ShmemVariableCache->nextXid counter, as I previously
> speculated.  This test does not advance that counter fast enough for it
> to wrap around within 9 hours of run time.  But I am at a loss as to what
> other variable it might be. Since the system goes through a crash and
> recovery every few seconds, any backend-local counters or
> shared-memory counters would get reset upon recovery.  Right?

A lot of those counters will be re-set based on WAL contents. So if
they're corrupted once, several of them are prone to continue to be
corrupted.

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] Breakage with VACUUM ANALYSE + partitions

2016-04-29 Thread Andres Freund
On 2016-04-28 17:41:29 +0100, Thom Brown wrote:
> I've noticed another breakage, which I can reproduce consistently.

> 2016-04-28 17:36:08 BST [18108]: [47-1] user=,db=,client= DEBUG:  could not
> fsync file "base/24581/24594.1" but retrying: No such file or directory
> 2016-04-28 17:36:08 BST [18108]: [48-1] user=,db=,client= ERROR:  could not
> fsync file "base/24581/24594.1": No such file or directory
> 2016-04-28 17:36:08 BST [18605]: [17-1]
> user=thom,db=postgres,client=[local] ERROR:  checkpoint request failed
> 2016-04-28 17:36:08 BST [18605]: [18-1]
> user=thom,db=postgres,client=[local] HINT:  Consult recent messages in the
> server log for details.

Yuck. md.c is so crummy :(


Basically the reason for the problem is that mdsync() needs to access
"formally non-existent segments" (as in ones where previous segments are
< RELSEG_SIZE), because we queue (and there might be pre-existing) fsync
requests via register_dirty_segment() in mdtruncate().

I'm a bit at a loss for how to reconcile that view with the original
issue in this thread.  The best I can come up with at this moment is doing
a _mdfd_openseg() in mdsync() to open the truncated segment if
_mdfd_getseg() returned NULL. We don't want to normally use that in
either function because it'll imply a separate open() etc, which is
pretty expensive - but doing it in the fallback case would be kind of ok.
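A very rough sketch of that fallback (signatures recalled from md.c and
to be treated as assumptions, not a tested change):

    /*
     * While processing a queued fsync request in mdsync():
     * _mdfd_getseg() can return NULL for a segment no longer reachable
     * through the normal chain (e.g. after mdtruncate()), so fall back
     * to opening it directly; the extra open() is acceptable here.
     */
    seg = _mdfd_getseg(reln, forknum,
                       (BlockNumber) segno * (BlockNumber) RELSEG_SIZE,
                       false, EXTENSION_RETURN_NULL);
    if (seg == NULL)
        seg = _mdfd_openseg(reln, forknum, segno, 0);
    if (seg == NULL)
        continue;           /* segment is really gone; nothing to fsync */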

Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Accidentally parallel unsafe functions

2016-04-29 Thread Andreas Karlsson

On 04/30/2016 01:19 AM, Tom Lane wrote:
> Alvaro Herrera  writes:
>> Surely CREATE OR REPLACE should keep whatever the flag was, rather than
>> overwrite it with a bogus value if not specified?  In other words IMO the
>> CREATE OR REPLACE code needs changing, not system_views.sql.
>
> Absolutely not!  The definition of CREATE OR REPLACE is that at the end,
> the state of the object is predictable from only what the command says.
> This is not open for renegotiation.

An example to support Tom is that it already works like this for other
options.


postgres=# CREATE FUNCTION f() RETURNS int LANGUAGE sql AS $$ SELECT 1 $$ SECURITY DEFINER;
CREATE FUNCTION
postgres=# SELECT pg_get_functiondef('f'::regproc);
          pg_get_functiondef
----------------------------------------
 CREATE OR REPLACE FUNCTION public.f()+
  RETURNS integer                     +
  LANGUAGE sql                        +
  SECURITY DEFINER                    +
 AS $function$ SELECT 1 $function$    +

(1 row)

postgres=# CREATE OR REPLACE FUNCTION f() RETURNS int LANGUAGE sql AS $$ SELECT 1 $$;
CREATE FUNCTION
postgres=# SELECT pg_get_functiondef('f'::regproc);
          pg_get_functiondef
----------------------------------------
 CREATE OR REPLACE FUNCTION public.f()+
  RETURNS integer                     +
  LANGUAGE sql                        +
 AS $function$ SELECT 1 $function$    +

(1 row)

Andreas


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Accidentally parallel unsafe functions

2016-04-29 Thread David G. Johnston
On Fri, Apr 29, 2016 at 4:19 PM, Tom Lane  wrote:

> Alvaro Herrera  writes:
> > Andreas Karlsson wrote:
> >> I am currently looking into adding the correct parallel options to all
> >> functions in the extensions and I noticed that some built-in functions
> seems
> >> to have been marked as unsafe by accident. The main culprit is
> >> system_views.sql which redefines these functions and removes the
> parallel
> >> safe flag.
>
> > Surely CREATE OR REPLACE should keep whatever the flag was, rather than
> > ovewrite it with a bogus value if not specified?  In other words IMO the
> > CREATE OR REPLACE code needs changing, not system_views.sql.
>
> Absolutely not!  The definition of CREATE OR REPLACE is that at the end,
> the state of the object is predictable from only what the command says.
> This is not open for renegotiation.
>

To wit:

http://www.postgresql.org/docs/current/static/sql-createfunction.html

"""
When CREATE OR REPLACE FUNCTION is used to replace an existing function,
the ownership and permissions of the function do not change. All other
function properties are assigned the values specified or implied in the
command. You must own the function to replace it (this includes being a
member of the owning role).
"""

I'd interpret "specified or implied in the command" to mean exactly what
Tom said - and it applies to "all [other] function properties".

The ownership bit is a nice side-effect since you can use superuser to
replace existing functions that are already owned by another user.  Those
are the only implied dynamic of function creation that exists within the
CREATE FUNCTION command - everything else is contained within the CREATE
FUNCTION DDL.
David J.


Re: [HACKERS] Accidentally parallel unsafe functions

2016-04-29 Thread Tom Lane
Alvaro Herrera  writes:
> Andreas Karlsson wrote:
>> I am currently looking into adding the correct parallel options to all
>> functions in the extensions and I noticed that some built-in functions seem
>> to have been marked as unsafe by accident. The main culprit is
>> system_views.sql which redefines these functions and removes the parallel
>> safe flag.

> Surely CREATE OR REPLACE should keep whatever the flag was, rather than
> > overwrite it with a bogus value if not specified?  In other words IMO the
> CREATE OR REPLACE code needs changing, not system_views.sql.

Absolutely not!  The definition of CREATE OR REPLACE is that at the end,
the state of the object is predictable from only what the command says.
This is not open for renegotiation.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Accidentally parallel unsafe functions

2016-04-29 Thread Alvaro Herrera
Andreas Karlsson wrote:
> Hi,
> 
> I am currently looking into adding the correct parallel options to all
> functions in the extensions and I noticed that some built-in functions seem
> to have been marked as unsafe by accident. The main culprit is
> system_views.sql which redefines these functions and removes the parallel
> safe flag.

Surely CREATE OR REPLACE should keep whatever the flag was, rather than
> overwrite it with a bogus value if not specified?  In other words IMO the
CREATE OR REPLACE code needs changing, not system_views.sql.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-04-29 Thread Michael Paquier
On Sat, Apr 30, 2016 at 12:22 AM, Petr Jelinek  wrote:
> After a bit of fighting with the system, "caecilian" reported the first
> successful build to the buildfarm.

Thanks! The fight was there as well.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add jsonb_compact(...) for whitespace-free jsonb to text

2016-04-29 Thread Merlin Moncure
On Fri, Apr 29, 2016 at 4:06 PM, Andrew Dunstan  wrote:
> One other point: I think we really need most of these pieces - if we are
> going to squash the whitespace we need functions to do that cleanly for json
> and to pretty-print json.

I don't think it should be squashed per se -- we just don't *add* any
whitespace.  So the current behavior of to_json,

postgres=# select to_json(q) from (select 1, '{"a"  :  "foo"}'::json) q;
to_json
───
 {"?column?":1,"json":{"a"  :  "foo"}}

...is correct to me on the premise that the user deliberately chose
the whitespace-preserving json type and did not run compact on it.
However,
postgres=# select row_to_json(q) from (select 1, '{"a"  :  "foo"}'::jsonb) q;
 row_to_json
─
 {"?column?":1,"jsonb":{"a": "foo"}}

really ought to render (note lack of space after "a"):
 {"?column?":1,"jsonb":{"a":"foo"}}

This is a simple matter of removing spaces in the occasional C string
literal in the serialization routines and adding a json_pretty
function.
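To make the status quo concrete (the single space after ':' and ',' in
jsonb's text output is all that is at stake, since jsonb already discards
the input's own whitespace on the way in):

postgres=# select '{"a"  :  "foo"}'::jsonb;
    jsonb
──────────────
 {"a": "foo"}

jsonb_pretty() (there since 9.5) already covers the pretty side for
jsonb; json_pretty would be its json counterpart.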

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [COMMITTERS] pgsql: Avoid extra locks in GetSnapshotData if old_snapshot_threshold

2016-04-29 Thread Kevin Grittner
On Fri, Apr 22, 2016 at 8:06 AM, Kevin Grittner  wrote:
> On Thu, Apr 21, 2016 at 4:13 PM, Kevin Grittner  wrote:
>
>> I have your test case running, and it is not immediately
>> clear why old rows are not being vacuumed away.
>
> I have not found the reason that the vacuuming is not as aggressive
> as it should be with this old_snapshot_threshold, but I left your
> test case running overnight and found that it eventually did kick
> in.  So the question is why it was not nearly as aggressive as one
> would expect.

Once I found it, it turned out to be a bit of a "forehead-slapper".
Because the array of entries mapping time to TransactionId was
exactly the same size as old_snapshot_threshold, the override of
the xmin for pruning or vacuum would not be seen if another
transaction got in fast enough, and this python test case was
pounding hard enough that the override was rarely seen.  By
expanding the array by 10 entries, we will only miss the more
aggressive cleanup if the thread stalls at that point for more than
10 minutes, which seems like a reasonable degree of patience, given
that there is no correctness problem if that does happen.
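In code terms the fix is just padding the map allocation (identifier
names here are placeholders, not the actual snapmgr.c symbols):

    /*
     * Size the minute-granularity (time -> latest xmin) map beyond
     * old_snapshot_threshold, so a stall of up to 10 minutes cannot
     * hide the overridden xmin from a fast concurrent transaction.
     */
    entries = old_snapshot_threshold + 10;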

Ants, I think you'll find your test case behaving as you expected
now.

Now to continue with the performance benchmarks.  I'm pretty sure
we've fixed the problems when the feature is disabled
(old_snapshot_threshold = -1), and there are several suggestions
for improving performance while it is on that need to be compared
and benchmarked.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Accidentally parallel unsafe functions

2016-04-29 Thread Andreas Karlsson

Hi,

I am currently looking into adding the correct parallel options to all
functions in the extensions, and I noticed that some built-in functions
seem to have been marked as unsafe by accident. The main culprit is
system_views.sql, which redefines these functions and removes the
parallel safe flag.
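The effect is easy to demonstrate (a minimal example; the function is
made up):

CREATE FUNCTION add_one(i int) RETURNS int
LANGUAGE sql PARALLEL SAFE AS $$ SELECT i + 1 $$;
SELECT proparallel FROM pg_proc WHERE proname = 'add_one';  -- 's' (safe)

-- redefining without repeating PARALLEL SAFE resets the flag
CREATE OR REPLACE FUNCTION add_one(i int) RETURNS int
LANGUAGE sql AS $$ SELECT i + 1 $$;
SELECT proparallel FROM pg_proc WHERE proname = 'add_one';  -- 'u' (unsafe)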


I think this counts as a 9.6 bug, unlike my work on adding the flags to
all extensions, which is for 9.7.


I have attached a patch which marks them and all conversion functions as
parallel safe. I also added the flag to ts_debug() since I was already
editing system_views.sql; feel free to ignore that one if you like.


Affected functions:

- json_populate_record()
- json_populate_recordset()
- jsonb_insert()
- jsonb_set()
- make_interval()
- parse_ident()
- Loads of conversion functions

Andreas
commit 9afcc5f1ed22be18d69dc0b70a0f057a023cc5ec
Author: Andreas Karlsson 
Date:   Fri Apr 29 23:29:42 2016 +0200

Mark functions as parallel safe

- Conversion functions
- Functions which are redefined in system_views.sql

diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index d3cc848..e08bc67 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -890,7 +890,7 @@ FROM pg_catalog.ts_parse(
 ) AS tt
 WHERE tt.tokid = parse.tokid
 $$
-LANGUAGE SQL STRICT STABLE;
+LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
 
 COMMENT ON FUNCTION ts_debug(regconfig,text) IS
 'debug function for text search configuration';
@@ -906,7 +906,7 @@ RETURNS SETOF record AS
 $$
 SELECT * FROM pg_catalog.ts_debug( pg_catalog.get_current_ts_config(), $1);
 $$
-LANGUAGE SQL STRICT STABLE;
+LANGUAGE SQL STRICT STABLE PARALLEL SAFE;
 
 COMMENT ON FUNCTION ts_debug(text) IS
 'debug function for current text search configuration';
@@ -922,17 +922,17 @@ COMMENT ON FUNCTION ts_debug(text) IS
 
 CREATE OR REPLACE FUNCTION
   pg_start_backup(label text, fast boolean DEFAULT false, exclusive boolean DEFAULT true)
-  RETURNS pg_lsn STRICT VOLATILE LANGUAGE internal AS 'pg_start_backup';
+  RETURNS pg_lsn STRICT VOLATILE LANGUAGE internal AS 'pg_start_backup' PARALLEL SAFE;
 
 -- legacy definition for compatibility with 9.3
 CREATE OR REPLACE FUNCTION
   json_populate_record(base anyelement, from_json json, use_json_as_text boolean DEFAULT false)
-  RETURNS anyelement LANGUAGE internal STABLE AS 'json_populate_record';
+  RETURNS anyelement LANGUAGE internal STABLE AS 'json_populate_record' PARALLEL SAFE;
 
 -- legacy definition for compatibility with 9.3
 CREATE OR REPLACE FUNCTION
   json_populate_recordset(base anyelement, from_json json, use_json_as_text boolean DEFAULT false)
-  RETURNS SETOF anyelement LANGUAGE internal STABLE ROWS 100  AS 'json_populate_recordset';
+  RETURNS SETOF anyelement LANGUAGE internal STABLE ROWS 100  AS 'json_populate_recordset' PARALLEL SAFE;
 
 CREATE OR REPLACE FUNCTION pg_logical_slot_get_changes(
 IN slot_name name, IN upto_lsn pg_lsn, IN upto_nchanges int, VARIADIC options text[] DEFAULT '{}',
@@ -980,7 +980,7 @@ CREATE OR REPLACE FUNCTION
 secs double precision DEFAULT 0.0)
 RETURNS interval
 LANGUAGE INTERNAL
-STRICT IMMUTABLE
+STRICT IMMUTABLE PARALLEL SAFE
 AS 'make_interval';
 
 CREATE OR REPLACE FUNCTION
@@ -988,14 +988,14 @@ CREATE OR REPLACE FUNCTION
 create_if_missing boolean DEFAULT true)
 RETURNS jsonb
 LANGUAGE INTERNAL
-STRICT IMMUTABLE
+STRICT IMMUTABLE PARALLEL SAFE
 AS 'jsonb_set';
 
 CREATE OR REPLACE FUNCTION
   parse_ident(str text, strict boolean DEFAULT true)
 RETURNS text[]
 LANGUAGE INTERNAL
-STRICT IMMUTABLE
+STRICT IMMUTABLE PARALLEL SAFE
 AS 'parse_ident';
 
 CREATE OR REPLACE FUNCTION
@@ -1003,7 +1003,7 @@ CREATE OR REPLACE FUNCTION
 insert_after boolean DEFAULT false)
 RETURNS jsonb
 LANGUAGE INTERNAL
-STRICT IMMUTABLE
+STRICT IMMUTABLE PARALLEL SAFE
 AS 'jsonb_insert';
 
 -- The default permissions for functions mean that anyone can execute them.
diff --git a/src/backend/utils/mb/conversion_procs/Makefile b/src/backend/utils/mb/conversion_procs/Makefile
index 8b97803..879467e 100644
--- a/src/backend/utils/mb/conversion_procs/Makefile
+++ b/src/backend/utils/mb/conversion_procs/Makefile
@@ -173,7 +173,7 @@ $(SQLSCRIPT): Makefile
 		func=$$1; shift; \
 		obj=$$1; shift; \
 		echo "-- $$se --> $$de"; \
-		echo "CREATE OR REPLACE FUNCTION $$func (INTEGER, INTEGER, CSTRING, INTERNAL, INTEGER) RETURNS VOID AS '$$"libdir"/$$obj', '$$func' LANGUAGE C STRICT;"; \
+		echo "CREATE OR REPLACE FUNCTION $$func (INTEGER, INTEGER, CSTRING, INTERNAL, INTEGER) RETURNS VOID AS '$$"libdir"/$$obj', '$$func' LANGUAGE C STRICT PARALLEL SAFE;"; \
 	echo "COMMENT ON FUNCTION $$func(INTEGER, INTEGER, CSTRING, INTERNAL, INTEGER) IS 'internal conversion function for $$se to $$de';"; \
 		echo "DROP CONVERSION pg_catalog.$$name;"; \
 		echo "CREATE DEFAULT CONVERSION pg_catalog.$$name FOR '$$se' TO '$$de' FROM $$func;"; \

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [COMMITTERS] pgsql: Support building with Visual Studio 2015

2016-04-29 Thread Alvaro Herrera
Andrew Dunstan wrote:
> Support building with Visual Studio 2015
> 
> Adjust the way we detect the locale. As a result the minimum Windows
> version supported by VS2015 and later is Windows Vista. Add some tweaks
> to remove new compiler warnings. Remove documentation references to the
> now obsolete msysGit.
> 
> Michael Paquier, somewhat edited by me, reviewed by Christian Ullrich.

Great team work here, kudos.  You forgot to mention Petr Jelínek as
co-author in the credits.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Html parsing and inline elements

2016-04-29 Thread David G. Johnston
On Fri, Apr 29, 2016 at 1:47 PM, Bruce Momjian  wrote:

> On Wed, Apr 13, 2016 at 12:57:19PM -0300, Marcelo Zabani wrote:
> > Hi, Tom,
> >
> > You're right, I don't think one can argue that the default parser should
> know
> > HTML.
> > How about your suggestion of there being an HTML parser, is it feasible?
> I ask
> > this because I think that a lot of people store HTML documents these
> days, and
> > although there probably aren't lots of HTML with words written along
> multiple
> > inline elements, it would certainly be nice to have a proper parser for
> these
> > use cases.
> >
> > What do you think?
>
> It sounds useful.
>

It sounds like an external project/extension...

David J.


Re: [HACKERS] Add jsonb_compact(...) for whitespace-free jsonb to text

2016-04-29 Thread Andrew Dunstan



On 04/29/2016 02:34 PM, Merlin Moncure wrote:
>>> I wouldn't necessarily be opposed to us having one or more of the following:
>>>
>>> a) suppressing whitespace addition in all json generation and text output,
>>> possibly governed by a GUC setting so we could maintain behaviour
>>> compatibility if required
>>
>> Sounds great to me, because we can unify the code so that we have *one*
>> piece to convert json to text instead of N, and not worry about the
>> non-relevant whitespace.
>
> hurk -- no objection to unifying the text serialization code (if that
> proves reasonable to do).   However I think using GUC to control
> output format is not a good idea.  We did this for bytea and it did
> not turn out well; much better to have code anticipating precise
> formats to check the server version.  This comes up over and over
> again: the GUC is not a solution for backwards compatibility...in
> fact, it's pandora's box (see:
> https://dev.mysql.com/doc/refman/5.5/en/sql-mode.html) .





OK, fine by me. It's trivial to do for jsonb - all the white space comes
from one function, AFAIK. For json it's a bit more spread out, but only
in one or two files. Here's a question: say we have this table:
mytable (x text, y json). Now we do "select to_json(r) from mytable r;".
Now y is a json field, which preserves the whitespace of the input. Do
we squash the whitespace out or not when producing the output of this 
query? I'm inclined to say yes we do, but it's not a slam-dunk no-brainer.
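That is, with today's behaviour the embedded whitespace survives:

create table mytable (x text, y json);
insert into mytable values ('a', '{"k"  :  "v"}');
select to_json(r) from mytable r;
-- {"x":"a","y":{"k"  :  "v"}}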


One other point: I think we really need most of these pieces - if we are 
going to squash the whitespace we need functions to do that cleanly for 
json and to pretty-print json.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Refactor pg_dump as a library?

2016-04-29 Thread Bruce Momjian
On Thu, Apr 14, 2016 at 01:40:21PM -0400, David Steele wrote:
> On 4/14/16 1:33 PM, Tom Lane wrote:
> > David Steele  writes:
> >> On 4/14/16 7:16 AM, Andreas Karlsson wrote:
> >>> I am personally not a fan of the pg_get_Xdef() functions due to their
> >>> heavy reliance on the syscache which feels rather unsafe in combination
> >>> with concurrent DDL.
> > 
> >> As far as I know pg_dump share locks everything before it starts so
> >> there shouldn't be issues with concurrent DDL.  Try creating a new
> >> inherited table with FKs, etc. during a pg_dump and you'll see lots of
> >> fun lock waits.
> > 
> > I think pg_dump is reasonably proof against DDL on tables.  It is not
> > at all proof against DDL on other sorts of objects, such as functions,
> > because of the fact that the syscache will follow catalog updates that
> > occur after pg_dump's transaction snapshot.
> 
> Hmm, OK.  I'll need to go look at that.
> 
> I thought that the backend running the pg_dump would fill it's syscache
> when it took all the locks and then not update them during the actual
> dump.  If that's not the case then it's a bit scary, yes.
> 
> It seems to make a good case for physical backups vs. logical.

I think another issue is that the pg_dump backend gets cache
invalidations from other backends that cause it to reload the cache with
new contents, so even if you pre-loaded the cache at snapshot time, you
would still need to ignore cache invalidations from other backends.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SPI_exec ERROR in pl/r of R 3.2.4 on PostgreSQL on Windows 7

2016-04-29 Thread Joe Conway
On 04/29/2016 07:58 AM, Andre Mikulec wrote:
> I am working with pl/r compiled for R.3.2.4 64 bit on PostgreSQL 9.5.1
> 64 bit on Windows 7 64 bit

Who did the compiling? Did you compile everything yourself, or use
binary installers for some of it? If so, which ones?

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [HACKERS] Html parsing and inline elements

2016-04-29 Thread Bruce Momjian
On Wed, Apr 13, 2016 at 12:57:19PM -0300, Marcelo Zabani wrote:
> Hi, Tom,
> 
> You're right, I don't think one can argue that the default parser should know
> HTML.
> How about your suggestion of there being an HTML parser, is it feasible? I ask
> this because I think that a lot of people store HTML documents these days, and
> although there probably aren't lots of HTML with words written along multiple
> inline elements, it would certainly be nice to have a proper parser for these
> use cases.
> 
> What do you think?

It sounds useful.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Replying to a pgsql-committers email by CC'ing hackers

2016-04-29 Thread Bruce Momjian
On Fri, Apr 29, 2016 at 02:20:40PM -0300, Alvaro Herrera wrote:
> Now, if you still live in a cave and don't use Gmail (like, say, me),
> you could still change the options in Majordomo to send a unique copy of
> each message, that is to say change the delivery class to "unique"
> rather than "each".  Then it will see that you have the same message in
> two lists and send you only one.

OK, I set every Majordomo email list to "each unduplicated message" ---
who says I am behind the times.  :-)

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-04-29 Thread Joshua D. Drake

On 04/29/2016 11:36 AM, Simon Riggs wrote:
>> Egos.
>>
>> Consider PgLogical, who is working on this outside of 2Q?
>
> Thank you for volunteering to assist. What would you like to work on?

You are very welcome. I have been testing, as you know. I would be happy
to continue that, and I was also going to look into having a subscriber
validate whether or not it is connecting to a subscribed node, which is
the error I ran into.


I am also interested in creating user docs (versus reference docs).

Sincerely,

JD



--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Replying to a pgsql-committers email by CC'ing hackers

2016-04-29 Thread Andres Freund
On 2016-04-29 15:27:15 -0300, Alvaro Herrera wrote:
> If consensus is that we should completely forbid cross-posting, we can
> talk about that.

I find xposts rather useful. WRT the committers vs. hackers thing, I'll
e.g. be far more likely to be able to keep up with committers than with
hackers.

Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-04-29 Thread Simon Riggs
On 29 April 2016 at 18:40, Joshua D. Drake  wrote:

> On 04/29/2016 08:44 AM, Bruce Momjian wrote:
>
>> On Tue, Apr 12, 2016 at 11:07:04PM +0300, Oleg Bartunov wrote:
>>
>>> Our roadmap http://www.postgresql.org/developer/roadmap/ is the
>>> problem. We don't have a clear roadmap and that's why we cannot plan a
>>> future feature-full release. There are several postgres-centric
>>> companies, which have most of the developers, who do all major
>>> contributions. All these companies have their roadmaps, but not the
>>> community.
>>>
>>
>> I would be concerned if company roadmaps overtly affected the community
>> roadmap.  In general, I find company roadmaps to be very short-sighted
>> and quickly changed based on the demands of specific users/customers ---
>> something we don't want to imitate.
>>
>> We do want company roadmaps to affect the community roadmap, but in a
>> healthy, long-term way, and I think, in general, that is happening.
>>
>>
> The roadmap is not the problem it is the lack of cooperation. Many
> companies are now developing features in a silo and then presenting them to
> the community. Instead we should be working with those companies to have
> them develop transparently so others can be a part of the process.
>
> If the feature is going to be submitted to core anyway (or open source)
> why wouldn't we just do that? Why wouldn't EDB develop directly within the
> Pg infrastructure. Why wouldn't we build teams around the best and
> brightest between EDB, 2Q and Citus?
>
> Egos.
>
> Consider PgLogical, who is working on this outside of 2Q?


Thank you for volunteering to assist. What would you like to work on?

-- 
Simon Riggs                http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] Add jsonb_compact(...) for whitespace-free jsonb to text

2016-04-29 Thread Merlin Moncure
On Fri, Apr 29, 2016 at 12:31 PM, Alvaro Herrera
 wrote:
> Thanks Alex for finding the previous thread.
>
> Andrew Dunstan wrote:
>>
>> On 04/28/2016 04:29 PM, Alvaro Herrera wrote:
>
>> >Actually we did have someone come up with a patch to "normalize" how
>> >JSON stuff was output, because our code seems to do it in three
>> >different, inconsistent ways.  And our response was for them to get the
>> >heck outta here, because we're so much in love with our current
>> >practice that we don't need to refactor the three implementations into a
>> >single one.
>> That's a pretty bad mischaracterization of the discussion.
>
> Sorry, I don't agree with that.
>
>> What was proposed
>> was broken, as I pointed out to the OP, and then again later to you when you
>> asked about it.
>
> I didn't find your argument convincing back then, but this doesn't have
> enough value to me that I can spend much time arguing about it.
>
>> I wouldn't necessarily be opposed to us having one or more of the following:
>>
>> a) suppressing whitespace addition in all json generation and text output,
>> possibly governed by a GUC setting so we could maintain behaviour
>> compatibility if required
>
> Sounds great to me, because we can unify the code so that we have *one*
> piece to convert json to text instead of N, and not worry about the
> non-relevant whitespace.

hurk -- no objection to unifying the text serialization code (if that
proves reasonable to do).   However I think using GUC to control
output format is not a good idea.  We did this for bytea and it did
not turn out well; much better to have code anticipating precise
formats to check the server version.  This comes up over and over
again: the GUC is not a solution for backwards compatibility...in
fact, it's pandora's box (see:
https://dev.mysql.com/doc/refman/5.5/en/sql-mode.html) .

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Replying to a pgsql-committers email by CC'ing hackers

2016-04-29 Thread Alvaro Herrera
Bruce Momjian wrote:
> On Fri, Apr 29, 2016 at 02:49:38PM -0300, Alvaro Herrera wrote:
> > Joshua D. Drake wrote:
> > > On 04/29/2016 10:20 AM, Alvaro Herrera wrote:
> > > 
> > > >:0 Wh: msgid.lock
> > > >| formail -D 65536 $HOME/.msgid.cache
> > > 
> > > And Alvaro drowns in the drool of his own sarcasm.
> > 
> > I was going to add an apology that this wasn't supposed to be insulting,
> > only funny, but refrained.  There was no offense meant.
> 
> I took it as humorous.

Great, thanks.

> I didn't turn on duplicate removal on Majordomo so I could police
> people who cross-posted unnecessarily.  Are you saying I shouldn't
> worry about that and just turn on duplicate removal?

Yeah, I've done that for years.  I notice duplicates by looking at CCs
anyway.  Note that our archives system is perfectly prepared to deal
with messages cross-posted to several mailing lists, as a first-class
feature.  I don't think there's a strong need to police them
specifically.

If consensus is that we should completely forbid cross-posting, we can
talk about that.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Replying to a pgsql-committers email by CC'ing hackers

2016-04-29 Thread Bruce Momjian
On Fri, Apr 29, 2016 at 02:49:38PM -0300, Alvaro Herrera wrote:
> Joshua D. Drake wrote:
> > On 04/29/2016 10:20 AM, Alvaro Herrera wrote:
> > 
> > >:0 Wh: msgid.lock
> > >| formail -D 65536 $HOME/.msgid.cache
> > 
> > And Alvaro drowns in the drool of his own sarcasm.
> 
> I was going to add an apology that this wasn't supposed to be insulting,
> only funny, but refrained.  There was no offense meant.

I took it as humorous.  I didn't turn on duplicate removal on Majordomo
so I could police people who cross-posted unnecessarily.  Are you saying
I shouldn't worry about that and just turn on duplicate removal?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Replying to a pgsql-committers email by CC'ing hackers

2016-04-29 Thread Alvaro Herrera
Joshua D. Drake wrote:
> On 04/29/2016 10:20 AM, Alvaro Herrera wrote:
> 
> >:0 Wh: msgid.lock
> >| formail -D 65536 $HOME/.msgid.cache
> 
> And Alvaro drowns in the drool of his own sarcasm.

I was going to add an apology that this wasn't supposed to be insulting,
only funny, but refrained.  There was no offense meant.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] atomic pin/unpin causing errors

2016-04-29 Thread Jeff Janes
I've bisected the errors I was seeing, discussed in
http://www.postgresql.org/message-id/CAMkU=1xqehc0ok4d+tkjfq1nvuho37pyrkhjp6q8oxifmx7...@mail.gmail.com

It looks like they first appear in:

commit 48354581a49c30f5757c203415aa8412d85b0f70
Author: Andres Freund 
Date:   Sun Apr 10 20:12:32 2016 -0700

Allow Pin/UnpinBuffer to operate in a lockfree manner.


I get the errors:

ERROR:  attempted to delete invisible tuple
STATEMENT:  update foo set count=count+1,text_array=$1 where text_array @> $2

And also:

ERROR:  unexpected chunk number 1 (expected 2) for toast value
85223889 in pg_toast_16424
STATEMENT:  update foo set count=count+1 where text_array @> $1

Once these errors start occurring, they happen often.  Usually the
"attempted to delete invisible tuple" happens first.

These errors show up after about 9 hours of run time.  The timing is
predictable enough that I don't think it is a purely stochastic race
condition.  It seems like some counter variable is overflowing.  But
it is not the ShmemVariableCache->nextXid counter, as I previously
speculated.  This test does not advance that counter fast enough for it
to wrap around within 9 hours of run time.  But I am at a loss as to what
other variable it might be. Since the system goes through a crash and
recovery every few seconds, any backend-local counters or
shared-memory counters would get reset upon recovery.  Right?

I think the invisible tuple referred to might be a tuple in the toast
table, not in the parent table.

I don't see the problem with a cassert-enabled build, probably because it
is just too slow to ever reach the point where the problem occurs.

Any suggestions about where or how to look?  I don't know if the
"attempted to delete invisible tuple" is the bug itself, or is just
tripping over corruption left behind by someone else.

(This was all run using Teodor's test-enabling patch
gin_alone_cleanup-4.patch, so as not to change horses in midstream.
Now that a version of that patch has been committed, I will try to
repeat this in HEAD)

Cheers,

Jeff


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add jsonb_compact(...) for whitespace-free jsonb to text

2016-04-29 Thread Alvaro Herrera
Thanks Alex for finding the previous thread.

Andrew Dunstan wrote:
> 
> On 04/28/2016 04:29 PM, Alvaro Herrera wrote:

> >Actually we did have someone come up with a patch to "normalize" how
> >JSON stuff was output, because our code seems to do it in three
> >different, inconsistent ways.  And our response was for them to get the
> >heck outta here, because we're so much in love with our current
> >practice that we don't need to refactor the three implementations into a
> >single one.
> That's a pretty bad mischaracterization of the discussion.

Sorry, I don't agree with that.

> What was proposed
> was broken, as I pointed out to the OP, and then again later to you when you
> asked about it.

I didn't find your argument convincing back then, but this doesn't have
enough value to me that I can spend much time arguing about it.

 
> I wouldn't necessarily be opposed to us having one or more of the following:
> 
> a) suppressing whitespace addition in all json generation and text output,
> possibly governed by a GUC setting so we could maintain behaviour
> compatibility if required

Sounds great to me, because we can unify the code so that we have *one*
piece to convert json to text instead of N, and not worry about the
non-relevant whitespace.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Replying to a pgsql-committers email by CC'ing hackers

2016-04-29 Thread Joshua D. Drake

On 04/29/2016 10:20 AM, Alvaro Herrera wrote:
> :0 Wh: msgid.lock
> | formail -D 65536 $HOME/.msgid.cache

And Alvaro drowns in the drool of his own sarcasm.


--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-04-29 Thread Joshua D. Drake

On 04/29/2016 09:40 AM, Joshua D. Drake wrote:
> On 04/29/2016 08:44 AM, Bruce Momjian wrote:
>
> Consider PgLogical, who is working on this outside of 2Q? Where is the
> git repo for it? Where is the bug tracker? Where is the mailing list?
> Oh, it's -hackers, except that it isn't, is it?



FTR: I am not attacking any one entity here. It is just that PgLogical 
is a good example of my point. I certainly recognize and have publicly 
applauded on multiple occasions the good work 2Q is doing. Just as I 
have with EDB, Citus and Crunchy.


Sincerely,

JD

--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Replying to a pgsql-committers email by CC'ing hackers

2016-04-29 Thread Alvaro Herrera
Bruce Momjian wrote:
> 
> What is the recommended procedure for replying to a pgsql-committers
> messsage?  Is cross-posting to hackers really the right approach, as it
> causes duplicate messages.  (pgsql-committers CC removed.)

CCing pgsql-hackers when replying to -committers was discussed some
time ago, and the consensus seemed to be that that's preferable to
keeping the discussion in -committers only, because that one is so much
smaller.  Whether -committers is kept in cc/to or not seems not to be
important.  This is all in the archives somewhere ...


You must be new to this email thing.

Any millennial will tell you that there's no duplicate because Gmail
already de-duplicates them.  You would only see one copy, ever.

Now, if you still live in a cave and don't use Gmail (like, say, me),
you could still change the options in Majordomo to send a unique copy of
each message, that is to say change the delivery class to "unique"
rather than "each".  Then it will see that you have the same message in
two lists and send you only one.

Now, whenever you're in the CC list of a message (something which I'm
told is somewhat common around here) you would additionally get that
copy too!  There's nothing majordomo could do about that of course.
Again the solution is to use Gmail (what else!).  If you won't do that,
you can install a procmail recipe to remove the dupes, say

:0 Wh: msgid.lock
| formail -D 65536 $HOME/.msgid.cache

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Replying to a pgsql-committers email by CC'ing hackers

2016-04-29 Thread Bruce Momjian

What is the recommended procedure for replying to a pgsql-committers
message?  Is cross-posting to hackers really the right approach, as it
causes duplicate messages?  (pgsql-committers CC removed.)

---

On Tue, Apr 12, 2016 at 10:38:56AM -0700, Andres Freund wrote:
> On 2016-04-12 16:49:25 +, Kevin Grittner wrote:
> > On a big NUMA machine with 1000 connections in saturation load
> > there was a performance regression due to spinlock contention, for
> > acquiring values which were never used.  Just fill with dummy
> > values if we're not going to use them.
> 
> FWIW, I could see massive regressions with just 64 connections.
> 
> I'm a bit scared of having an innoccuous sounding option regress things
> by a factor of 10. I think, in addition to this fix, we need to actually
> solve the scalability issue here to a good degree.  One way to do so is
> to apply the parts of 0001 in
> http://archives.postgresql.org/message-id/20160330230914.GH13305%40awork2.anarazel.de
> defining PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY and rely on that. Another
> to apply the whole patch and simply put the lsn in an 8 byte atomic.
> 
> - Andres
> 
> 
> -- 
> Sent via pgsql-committers mailing list (pgsql-committ...@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-committers

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-04-29 Thread Joshua D. Drake

On 04/29/2016 08:44 AM, Bruce Momjian wrote:
> On Tue, Apr 12, 2016 at 11:07:04PM +0300, Oleg Bartunov wrote:
>> Our roadmap http://www.postgresql.org/developer/roadmap/ is the
>> problem. We don't have a clear roadmap and that's why we cannot plan a
>> future feature-full release. There are several postgres-centric
>> companies, which have most of the developers, who do all major
>> contributions. All these companies have their roadmaps, but not the
>> community.
>
> I would be concerned if company roadmaps overtly affected the community
> roadmap.  In general, I find company roadmaps to be very short-sighted
> and quickly changed based on the demands of specific users/customers ---
> something we don't want to imitate.
>
> We do want company roadmaps to affect the community roadmap, but in a
> healthy, long-term way, and I think, in general, that is happening.


The roadmap is not the problem; it is the lack of cooperation. Many
companies are now developing features in a silo and then presenting them
to the community. Instead we should be working with those companies to
have them develop transparently so others can be a part of the process.


If the feature is going to be submitted to core anyway (or open source) 
why wouldn't we just do that? Why wouldn't EDB develop directly within 
the Pg infrastructure. Why wouldn't we build teams around the best and 
brightest between EDB, 2Q and Citus?


Egos.

Consider PgLogical, who is working on this outside of 2Q? Where is the
git repo for it? Where is the bug tracker? Where is the mailing list?
Oh, it's -hackers, except that it isn't, is it?


It used to be that everyone got together and worked together before the 
patch review process. Now it seems like it is a competition between 
companies to see whose ego can get the most inflated via press releases 
because they developed X for Y.


If the companies were to come together and truly recognize that profit
is the reward, not the goal, then our community would be much stronger for it.


Sincerely,

JD


--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-04-29 Thread Julien Rouhaud
On 29/04/2016 18:05, Tom Lane wrote:
> Julien Rouhaud  writes:
>> The segfault is caused by quals_match_foreign_key() calling get_leftop()
>> and get_rightop() on a ScalarArrayOpExpr node.
> 
>> Reordering the common fields of OpExpr and ScalarArrayOpExpr at the
>> beginning of the struct so the get_*op() work with either (as in
>> attached patch) fixes the issue.
> 
>> I'm not sure that assuming this compatibility is the right way to fix
>> this though.
> 
> It certainly isn't.
> 

Agreed. The new attached patch handles each node tag explicitly.
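For context, a ScalarArrayOpExpr is what an "= ANY (...)" qual parses
to. A hypothetical query of the crashing shape, given a foreign key from
orders.customer_id to customers.id:

select *
from orders o
join customers c on o.customer_id = any (array[c.id]);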

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 759566a..3b9715b 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -100,6 +100,7 @@ typedef struct
 	bool		allow_restricted;
 } has_parallel_hazard_arg;
 
+static List *get_opargs(const Expr *clause);
 static bool aggregates_allow_partial_walker(Node *node,
 									partial_agg_context *context);
 static bool contain_agg_clause_walker(Node *node, void *context);
@@ -197,6 +198,19 @@ make_opclause(Oid opno, Oid opresulttype, bool opretset,
return (Expr *) expr;
 }
 
+static List *
+get_opargs(const Expr *clause)
+{
+   if (IsA(clause, OpExpr))
+   return ((OpExpr *) clause)->args;
+
+   if (IsA(clause, ScalarArrayOpExpr))
+   return ((ScalarArrayOpExpr *) clause)->args;
+
+   elog(ERROR, "unrecognized node type: %d",
+   (int) nodeTag(clause));
+}
+
 /*
  * get_leftop
  *
@@ -206,10 +220,10 @@ make_opclause(Oid opno, Oid opresulttype, bool opretset,
 Node *
 get_leftop(const Expr *clause)
 {
-   const OpExpr *expr = (const OpExpr *) clause;
+   const List *args = get_opargs(clause);
 
-   if (expr->args != NIL)
-   return linitial(expr->args);
+   if (args != NIL)
+   return linitial(args);
else
return NULL;
 }
@@ -223,10 +237,10 @@ get_leftop(const Expr *clause)
 Node *
 get_rightop(const Expr *clause)
 {
-   const OpExpr *expr = (const OpExpr *) clause;
+   const List *args = get_opargs(clause);
 
-   if (list_length(expr->args) >= 2)
-   return lsecond(expr->args);
+   if (list_length(args) >= 2)
+   return lsecond(args);
else
return NULL;
 }

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-04-29 Thread Merlin Moncure
On Fri, Apr 29, 2016 at 11:02 AM, Simon Riggs  wrote:
> On 12 April 2016 at 20:25, Josh berkus  wrote:
>
>>
>> Here's the features I can imagine being worth major backwards
>> compatibility breaks:
>>
>> 1. Fully pluggable storage with a clean API.
>>
>> 2. Total elimination of VACUUM or XID freezing
>>
>> 3. Fully transparent-to-the user MM replication/clustering or sharding.
>>
>> 4. Perfect partitioning (i.e. transparent to the user, supports keys &
>> joins, supports expressions on partition key, etc.)
>>
>> 5. Transparent upgrade-in-place (i.e. allowing 10.2 to use 10.1's tables
>> without pg_upgrade or other modification).
>>
>> 6. Fully pluggable parser/executor with a good API
>>
>> That's pretty much it.  I can't imagine anything else which would
>> justify imposing a huge upgrade barrier on users.  And, I'll point out,
>> that in the above list:
>>
>> * nobody is currently working on anything in core except #4.
>>
>> * we don't *know* that any of the above items will require a backwards
>> compatibility break.
>>
>> People keep talking about "we might want to break compatibility/file
>> format one day".  But nobody is working on anything which will and
>> justifies it.
>
> Of your list, I know 2ndQuadrant developers are working on 1, 3, 5.
> 6 has been discussed recently on-list by other hackers.

#5 (upgrade without pg_upgrade or dump/restore) from my perspective
would be the most useful feature of all time, and would justify the
9.x to 10.x bump all by itself under the existing convention (I think?)
that major project milestones advance that number.

7.x ?? (just coming on the scene then)
8.x windows, pitr
9.x replication
10.x easy upgrades

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add jsonb_compact(...) for whitespace-free jsonb to text

2016-04-29 Thread David G. Johnston
On Fri, Apr 29, 2016 at 7:15 AM, Merlin Moncure  wrote:

> Andrew mentions several solutions.  I like them all except I would
> prefer not to introduce a GUC for controlling the output format.  I do
> not think it's a good idea to set the expectation that clients can
> rely on text out byte for byte for any type including json.
>
> ​+1​

​I agree on the GUC point and on the general ​desirability of making jsonb
output not include insignificant whitespace.

​There seems to be enough coders who agree to this principle: could one of
you please write a patch and start a new thread specifically for this
change.  If we go that route the need for the subject of this thread
becomes moot.

Thanks!

David J.


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-04-29 Thread Tom Lane
Julien Rouhaud  writes:
> The segfault is caused by quals_match_foreign_key() calling get_leftop()
> and get_rightop() on a ScalarArrayOpExpr node.

> Reordering the common fields of OpExpr and ScalarArrayOpExpr at the
> beginning of the struct so the get_*op() work with either (as in
> attached patch) fixes the issue.

> I'm not sure that assuming this compatibility is the right way to fix
> this though.

It certainly isn't.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-04-29 Thread Simon Riggs
On 12 April 2016 at 20:25, Josh berkus  wrote:


> Here's the features I can imagine being worth major backwards
> compatibility breaks:
>
> 1. Fully pluggable storage with a clean API.
>
> 2. Total elimination of VACUUM or XID freezing
>
> 3. Fully transparent-to-the user MM replication/clustering or sharding.
>
> 4. Perfect partitioning (i.e. transparent to the user, supports keys &
> joins, supports expressions on partition key, etc.)
>
> 5. Transparent upgrade-in-place (i.e. allowing 10.2 to use 10.1's tables
> without pg_upgrade or other modification).
>
> 6. Fully pluggable parser/executor with a good API
>
> That's pretty much it.  I can't imagine anything else which would
> justify imposing a huge upgrade barrier on users.  And, I'll point out,
> that in the above list:
>
> * nobody is currently working on anything in core except #4.
>
> * we don't *know* that any of the above items will require a backwards
> compatibility break.
>
> People keep talking about "we might want to break compatibility/file
> format one day".  But nobody is working on anything which will and
> justifies it.
>

Of your list, I know 2ndQuadrant developers are working on 1, 3, 5.
6 has been discussed recently on-list by other hackers.

I'm not really sure what 2 consists of; presumably this means "take the
pain away" rather than removal of MVCC, which is the root cause of those
secondary effects.

I don't think the current focus on manually intensive DDL partitioning is
the right way forwards. I did once; I don't now.

-- 
Simon Riggshttp://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-04-29 Thread Julien Rouhaud
On 29/04/2016 13:20, Michael Paquier wrote:
> On Fri, Apr 29, 2016 at 7:25 PM, Stefan Huehner  wrote:
>> If you need any more info or testing done just let me know.
> 
> The information you are providing is sufficient to reproduce the
> problem, thanks! I have added this bug to the list of open items for
> 9.6.
> 

The segfault is caused by quals_match_foreign_key() calling get_leftop()
and get_rightop() on a ScalarArrayOpExpr node.

Reordering the common fields of OpExpr and ScalarArrayOpExpr at the
beginning of the struct so the get_*op() work with either (as in
attached patch) fixes the issue.

I'm not sure that assuming this compatibility is the right way to fix
this though.

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 1ffc0a1..dffe129 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -468,12 +468,12 @@ typedef struct OpExpr
 	Expr		xpr;
 	Oid			opno;			/* PG_OPERATOR OID of the operator */
 	Oid			opfuncid;		/* PG_PROC OID of underlying function */
-	Oid			opresulttype;	/* PG_TYPE OID of result value */
-	bool		opretset;		/* true if operator returns set */
-	Oid			opcollid;		/* OID of collation of result */
 	Oid			inputcollid;	/* OID of collation that operator should use */
 	List	   *args;			/* arguments to the operator (1 or 2) */
 	int			location;		/* token location, or -1 if unknown */
+	Oid			opresulttype;	/* PG_TYPE OID of result value */
+	bool		opretset;		/* true if operator returns set */
+	Oid			opcollid;		/* OID of collation of result */
 } OpExpr;
 
 /*
@@ -511,10 +511,10 @@ typedef struct ScalarArrayOpExpr
 	Expr		xpr;
 	Oid			opno;			/* PG_OPERATOR OID of the operator */
 	Oid			opfuncid;		/* PG_PROC OID of underlying function */
-	bool		useOr;			/* true for ANY, false for ALL */
 	Oid			inputcollid;	/* OID of collation that operator should use */
 	List	   *args;			/* the scalar and array operands */
 	int			location;		/* token location, or -1 if unknown */
+	bool		useOr;			/* true for ANY, false for ALL */
 } ScalarArrayOpExpr;
 
 /*

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-04-29 Thread Bruce Momjian
On Tue, Apr 12, 2016 at 11:07:04PM +0300, Oleg Bartunov wrote:
> Our roadmap http://www.postgresql.org/developer/roadmap/ is the problem. We
> don't have clear roadmap and that's why we cannot plan future feature full
> release. There are several postgres-centric companies, which have most of
> developers, who do all major contributions. All these companies has their
> roadmaps, but not the community.

I would be concerned if company roadmaps overtly affected the community
roadmap.  In general, I find company roadmaps to be very short-sighted
and quickly changed based on the demands of specific users/customers ---
something we don't want to imitate.

We do want company roadmaps to affect the community roadmap, but in a
healthy, long-term way, and I think, in general, that is happening.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] postgresql 9.3.10, FIPS mode and DRBG issues.

2016-04-29 Thread Rodney Lott
Sorry for this repost: I forgot the subject line! My bad ... :-<

Hi, there.

First, my particulars:

* Ubuntu Trusty build and runtime environment

* PostgreSQL 9.3.10 Ubuntu source code

* Using a FIPS enabled version of OpenSSL (i.e. 1.0.1p version of the 
library and 2.0.9 of the FIPS canister source code)

* I initially posted this to the pgsql-general list a few weeks ago, 
but I wasn't able to get enough specific information to resolve my issues. 
Hence why I am posting this here.

* I am new to FIPS and postgresql in general (i.e. working with them 
for a few months)

I've been trying to get the postgresql packages to work in FIPS mode. To 
accomplish this, I've patched the Ubuntu source code with the patch that is 
attached to this message.

The main postgresql server runs fine as expected in either FIPS or non-FIPS 
modes. However, when I try to use the psql command in FIPS mode, I get the 
following error:

# psql -h 127.0.0.1 -U postgres -d sslmode=require
psql: SSL SYSCALL error: EOF detected

I used the gdb debugger to try to find where in the backend the command was 
failing. The backtrace on the server side suggests that the problem involves 
the key-exchange failing:

(gdb) bt
#0  0x7f40183e8f20 in __nanosleep_nocancel () at 
../sysdeps/unix/syscall-template.S:81
#1  0x7f40183e8dd4 in __sleep (seconds=0) at 
../sysdeps/unix/sysv/linux/sleep.c:137
#2  0x7f40196a95ce in DH_generate_key () from 
/usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0
#3  0x7f40199e8ba6 in ssl3_send_server_key_exchange () from 
/usr/lib/x86_64-linux-gnu/libssl.so.1.0.0
#4  0x7f40199ec18b in ssl3_accept () from 
/usr/lib/x86_64-linux-gnu/libssl.so.1.0.0
#5  0x7f40199fb8b3 in ssl23_accept () from 
/usr/lib/x86_64-linux-gnu/libssl.so.1.0.0
#6  0x5618082567a4 in open_server_SSL (port=0x561808e05700) at 
/home/rlott/git/stash/postgresql-fips/postgresql-9.3-9.3.10/build/../src/backend/libpq/be-secure.c:925
#7  secure_open_server (port=port@entry=0x561808e05700) at 
/home/rlott/git/stash/postgresql-fips/postgresql-9.3-9.3.10/build/../src/backend/libpq/be-secure.c:221
#8  0x5618082c7eb8 in ProcessStartupPacket (port=port@entry=0x561808e05700, 
SSLdone=SSLdone@entry=0 '\000') at 
/home/rlott/git/stash/postgresql-fips/postgresql-9.3-9.3.10/build/../src/backend/postmaster/postmaster.c:1921
#9  0x5618081030f9 in BackendInitialize (port=0x561808e05700) at 
/home/rlott/git/stash/postgresql-fips/postgresql-9.3-9.3.10/build/../src/backend/postmaster/postmaster.c:4036
#10 BackendStartup (port=0x561808e05700) at 
/home/rlott/git/stash/postgresql-fips/postgresql-9.3-9.3.10/build/../src/backend/postmaster/postmaster.c:3807
#11 ServerLoop () at 
/home/rlott/git/stash/postgresql-fips/postgresql-9.3-9.3.10/build/../src/backend/postmaster/postmaster.c:1690
#12 0x5618082cace1 in PostmasterMain (argc=5, argv=<optimized out>) at 
/home/rlott/git/stash/postgresql-fips/postgresql-9.3-9.3.10/build/../src/backend/postmaster/postmaster.c:1315
#13 0x561808103fb3 in main (argc=5, argv=0x561808db6970) at 
/home/rlott/git/stash/postgresql-fips/postgresql-9.3-9.3.10/build/../src/backend/main/main.c:227

I tracked it down to the following code in the OpenSSL 2.0.9 canister code:

int FIPS_drbg_generate(DRBG_CTX *dctx, unsigned char *out, size_t outlen,
int prediction_resistance,
const unsigned char *adin, size_t adinlen)
{
int r = 0;

if (FIPS_selftest_failed())
{
FIPSerr(FIPS_F_FIPS_DRBG_GENERATE, FIPS_R_SELFTEST_FAILED);
return 0;
}

if (!fips_drbg_check(dctx))
return 0;

if (dctx->status != DRBG_STATUS_READY
&& dctx->status != DRBG_STATUS_RESEED)
{
if (dctx->status == DRBG_STATUS_ERROR)
r = FIPS_R_IN_ERROR_STATE;
else if(dctx->status == DRBG_STATUS_UNINITIALISED)
r = FIPS_R_NOT_INSTANTIATED;
goto end;
}
...

The place where it fails is where dctx->status == DRBG_STATUS_UNINITIALIZED 
(i.e. 0).

So, my question is this: In FIPS mode, what would cause the random number 
generation to not initialize? I have put print statements in the postgresql 
code such that I know that it is in FIPS mode properly. I know that the 
dctx->status field (dctx points to a "static DRBG_CTX ossl_dctx" structure) 
is initialized to 1 in the main process. It appears that this initialization 
doesn't get propagated to other backends or the SSL transaction above.

If any of the developers have some insight into this, I would appreciate it.

Thanks,

Rodney Lott




Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-04-29 Thread Bruce Momjian
On Fri, Apr 29, 2016 at 08:37:57AM -0700, Joshua Drake wrote:
> >Technically, this is exactly what pg_upgrade does.  I think what you
> >really mean is for the backend binary to be able to read the system
> >tables and WAL files of the old clusters --- something I can't see us
> >implementing anytime soon.
> >
> 
> For the most part, pg_upgrade is good enough. There are exceptions and it
> does need a more thorough test suite but as a whole, it works. As nice as
> being able to install 9.6 right on top of 9.5 and have 9.6 magically work,
> it is certainly not a *requirement* anymore.

Yes, the trick would be making the new 9.6 features work with the
existing 9.5 system tables that don't know about the 9.6 features.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-04-29 Thread Joshua D. Drake

On 04/29/2016 08:32 AM, Bruce Momjian wrote:

On Tue, Apr 12, 2016 at 11:25:21AM -0700, Josh Berkus wrote:

Here's the features I can imagine being worth major backwards
compatibility breaks:

...

5. Transparent upgrade-in-place (i.e. allowing 10.2 to use 10.1's tables
without pg_upgrade or other modification).


Technically, this is exactly what pg_upgrade does.  I think what you
really mean is for the backend binary to be able to read the system
tables and WAL files of the old clusters --- something I can't see us
implementing anytime soon.



For the most part, pg_upgrade is good enough. There are exceptions and 
it does need a more thorough test suite but as a whole, it works. As 
nice as being able to install 9.6 right on top of 9.5 and have 9.6 
magically work, it is certainly not a *requirement* anymore.


Sincerely,

JD

--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-04-29 Thread Bruce Momjian
On Tue, Apr 12, 2016 at 11:25:21AM -0700, Josh Berkus wrote:
> Here's the features I can imagine being worth major backwards
> compatibility breaks:
...
> 5. Transparent upgrade-in-place (i.e. allowing 10.2 to use 10.1's tables
> without pg_upgrade or other modification).

Technically, this is exactly what pg_upgrade does.  I think what you
really mean is for the backend binary to be able to read the system
tables and WAL files of the old clusters --- something I can't see us
implementing anytime soon.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers



Re: [HACKERS] Lets (not) break all the things. Was: [pgsql-advocacy] 9.6 -> 10.0

2016-04-29 Thread Bruce Momjian
On Tue, Apr 12, 2016 at 01:43:41PM -0400, Robert Haas wrote:
> I'm going to throw down the gauntlet (again) and say more or less what
> I previously said on the pgsql-advocacy thread.  I think that:
> 
> 1. Large backward compatibility breaks are bad.  Therefore, if any of
> these things are absolutely impossible to do without major
> compatibility breaks, we shouldn't do them at all.
> 
> 2. Small backward compatibility breaks are OK, but don't require doing
> anything special to the version number.
> 
> 3. There's no value in aggregating many small backward compatibility
> breaks into a single release.  That increases pain for users, rather
> than decreasing it, and slows down development, too, because you have
> to wait for the special magic release where it's OK to hose users.  We
> typically have a few small backward compatibility breaks in each
> release, and that's working fine, so I see little reason to change it.

Well, this is true for SQL-level and admin-level changes, but it does
make sense to group pg_upgrade breaks into a single release.  I think
the plan is for us to have logical replication usable before we make
such a change.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-04-29 Thread Petr Jelinek

On 29/04/16 16:02, Michael Paquier wrote:

On Fri, Apr 29, 2016 at 9:13 PM, Andrew Dunstan  wrote:

Yeah, I noticed the ugliness, should have fixed it. Applied your fix and
committed.


Thanks for the commit!



+1

After a bit of fighting with the system, "caecilian" reported the first 
successful build to the buildfarm.


--
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] SPI_exec ERROR in pl/r of R 3.2.4 on PostgreSQL on Windows 7

2016-04-29 Thread Andre Mikulec
I am working with pl/r compiled for R 3.2.4 64-bit on PostgreSQL 9.5.1 64-bit 
on Windows 7 64-bit.



At the end of this issue, I am getting the following error.
https://github.com/postgres-plr/plr/issues/1

  ERROR:  could not open file "base/12373/2663": No such file or directory
  LINE 1: SELECT NULL FROM pg_catalog.pg_class WHERE relname = 'plr_mo...
 ^
QUERY:  SELECT NULL FROM pg_catalog.pg_class WHERE relname = 'plr_modules' AND 
relnamespace = 2200

The error seems to be coming from SPI_exec.

If I run this SQL manually from psql
SELECT NULL FROM pg_catalog.pg_class WHERE relname = 'plr_modules' AND 
relnamespace = 2200

The result is returned and is correct.

*The problem is not my hard disk.*
I am running multiple versions of PostgreSQL on the same hard disk.

The following run fine:
Regular Windows pre-compiled PostgreSQL 9.4.1 downloaded from postgresql.org
Regular Windows pre-compiled PostgreSQL 9.5.1 downloaded from postgresql.org
Regular Windows pre-compiled PostgreSQL 9.5.2 downloaded from postgresql.org

The problem is not security.
I gave 'Full Access' to the Administrators group, the Everyone group, and the
Users group on all of the PostgreSQL directories containing data and/or
binaries.

I have shut down all antivirus software (AVG).

The pl/r plr.dll built for R 3.1.2 64-bit runs fine on PostgreSQL 9.4.1 64-bit 
on Windows 7 64-bit.
The pl/r source code has not changed at least since PostgreSQL 9.4.1.

I have physically examined the pl/r source code.
It seems relatively simple to understand.

The error seems to come only from here:
https://raw.githubusercontent.com/jconway/plr/master/plr.c


static bool
haveModulesTable(Oid nspOid)
{
StringInfo sql = makeStringInfo();
char   *sql_format = "SELECT NULL "
"FROM pg_catalog.pg_class "
"WHERE "
"relname = 'plr_modules' AND "
"relnamespace = %u";
int  spiRc;

appendStringInfo(sql, sql_format, nspOid);

spiRc = SPI_exec(sql->data, 1);
if (spiRc != SPI_OK_SELECT)
/* internal error */
elog(ERROR, "haveModulesTable: select from pg_class failed");

return SPI_processed == 1;
}


I noticed that the usage of the SPI_exec function *seems* to be similar 
elsewhere in the source code.

https://raw.githubusercontent.com/postgres/postgres/3aff33aa687e47d52f453892498b30ac98a296af/src/test/regress/regress.c

query = (char *) palloc(100 + NAMEDATALEN * 3 +
strlen(fieldval) + strlen(fieldtype));

sprintf(query, "insert into %s select * from %s where %s = '%s'::%s",
SPI_getrelname(rel), SPI_getrelname(rel),
SPI_fname(tupdesc, 1),
fieldval, fieldtype);

if ((ret = SPI_exec(query, 0)) < 0)
elog(ERROR, "funny_dup17 (fired %s) on level %3d: SPI_exec (insert ...) 
returned %d",
when, *level, ret);



And SPI_exec *seems* to be used similarly here:

https://raw.githubusercontent.com/postgres/postgres/8b99edefcab1e82c43139a2c7dc06d31fb27b3e4/src/backend/commands/matview.c

StringInfoData querybuf;
initStringInfo(&querybuf);

/* Analyze the temp table with the new contents. */
appendStringInfo(&querybuf, "ANALYZE %s", tempname);
if (SPI_exec(querybuf.data, 0) != SPI_OK_UTILITY)
    elog(ERROR, "SPI_exec failed: %s", querybuf.data);

It is defined here.

https://raw.githubusercontent.com/postgres/postgres/3aff33aa687e47d52f453892498b30ac98a296af/src/include/executor/spi.h

extern int SPI_execute(const char *src, bool read_only, long tcount);


https://raw.githubusercontent.com/postgres/postgres/39c283e498de1bb7c3d5beadfffcf3273ae8cc27/src/backend/executor/spi.c

/* Parse, plan, and execute a query string */
int
SPI_execute(const char *src, bool read_only, long tcount)
{
_SPI_plan plan;
int res;

if (src == NULL || tcount < 0)
return SPI_ERROR_ARGUMENT;

res = _SPI_begin_call(true);
if (res < 0)
return res;

memset(&plan, 0, sizeof(_SPI_plan));
plan.magic = _SPI_PLAN_MAGIC;
plan.cursor_options = 0;

_SPI_prepare_oneshot_plan(src, &plan);

res = _SPI_execute_plan(&plan, NULL,
InvalidSnapshot, InvalidSnapshot,
read_only, true, tcount);

_SPI_end_call(true);
return res;
}


/* Obsolete version of SPI_execute */
int
SPI_exec(const char *src, long tcount)
{
return SPI_execute(src, false, tcount);
}


My big question is the following:

Has there been any change in the PostgreSQL I/O code from 9.4.1 to 9.5.1
that may possibly be causing this problem (in SPI_exec or elsewhere)?

  ERROR:  could not open file "base/12373/2663": No such file or directory

Any answers with any possibilities of any directions are welcome.

Thank you,
Andre Mikulec
andre_miku...@hotmail.com





Re: [HACKERS] Timeline following for logical slots

2016-04-29 Thread Craig Ringer
On 29 April 2016 at 15:40, Craig Ringer  wrote:


> I don't think pg_recvlogical can do anything about the need for that dummy
> write, since the client has no way to determine the exact LSN of the commit
> record of the xact of interest. It can't rely
> on pg_current_xlog_insert_location() or pg_current_xlog_location() since
> autovacuum or a checkpoint might've written xlog since. Logical streaming
> replication doesn't have a non-blocking mode where it returns immediately
> if it'd have to wait for more xlog so we can't just send off the most
> recent server LSN as the endpoint.
>

(Patch attached. Blah blah explanation blah):

With this patch pg_recvlogical takes a new --endpos LSN argument, and will
exit if either:

* it receives an XLogData message with dataStart >= endpos; or
* it receives a keepalive with walEnd >= endpos

The latter allows it to work without needing a dummy transaction to make it
see a data message after endpos. If there's nothing to read on the socket
until a keepalive we know that the server has nothing to send us, and if
walEnd has passed endpos we know nothing can have committed before endpos.
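
In code terms the stopping rule is roughly this (a simplified sketch, not
the actual patch hunk; the function name and local typedefs here are
illustrative only):

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;            /* cf. src/include/access/xlogdefs.h */
#define InvalidXLogRecPtr ((XLogRecPtr) 0)

/*
 * msgtype is 'w' for an XLogData message (lsn = its dataStart field)
 * or 'k' for a keepalive (lsn = its walEnd field).
 */
static bool
reached_endpos(char msgtype, XLogRecPtr lsn, XLogRecPtr endpos)
{
    if (endpos == InvalidXLogRecPtr)
        return false;               /* no --endpos given */
    if (msgtype == 'w')
        return lsn >= endpos;       /* don't write records at/past endpos */
    if (msgtype == 'k')
        return lsn >= endpos;       /* nothing before endpos still pending */
    return false;
}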


The way I've written things the endpos is the point where we stop receiving
and exit, so if a record with start lsn >= endpos is received we'll exit
without writing it.

I thought about writing out the record before exiting if the record start
LSN is exactly endpos. That'd be handy in cases where the client knows a
commit's LSN and wants everything up to that commit. But it's easy enough
in this case for the client to set endpos to the commit start lsn + 1, so
it's not like the current behaviour stops you doing anything, and it means
the code can just test endpos and exit. pg_current_xlog_insert_location()
will return at least the lsn of the last commit + 1, so you'll get the
expected behaviour for free there. It does mean we might wait for the next
walsender keepalive or status update before we exit, though, so if someone
feels strongly that endpos should be an inclusive bound I can do that. It's
just a bit uglier in the code.

I can't add a "number of xacts" filter like the SQL interface has because
pg_recvlogical has no idea which records represent a commit, so it's not
possible without changing the protocol. I'm not convinced a "number of
messages" filter is particularly useful. I could add a timeout, but it's
easy enough to do that in a wrapper (like IPC::Run). So I'm sticking with
just the LSN filter for now.

Also, because pg_recvlogical isn't aware of transaction boundaries, setting
endpos might result in a partial transaction being output. That can happen
if endpos is after the end of the last wanted xact and some other xact,
containing changes made before endpos, commits after endpos but before the
next status update/keepalive is sent. That xact won't be consumed from the
server and
will just be sent when the slot is next read from. This won't result in
unpredictable output for testing since there we control what other xacts
run and will generally exit based on walsender status updates/keepalives.

Here's the patch. Docs included. Comments?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From cc54f99d21de3c573873cce3467237caccfb1a33 Mon Sep 17 00:00:00 2001
From: Craig Ringer 
Date: Fri, 29 Apr 2016 18:18:19 +0800
Subject: [PATCH] Allow a stop LSN to be specified to pg_recvlogical

pg_recvlogical just runs until cancelled or until the upstream
server disconnects. For some purposes, especially testing, it's
useful to have the ability to stop receive at a specified LSN
without having to parse the output and deal with buffering issues,
etc.

Add a --endpos parameter that takes the LSN at which no further
messages should be written and receive should stop.
---
 doc/src/sgml/ref/pg_recvlogical.sgml   |  36 
 src/bin/pg_basebackup/pg_recvlogical.c | 102 +
 2 files changed, 127 insertions(+), 11 deletions(-)

diff --git a/doc/src/sgml/ref/pg_recvlogical.sgml b/doc/src/sgml/ref/pg_recvlogical.sgml
index 9d0b58b..8f1e8f6 100644
--- a/doc/src/sgml/ref/pg_recvlogical.sgml
+++ b/doc/src/sgml/ref/pg_recvlogical.sgml
@@ -155,6 +155,42 @@ PostgreSQL documentation
  
 
  
+  --endpos=lsn
+  
+   
+In --start mode, automatically stop replication and
+exit with normal exit status 0 when receive passes the specified LSN.
+Because of logical decoding's reordering of operations this actually
+means that no change that's part of a transaction that committed before
+endpos is still waiting to be received. There can still be
+pending changes made before endpos that're part of transactions that
+committed after endpos or are not yet committed.
+   
+   
+The endpos option is not aware of transaction boundaries and may
+truncate output partway through a 

Re: [HACKERS] Typo

2016-04-29 Thread Magnus Hagander
On Fri, Apr 29, 2016 at 1:37 PM, Thomas Munro  wrote:

> Hi,
>
> Here is a patch to fix a typo in dsm_impl.h.
>

Applied, thanks.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Add jsonb_compact(...) for whitespace-free jsonb to text

2016-04-29 Thread Merlin Moncure
On Wed, Apr 27, 2016 at 4:05 PM, Stephen Frost  wrote:
> * Merlin Moncure (mmonc...@gmail.com) wrote:
>> On Tue, Apr 26, 2016 at 11:49 AM, Stephen Frost  wrote:
>> > As I mentioned to Sehrope on IRC, at least for my 2c, if you want a
>> > compact JSON format to reduce the amount of traffic over the wire or to
>> > do things with on the client side, we should probably come up with a
>> > binary format, rather than just hack out the whitespace.  It's not like
>> > representing numbers using ASCII characters is terribly efficient
>> > either.
>>
>> -1
>>
>> This will benefit pretty much nobody unless you are writing a hand
>> crafted C application that consumes and processes the data directly.
>
> That's not accurate.  All that's needed is for the libraries which
> either wrap libpq or re-implement it to be updated to understand the
> format and then convert the data into whatever structure makes sense for
> the language (or library that the language includes for working with
> JSON data).

Sure, that's pretty easy.   Note, I cowrote the only libpq wrapper
that demystifies pg binary formats, libpqtypes.  I can tell you that
binary formats are much faster than text formats in any cases where
parsing is non trivial -- geo types, timestamp types, containers etc.
 However I would be very surprised if postgres binary format json
would replace language parsing of json in any popular language like
java for common usage.

I'll go further.   Postgres json support has pretty much made our
binary formats obsolete.  The main problem with text format data was
sending complex structured data to the client over our overlapping
escape mechanisms; client side parsing was slow and in certain
scenarios backslashes would proliferate exponentially.  json support
eliminates all of those problems and the performance advantages of
binary support (mainly parsing of complex types)  rarely justify the
development headaches.  These days, for the vast majority of data
traffic to the application it's a single row, single column json
coming in and out of the database.

>> I'd venture to guess this is a tiny fraction of pg users these days.
>> I do not understand at all the objection to removing whitespace.
>> Extra whitespace does nothing but pad the document as humans will
>> always run the document through a prettifier tuned to their specific
>> requirements (generally starting with, intelligent placement of
>> newlines) if reading directly.
>
> The objection is that it's a terribly poor solution as it simply makes
> things ugly for a pretty small amount of improvement.  Looking at it
> from the perspective of just "whitespace is bad!"

Whitespace is bad, because it simply pads documents on every stage of
processing.  You simply can't please everyone in terms of where it
should go so you don't and reserve that functionality for
prettification functions.  json is for *data serialization*.  We
should not inject extra characters for aesthetics in the general case;
reducing memory consumption by 10% on both the client and server
during parse is a feature.

Andrew mentions several solutions.  I like them all except I would
prefer not to introduce a GUC for controlling the output format.  I do
not think it's a good idea to set the expectation that clients can
rely on text out byte for byte for any type including json.

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Crash in apply_projection_to_path

2016-04-29 Thread Tom Lane
Amit Kapila  writes:
>> On Thu, Apr 28, 2016 at 10:06 PM, Tom Lane  wrote:
>>> I'd be inclined to think that it's silly to build GatherPaths in advance
>>> of having finalized the list of partial paths for a rel.

> What's happening here is that to form joinrel, we need to call
> add_paths_to_joinrel() with both outer and inner relation twice, once with
> rel1 as outer relation and rel2 as inner relation and vice versa.  So now
> the second call to add_paths_to_joinrel() can replace a partial path which
> is being referenced by GatherPath generated in first call.  I think we
> should generate gather paths for join rel after both the calls
> to add_paths_to_joinrel() aka in make_join_rel().  Attached patch on above
> lines fixes the problem for me.

make_join_rel is certainly not far enough down the call stack to solve
this problem.  It can, and typically will, be invoked multiple times
for the same target join relation.

One possible answer is to do it in standard_join_search, just before
the set_cheapest call for each join relation.  You'd need to account
for the issue in GEQO search as well.
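
Roughly like this (a sketch only; generate_gather_paths() here names a
hypothetical helper that builds GatherPaths from rel->partial_pathlist
once that list is final):

    /* in standard_join_search(), for each join level "lev" */
    ListCell   *lc;

    foreach(lc, root->join_rel_level[lev])
    {
        RelOptInfo *rel = (RelOptInfo *) lfirst(lc);

        /* partial paths can no longer change, so Gather paths are safe */
        generate_gather_paths(root, rel);

        set_cheapest(rel);
    }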

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-04-29 Thread Michael Paquier
On Fri, Apr 29, 2016 at 9:13 PM, Andrew Dunstan  wrote:
> Yeah, I noticed the ugliness, should have fixed it. Applied your fix and
> committed.

Thanks for the commit!

Nitpick comment:
+ * Also  for VS2015, add a define that stops compiler complaints about
Two spaces instead of one between "Also" and "for" :)
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Failed assertion in BecomeLockGroupLeader

2016-04-29 Thread Tom Lane
Amit Kapila  writes:
> On Fri, Apr 29, 2016 at 12:01 PM, Andreas Seltenreich 
> wrote:
>> tonight's sqlsmith run yielded another core dump:
>> 
>> TRAP: FailedAssertion("!(MyProc->lockGroupLeader == ((void *)0))", File:
>> "proc.c", Line: 1787)
>> 
>> I couldn't identifiy a query for it though: debug_query_string is empty.
>> Additionally, the offending query was not reported in the error context
>> as it typically is for non-parallel executor crashes.

> From the callstack below, it is clear that the reason for the core dump is
> that a Gather node is pushed below another Gather node, which makes a worker
> execute the Gather node.  Currently there is no support in workers to launch
> further workers, and ideally such a plan should not be generated.

It might not be intentional.  The bug we identified from Andreas' prior
report could be causing this: once a GatherPath's subpath has been freed,
that palloc chunk could be recycled into another GatherPath, or something
with a GatherPath in its substructure, leading to a plan of that shape.

> It would
> be helpful if you could find the offending query or plan corresponding to it.

I presume the lack of debug_query_string data is because nothing is
bothering to set debug_query_string in a worker process.  Should that be
remedied?  At the very least set it to "worker process", but it might be
worth copying over the full query from the parent side.
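
A minimal sketch of copying it over (the toc key name and variables are
hypothetical, not existing code):

    /* leader, while setting up the parallel DSM segment */
    char       *query_data;
    Size        query_len;

    query_len = strlen(estate->es_sourceText) + 1;
    query_data = shm_toc_allocate(pcxt->toc, query_len);
    memcpy(query_data, estate->es_sourceText, query_len);
    shm_toc_insert(pcxt->toc, PARALLEL_KEY_QUERY_TEXT, query_data);

    /* worker, during startup */
    debug_query_string = shm_toc_lookup(toc, PARALLEL_KEY_QUERY_TEXT);
    pgstat_report_activity(STATE_RUNNING, debug_query_string);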

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add jsonb_compact(...) for whitespace-free jsonb to text

2016-04-29 Thread Shulgin, Oleksandr
On Fri, Apr 29, 2016 at 3:18 PM, Andrew Dunstan  wrote:

>
> On 04/28/2016 04:29 PM, Alvaro Herrera wrote:
>
>>
>>> Actually we did have someone come up with a patch to "normalize" how
>> JSON stuff was output, because our code seems to do it in three
>> different, inconsistent ways.  And our response was for them to get the
>> heck outta here, because we're so much in love with our current
>> practice that we don't need to refactor the three implementations into a
>> single one.
>>
>
> That's a pretty bad mischaracterization of the discussion. What was
> proposed was broken, as I pointed out to the OP, and then again later to
> you when you asked about it. What he wanted to achieve simply couldn't be
> done the way he was trying to achieve it.
>

Yeah, the original request was pretty invalid, but when I proposed to
resubmit just the normalization of whitespace, nobody showed enthusiasm
about the idea either:


http://www.postgresql.org/message-id/cacaco5qkoiz-00jf6w2uf0pst05qrekq9uzssybl0m9fgkd...@mail.gmail.com

Regarding the present proposal:
>
> I wouldn't necessarily be opposed to us having one or more of the
> following:
>
> a) suppressing whitespace addition in all json generation and text output,
> possibly governed by a GUC setting so we could maintain behaviour
> compatibility if required
>

I'm not thrilled about GUC that would silently break stuff.  That being
said, if someone's code is dependent on exact placement of whitespace in
the JSON text, it's pretty broken already and it's just a matter of time
when they hit an issue there.


> b) a function to remove whitespace from json values, but otherwise
> preserve things like key order
> c) a function to pretty-print json similar to the output from jsonb, but
> again preserving key order
> d) a function to reorder keys in json so they were sorted according to the
> relevant collation.
>
> None of these things except possibly the last should be terribly difficult
> to do.
>

My vote goes to remove all optional whitespace by default and have a single
function to prettify it.  Key reordering can then be handled with an
optional parameter to such a prettifying function.

It would probably make sense to model this function after Python's
"dump-to-JSON-string" function,
https://docs.python.org/2/library/json.html#json.dumps, with optional
parameters for sorting the keys, indentation size and punctuation.  This
way all the prettiness enhancements could be contained in a single function,
without the need for a generalized interface used in many places.

How about that?

--
Alex


Re: [HACKERS] 9.6 and fsync=off

2016-04-29 Thread Tom Lane
Abhijit Menon-Sen  writes:
> Do you want a patch along those lines now, or is it too late?

We're certainly not going to consider fooling with this in 9.6.
The situation for manual fsync-twiddling is no worse than it was in
any prior release, and we are long past feature freeze.

If you want to put it on your to-do queue for 9.7, feel free.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql: Support building with Visual Studio 2015

2016-04-29 Thread Andrew Dunstan



On 04/29/2016 09:29 AM, Christian Ullrich wrote:

* Andrew Dunstan wrote:


Support building with Visual Studio 2015
http://git.postgresql.org/pg/commitdiff/da52474f3d3cbdf38d8a6677a4ebedaf402ade3a 



diff --git a/src/port/win32env.c b/src/port/win32env.c
index 7e4ff62..d6b0ebe 100644 (file)
--- a/src/port/win32env.c
+++ b/src/port/win32env.c
@@ -70,6 +70,9 @@ pgwin32_putenv(const char *envval)
"msvcr120", 0, NULL
},  /* Visual Studio 2013 */
{
+   "urctbase", 0, NULL
+   },  /* Visual Studio 2015 and later */
+   {
NULL, 0, NULL
}
};

s/urctbase/ucrtbase/

Sorry, I missed that this morning.




Oh, darn, thanks. Will fix.

cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UNION ALL - Var attno

2016-04-29 Thread Tom Lane
sri harsha  writes:
> It's not an OpExpr. It is a Var; I got it from "reltargetlist" in the base
> relation (RelOptInfo).

Read the comment:

 *    reltargetlist - List of Var and PlaceHolderVar nodes for the values
 *                    we need to output from this relation.
 *                    List is in no particular order, but all rels of an
 *                    appendrel set must use corresponding orders.
 *                    NOTE: in an appendrel child relation, may contain
 *                    arbitrary expressions pulled up from a subquery!
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In general, it's seldom a good idea to assume that a Node is of a specific
type without having confirmed that with an IsA() check.
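
For example (illustrative only):

    Node       *node = (Node *) lfirst(lc);     /* reltargetlist entry */

    if (IsA(node, Var))
    {
        Var        *var = (Var *) node;

        /* only now is it safe to look at var->varno / var->varattno */
    }
    else if (IsA(node, PlaceHolderVar))
    {
        /* handle (or skip) pulled-up expressions */
    }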

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql: Support building with Visual Studio 2015

2016-04-29 Thread Christian Ullrich

* Andrew Dunstan wrote:


Support building with Visual Studio 2015
http://git.postgresql.org/pg/commitdiff/da52474f3d3cbdf38d8a6677a4ebedaf402ade3a


diff --git a/src/port/win32env.c b/src/port/win32env.c
index 7e4ff62..d6b0ebe 100644 (file)
--- a/src/port/win32env.c
+++ b/src/port/win32env.c
@@ -70,6 +70,9 @@ pgwin32_putenv(const char *envval)
"msvcr120", 0, NULL
},  /* Visual Studio 2013 */
{
+   "urctbase", 0, NULL
+   },  /* Visual Studio 2015 and later */
+   {
NULL, 0, NULL
}
};

s/urctbase/ucrtbase/

Sorry, I missed that this morning.

--
Christian




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add jsonb_compact(...) for whitespace-free jsonb to text

2016-04-29 Thread Andrew Dunstan



On 04/28/2016 04:29 PM, Alvaro Herrera wrote:

David G. Johnston wrote:

On Thu, Apr 28, 2016 at 10:00 AM, Ryan Pedela 
wrote:

In addition, every JSON implementation I have ever seen fully minimizes
JSON by default. PG appears to deviate from standard practice for no
apparent reason.

Sorry to nit-pick but that's called convention - the standard is likely
silent on this point.  And its not like this was done in a vacuum - why is
this only coming up now and not, say, during the beta?  Is it surprising
that this seemingly easy-to-overlook dynamic was not explicitly addressed
by the author and reviewer of the patch, especially when implementation of
said feature consisted of a lot more things of greater import and impact?

Actually we did have someone come up with a patch to "normalize" how
JSON stuff was output, because our code seems to do it in three
different, inconsistent ways.  And our response was for them to get the
heck outta here, because we're so much in love with our current
practice that we don't need to refactor the three implementations into a
single one.



That's a pretty bad mischaracterization of the discussion. What was 
proposed was broken, as I pointed out to the OP, and then again later to 
you when you asked about it. What he wanted to achieve simply couldn't 
be done the way he was trying to achieve it.


Regarding the present proposal:

I wouldn't necessarily be opposed to us having one or more of the following:

a) suppressing whitespace addition in all json generation and text 
output, possibly governed by a GUC setting so we could maintain 
behaviour compatibility if required
b) a function to remove whitespace from json values, but otherwise 
preserve things like key order
c) a function to pretty-print json similar to the output from jsonb, but 
again preserving key order
d) a function to reorder keys in json so they were sorted according to 
the relevant collation.



None of these things except possibly the last should be terribly 
difficult to do.


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Failed assertion in BecomeLockGroupLeader

2016-04-29 Thread Simon Riggs
On 29 April 2016 at 08:31, Andreas Seltenreich  wrote:

> Hi,
>
> tonight's sqlsmith run yielded another core dump:
>
> TRAP: FailedAssertion("!(MyProc->lockGroupLeader == ((void *)0))", File:
> "proc.c", Line: 1787)
>
> I couldn't identifiy a query for it though: debug_query_string is empty.
> Additionally, the offending query was not reported in the error context
> as it typically is for non-parallel executor crashes.
>

It's good that the input is fuzzed, but there needs to be a way to re-run
identical fuzzing or a way to backtrack to find what broke. Not much point
finding bugs we can't identify later.

-- 
Simon Riggshttp://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-04-29 Thread Andrew Dunstan



On 04/29/2016 12:39 AM, Tom Lane wrote:

Andrew Dunstan  writes:

latest patch attached. I have also cleaned up the docs some, and removed
references to the now redundant msysGit.

Please don't do stuff like this:

@@ -232,6 +265,10 @@ win32_langinfo(const char *ctype)
if (r != NULL)
sprintf(r, "CP%s", codepage);
}
+
+#if (_MSC_VER >= 1900)
+   }
+#endif
  #endif
  
  	return r;


I'm not very sure what pgindent will do with conditionally-included
indentation, but it's unlikely to be pleasing.

In this particular case, you could probably fix it by making the
other end of that be

+   if (GetLocaleInfoEx(wctype,
+   LOCALE_IDEFAULTANSICODEPAGE | LOCALE_RETURN_NUMBER,
+   (LPWSTR) , sizeof(cp) / sizeof(WCHAR)) > 0)
+   {
+   r = malloc(16); /* excess */
+   if (r != NULL)
+   sprintf(r, "CP%u", cp);
+   }
+   else
+#endif
+   {
+

and omitting the #if/#endif around the trailing }.






Yeah, I noticed the ugliness, should have fixed it. Applied your fix and 
committed.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Failed assertion in BecomeLockGroupLeader

2016-04-29 Thread Alvaro Herrera
Amit Kapila wrote:
> On Fri, Apr 29, 2016 at 12:01 PM, Andreas Seltenreich 
> wrote:

> > I couldn't identifiy a query for it though: debug_query_string is empty.
> > Additionally, the offending query was not reported in the error context
> > as it typically is for non-parallel executor crashes.
> 
> From the callstack below, it is clear that the reason for the core dump is
> that a Gather node is pushed below another Gather node, which makes a worker
> execute the Gather node.  Currently there is no support in workers to launch
> further workers, and ideally such a plan should not be generated.  It would
> be helpful if you could find the offending query or plan corresponding to it.

So I suppose the PID of the process starting the workers should be in
the stack somewhere.  With that, one should be able to attach to that
process and get another stack trace.  I'm curious whether you would
need to have started the server with "postgres -T" in order to be able
to get a coordinated core dump from both processes.  The
debug_query_string would be found in the leader, I suppose.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Typo

2016-04-29 Thread Thomas Munro
Hi,

Here is a patch to fix a typo in dsm_impl.h.

-- 
Thomas Munro
http://www.enterprisedb.com


typo.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-04-29 Thread Michael Paquier
On Fri, Apr 29, 2016 at 7:25 PM, Stefan Huehner  wrote:
> If you need any more info or testing done just let me know.

The information you are providing is sufficient to reproduce the
problem, thanks! I have added this bug to the list of open items for
9.6.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Detrimental performance impact of ringbuffers on performance

2016-04-29 Thread Bruce Momjian
On Wed, Apr  6, 2016 at 12:57:16PM +0200, Andres Freund wrote:
> Hi,
> 
> While benchmarking on hydra
> (c.f. 
> http://archives.postgresql.org/message-id/20160406104352.5bn3ehkcsceja65c%40alap3.anarazel.de),
> which has quite slow IO, I was once more annoyed by how incredibly long
> the vacuum at the end of a pgbench -i takes.
> 
> The issue is that, even for an entirely shared_buffers resident scale,
> essentially no data is cached in shared buffers. The COPY to load data
> uses a 16MB ringbuffer. Then vacuum uses a 256KB ringbuffer. Which means
> that copy immediately writes and evicts all data. Then vacuum reads &
> writes the data in small chunks; again evicting nearly all buffers. Then
> the creation of the primary key has to read that data *again*.
> 
> That's fairly idiotic.
> 
> While it's not easy to fix this in the general case, we introduced those
> ringbuffers for a reason after all, I think we at least should add a
> special case for loads where shared_buffers isn't fully used yet.  Why
> not skip using buffers from the ringbuffer if there's buffers on the
> freelist? If we add buffers gathered from there to the ringlist, we
> should have few cases that regress.
> 
> Additionally, maybe we ought to increase the ringbuffer sizes again one
> of these days? 256kb for VACUUM is pretty damn low.

Is this a TODO?
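
(For concreteness, a minimal sketch of the special case being proposed;
the freelist test and the freelist helper are hypothetical names:)

    /* in StrategyGetBuffer() (freelist.c), before consulting the ring */
    if (strategy != NULL && !FreelistIsEmpty())
    {
        /*
         * shared_buffers isn't fully used yet: take a buffer from the
         * freelist instead of evicting one from the ring, and remember
         * it in the ring so later accesses can reuse it.
         */
        buf = GetBufferFromFreelist(buf_state);
        AddBufferToRing(strategy, buf);
        return buf;
    }
    buf = GetBufferFromRing(strategy, buf_state);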

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-04-29 Thread Stefan Huehner
Hello,
@Tomas: I put you in CC as it looks related to the work on FK -> join estimates.

I did a tiny bit of testing of our software against the nightly postgresql-9.6
debs from apt.postgresql.org.

Specifically against:
ii  postgresql-9.6      9.6~~devel~20160428.1605-1~664.git23b09e1.pgdg+1  amd64  object-relational SQL database, version 9.6 server
ii  postgresql-9.6-dbg  9.6~~devel~20160428.1605-1~664.git23b09e1.pgdg+1  amd64  debug symbols for postgresql-9.6

so autobuilt from last night.

I can get postgres to segfault consistently using the following query (trimmed
down to the shortest example still triggering the crash):

SELECT 1
FROM ad_model_object mo
LEFT JOIN ad_menu m ON mo.ad_process_id = m.ad_process_id
AND mo.action IN ('P', 'R');

The trigger is an FK definition from ad_menu.ad_process_id to
ad_process.ad_process_id.

Dropping that fk makes the crash go away.

See attached files for trimmed down table definition to directly reproduce.

Backtrace ends in:
#0  get_leftop (clause=clause@entry=0x5652932e2d98)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/util/clauses.c:212
#1  0x565291ec6ba0 in quals_match_foreign_key (root=0x7fca9b3bcba0, 
fkrel=0x5652932ab980, foreignrel=0x5652932e77b8, 
joinquals=0x7fca9b3bcba0, fkinfo=0x5652932e6ce8)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:3961

so it is probably related to the 'Use Foreign keys to improve joins estimates'
project from Tomas.

If you need any more info or testing done just let me know.

Regards,
Stefan


pg9.6-2016-04-29-crash-reproducer.sql
Description: application/sql
Program received signal SIGSEGV, Segmentation fault.
get_leftop (clause=clause@entry=0x5652932e2d98)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/util/clauses.c:212
212 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/util/clauses.c:
 No such file or directory.
(gdb) bt
#0  get_leftop (clause=clause@entry=0x5652932e2d98)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/util/clauses.c:212
#1  0x565291ec6ba0 in quals_match_foreign_key (root=0x7fca9b3bcba0, 
fkrel=0x5652932ab980, foreignrel=0x5652932e77b8, 
joinquals=0x7fca9b3bcba0, fkinfo=0x5652932e6ce8)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:3961
#2  find_best_foreign_key_quals (fkrel=fkrel@entry=0x5652932ab980, 
foreignrel=foreignrel@entry=0x5652932e77b8, 
joinquals=joinquals@entry=0x7fca9b3bcba0, 
joinqualsbitmap=joinqualsbitmap@entry=0x7ffdd3f0d848, root=0x5652932b0e78)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:4110
#3  0x565291ec6e6d in clauselist_join_selectivity 
(root=root@entry=0x5652932b0e78, joinquals=joinquals@entry=0x7fca9b3bcba0, 
jointype=jointype@entry=JOIN_LEFT, sjinfo=sjinfo@entry=0x7fca9b3b0108)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:4170
#4  0x565291ec743e in calc_joinrel_size_estimate (root=0x5652932b0e78, 
outer_rows=1, inner_rows=127, sjinfo=0x7fca9b3b0108, 
restrictlist=<optimized out>)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:4292
#5  0x565291ecb621 in set_joinrel_size_estimates 
(root=root@entry=0x5652932b0e78, rel=rel@entry=0x7fca9b3b3a20, 
outer_rel=outer_rel@entry=0x5652932ab980, 
inner_rel=inner_rel@entry=0x5652932e77b8, sjinfo=sjinfo@entry=0x7fca9b3b0108, 
restrictlist=restrictlist@entry=0x7fca9b3bca80)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:3840
#6  0x565291f00d8a in build_join_rel (root=root@entry=0x5652932b0e78, 
joinrelids=joinrelids@entry=0x7fca9b3bc8d0, 
outer_rel=outer_rel@entry=0x5652932ab980, 
inner_rel=inner_rel@entry=0x5652932e77b8, sjinfo=0x7fca9b3b0108, 
restrictlist_ptr=restrictlist_ptr@entry=0x7ffdd3f0d998)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/util/relnode.c:502
#7  0x565291ed508d in make_join_rel (root=root@entry=0x5652932b0e78, 
rel1=rel1@entry=0x5652932ab980, rel2=rel2@entry=0x5652932e77b8)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/joinrels.c:714
#8  0x565291ed5abb in make_rels_by_clause_joins (other_rels=<optimized out>, old_rel=<optimized out>, root=<optimized out>)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/joinrels.c:274
#9  join_search_one_level (root=root@entry=0x5652932b0e78, 

Re: [HACKERS] Timeline following for logical slots

2016-04-29 Thread Craig Ringer
On 28 April 2016 at 02:29, Andres Freund  wrote:


>
> I don't object either, I was looking for the feature myself a number of
> times (for tap tests in my case).
>

Exactly what I want it for.


> It requires some amount of thinking about what the limit applies to
> though. "messages sent by server", Bytes? TCP messages? xids? Time?
>
>
The main thing I think would be useful is a threshold for message LSNs
after which it disconnects and exits. It doesn't have to be a commit
message; if we receive any message with an upstream LSN past the LSN of
interest, then no commit with an earlier LSN can still be pending, and
that way it'll be useful if/when streaming decoding is implemented too.

That way a test can capture the xlog insert lsn after doing the work it
wants to replay, do another dummy write to make sure there's something more
to replay, and decode up to that point.
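
Client-side, the check would be roughly this -- a sketch only, with
invented names ("endpos" being the new option's value, "cur_record_lsn"
the upstream LSN of the message just received in the stream loop):

    /*
     * Seeing anything at or past endpos means every commit up to endpos
     * has already been streamed, so we can confirm and stop cleanly.
     */
    if (endpos != InvalidXLogRecPtr && cur_record_lsn >= endpos)
    {
        time_to_abort = true;   /* flush output, send feedback, exit */
        break;
    }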

I don't think pg_recvlogical can do anything about the need for that dummy
write, since the client has no way to determine the exact LSN of the commit
record of the xact of interest. It can't rely
on pg_current_xlog_insert_location() or pg_current_xlog_location() since
autovacuum or a checkpoint might've written xlog since. Logical streaming
replication doesn't have a non-blocking mode where it returns immediately
if it'd have to wait for more xlog so we can't just send off the most
recent server LSN as the endpoint.

Ideally I think this should be done server-side with a limit on replay LSN
and a non-blocking option, but that's way too invasive for this stage in
the release cycle. Client-side seems fine enough.

Number of xacts received would also be handy, but I don't know if I'll get
to that.

pg_receivexlog should send confirmation on exit.

(The other thing I've wanted sometimes is a --peek option, but that's again
not really in scope for this.)


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] [sqlsmith] Failed assertion in BecomeLockGroupLeader

2016-04-29 Thread Amit Kapila
On Fri, Apr 29, 2016 at 12:01 PM, Andreas Seltenreich 
wrote:
>
> Hi,
>
> tonight's sqlsmith run yielded another core dump:
>
> TRAP: FailedAssertion("!(MyProc->lockGroupLeader == ((void *)0))", File:
> "proc.c", Line: 1787)
>
> I couldn't identify a query for it though: debug_query_string is empty.
> Additionally, the offending query was not reported in the error context
> as it typically is for non-parallel executor crashes.
>

From the call stack below, it is clear that the reason for the core dump is
that a Gather node is pushed below another Gather node, which makes a worker
execute the Gather node.  Currently there is no support for workers to launch
further workers, and ideally such a plan should not be generated.  It would be
helpful if you could find the offending query or the plan corresponding to it.
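
Until the planner stops producing such plans, a purely defensive check
along these lines (just a sketch) would at least turn the crash into a
clean error:

    /*
     * Sketch of a defensive check, e.g. at the top of
     * LaunchParallelWorkers(): a parallel worker must never try to
     * become a lock group leader itself.
     */
    if (IsParallelWorker())
        ereport(ERROR,
                (errcode(ERRCODE_INTERNAL_ERROR),
                 errmsg("cannot launch parallel workers from within a parallel worker")));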


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] [BUGS] Breakage with VACUUM ANALYSE + partitions

2016-04-29 Thread Fabien COELHO


An interesting additional observation about the previous failing test:

Although I disabled the "flushing" feature...

  sh> grep flush_after xxx/postgresql.conf
  bgwriter_flush_after = 0        # 0 disables,
  backend_flush_after = 0         # 0 disables,
  wal_writer_flush_after = 0      # 0 disables
  checkpoint_flush_after = 0      # 0 disables,

the test still fails with "head":

  ...
  ERROR:  could not fsync file "base/16384/16397.1": No such file or directory
  ERROR:  checkpoint request failed
  HINT:  Consult recent messages in the server log for details.
  STATEMENT:  CHECKPOINT;

So this seems to suggest that there is an underlying issue independent of
the flushing feature.


--
Fabien.




Re: [HACKERS] [sqlsmith] Crash in apply_projection_to_path

2016-04-29 Thread Amit Kapila
On Fri, Apr 29, 2016 at 8:07 AM, Robert Haas  wrote:
>
> On Thu, Apr 28, 2016 at 10:06 PM, Tom Lane  wrote:
> > Andreas Seltenreich  writes:
> >> the following query against the regression database crashes master as of
> >> 23b09e15.
> >
> >> select 1 from depth0 inner join depth1 on (depth0.c = depth1.c)
> >>  where depth0.c @@ depth1.c limit 1;
> >
> > What's going on here is that add_partial_path is recycling a now-dominated
> > partial path without regard for the fact that there's already a GatherPath
> > pointing at that old partial path.  Later use of the GatherPath tries to
> > make use of its now-dangling subpath pointer.
> >
> > I'd be inclined to think that it's silly to build GatherPaths in advance
> > of having finalized the list of partial paths for a rel.
>
> Uh ... yeah, that shouldn't be happening.  I obviously screwed something up.
>

What's happening here is that to form the joinrel, we need to call
add_paths_to_joinrel() twice, once with rel1 as the outer relation and rel2 as
the inner relation, and then vice versa.  So the second call to
add_paths_to_joinrel() can replace a partial path which is being referenced by
a GatherPath generated in the first call.  I think we should generate gather
paths for the join rel after both calls to add_paths_to_joinrel(), i.e. in
make_join_rel().  The attached patch along those lines fixes the problem for
me.
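
Roughly, the patch does the following at the bottom of make_join_rel() -- a
simplified sketch (the real jointype handling is a switch), not the literal
patch:

    /*
     * Build Gather paths only after both add_paths_to_joinrel() calls,
     * so that no GatherPath can reference a partial path that the
     * second call later evicts from joinrel->partial_pathlist.
     */
    add_paths_to_joinrel(root, joinrel, rel1, rel2,
                         sjinfo->jointype, sjinfo, restrictlist);
    add_paths_to_joinrel(root, joinrel, rel2, rel1,
                         sjinfo->jointype, sjinfo, restrictlist);

    generate_gather_paths(root, joinrel);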

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


fix_parallel_join_path_v1.patch
Description: Binary data



Re: [HACKERS] UNION ALL - Var attno

2016-04-29 Thread Ashutosh Bapat
On Fri, Apr 29, 2016 at 11:12 AM, sri harsha 
wrote:

>
> It's not an OpExpr.  It is a VAR; I got it from "reltargetlist" in the base
> relation (RelOptInfo).  Can you shed some light on where the conversion
> from 141 to the "original" attribute number takes place?
>

If you try to print the node as *(Node *) node in a debugger, it will tell
you the type of node. What does that print?


> On Fri, Apr 29, 2016 at 10:03 AM, Tom Lane  wrote:
>
>> sri harsha  writes:
>> >Assume the following query ,
>> > (SELECT a * 1 , b FROM TABLE_1) UNION ALL (SELECT a *1 , b FROM
>> TABLE_2);
>>
>> > In this query, the attribute numbers of the VARs are 141 and 2 respectively!!
>>
>> I doubt it.
>>
>> Maybe you're looking at something that's not a Var, possibly an OpExpr,
>> but assuming it's a Var?  The fact that 141 is the pg_proc OID of int4mul
>> lends considerable weight to this suspicion ...
>>
>> regards, tom lane
>>
>
>


-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] 9.6 and fsync=off

2016-04-29 Thread Abhijit Menon-Sen
At 2016-04-28 13:44:23 -0700, and...@anarazel.de wrote:
>
> Abhijit had a patch implementing automatically running fsync whenever
> reenabled IIRC. Abhijit?

The patch I had written is attached, and it's not quite the same thing.
Here's how I originally described it in response to a question from
Robert:

«In 20150115133245.gg5...@awork2.anarazel.de, Andres explained his
rationale as follows:

«What I am thinking of is that, currently, if you start the
server for initial loading with fsync=off, and then restart it,
you're open to data loss. So when the current config file
setting is changed from off to on, we should fsync the data
directory. Even if there was no crash restart.»

That's what I tried to implement.»

I remember there was some subsequent discussion about it being better to
issue fsync during a checkpoint when we see that its value has changed,
but if I did any work on it (which I have a vague memory of), I can't
find it now. Sorry.
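
As I recall, the checkpoint-time variant would look roughly like this (an
untested sketch, reusing fsync_pgdata() from the attached patch):

    /*
     * Untested sketch: somewhere in CreateCheckPoint(), remember the
     * last-seen value of enableFsync and sync the data directory when
     * it flips from off back to on.
     */
    static bool last_enable_fsync = true;

    if (enableFsync && !last_enable_fsync)
        fsync_pgdata(data_directory);
    last_enable_fsync = enableFsync;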

Do you want a patch along those lines now, or is it too late?

-- Abhijit
From 1768680b672bcb037446230323cabcf9960f7f9a Mon Sep 17 00:00:00 2001
From: Abhijit Menon-Sen 
Date: Fri, 1 May 2015 17:59:51 +0530
Subject: Recursively fsync PGDATA on the next restart after fsync was disabled

Even if we didn't crash, we want to fsync PGDATA on startup if we know
the server ran with fsync=off at some point since the previous restart.
Otherwise, starting the server with fsync=off for initial data loading
and then restarting it opens you to data loss on a power failure after
the restart.
---
 src/backend/access/transam/xlog.c   | 9 +++--
 src/bin/pg_controldata/pg_controldata.c | 2 ++
 src/include/catalog/pg_control.h| 8 +++-
 3 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 084174d..af12992 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -4823,6 +4823,7 @@ BootStrapXLOG(void)
 	ControlFile->checkPoint = checkPoint.redo;
 	ControlFile->checkPointCopy = checkPoint;
 	ControlFile->unloggedLSN = 1;
+	ControlFile->fsync_disabled = false;
 
 	/* Set important parameter values for use when replaying WAL */
 	ControlFile->MaxConnections = MaxConnections;
@@ -5893,10 +5894,12 @@ StartupXLOG(void)
 	 */
 
 	if (enableFsync &&
-		(ControlFile->state != DB_SHUTDOWNED &&
-		 ControlFile->state != DB_SHUTDOWNED_IN_RECOVERY))
+		(ControlFile->fsync_disabled ||
+		 (ControlFile->state != DB_SHUTDOWNED &&
+		  ControlFile->state != DB_SHUTDOWNED_IN_RECOVERY)))
 	{
 		fsync_pgdata(data_directory);
+		ControlFile->fsync_disabled = false;
 	}
 
 	if (ControlFile->state == DB_SHUTDOWNED)
@@ -8272,6 +8275,8 @@ CreateCheckPoint(int flags)
 	/* crash recovery should always recover to the end of WAL */
 	ControlFile->minRecoveryPoint = InvalidXLogRecPtr;
 	ControlFile->minRecoveryPointTLI = 0;
+	if (!enableFsync)
+		ControlFile->fsync_disabled = true;
 
 	/*
 	 * Persist unloggedLSN value. It's reset on crash recovery, so this goes
diff --git a/src/bin/pg_controldata/pg_controldata.c b/src/bin/pg_controldata/pg_controldata.c
index d8cfe5e..e99014f 100644
--- a/src/bin/pg_controldata/pg_controldata.c
+++ b/src/bin/pg_controldata/pg_controldata.c
@@ -290,6 +290,8 @@ main(int argc, char *argv[])
 		   (uint32) ControlFile.backupEndPoint);
 	printf(_("End-of-backup record required:%s\n"),
 		   ControlFile.backupEndRequired ? _("yes") : _("no"));
+	printf(_("Fsync disabled at runtime:%s\n"),
+		   ControlFile.fsync_disabled ? _("yes") : _("no"));
 	printf(_("Current wal_level setting:%s\n"),
 		   wal_level_str(ControlFile.wal_level));
 	printf(_("Current wal_log_hints setting:%s\n"),
diff --git a/src/include/catalog/pg_control.h b/src/include/catalog/pg_control.h
index 2e4c381..a71d73e 100644
--- a/src/include/catalog/pg_control.h
+++ b/src/include/catalog/pg_control.h
@@ -21,7 +21,7 @@
 
 
 /* Version identifier for this pg_control format */
-#define PG_CONTROL_VERSION	942
+#define PG_CONTROL_VERSION	943
 
 /*
  * Body of CheckPoint XLOG records.  This is declared here because we keep
@@ -182,6 +182,12 @@ typedef struct ControlFileData
 	bool		track_commit_timestamp;
 
 	/*
+	 * Indicates whether fsync was ever disabled since the last restart.
+	 * Tested and set at checkpoints, reset at startup.
+	 */
+	bool		fsync_disabled;
+
+	/*
 	 * This data is used to check for hardware-architecture compatibility of
 	 * the database and the backend executable.  We need not check endianness
 	 * explicitly, since the pg_control version will surely look wrong to a
-- 
1.9.1




Re: [HACKERS] UNION ALL - Var attno

2016-04-29 Thread Amit Langote
On Fri, Apr 29, 2016 at 2:42 PM, sri harsha  wrote:
>
> It's not an OpExpr.  It is a VAR; I got it from "reltargetlist" in the base
> relation (RelOptInfo).  Can you shed some light on where the conversion
> from 141 to the "original" attribute number takes place?

As Tom said, you must be looking at an OpExpr's opfuncid value,
because that's what I see as being 141.
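
If in doubt, something like this settles it (assuming the usual PostgreSQL
headers, with "node" being whatever you pulled out of reltargetlist):

    /*
     * IsA() inspects the NodeTag, which is also what the debugger's
     * "*(Node *) node" cast shows you.
     */
    if (IsA(node, Var))
        elog(INFO, "Var with varattno = %d", ((Var *) node)->varattno);
    else if (IsA(node, OpExpr))
        elog(INFO, "OpExpr calling pg_proc OID %u",
             ((OpExpr *) node)->opfuncid);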

Thanks,
Amit




[HACKERS] [sqlsmith] Failed assertion in BecomeLockGroupLeader

2016-04-29 Thread Andreas Seltenreich
Hi,

tonight's sqlsmith run yielded another core dump:

TRAP: FailedAssertion("!(MyProc->lockGroupLeader == ((void *)0))", File: 
"proc.c", Line: 1787)

I couldn't identify a query for it though: debug_query_string is empty.
Additionally, the offending query was not reported in the error context
as it typically is for non-parallel executor crashes.

regards,
Andreas

GNU gdb (Debian 7.7.1+dfsg-5) 7.7.1
Core was generated by `postgres: bgworker: parallel worker for PID 4706 
'.
Program terminated with signal SIGABRT, Aborted.
#0  0x7ff1bda16067 in __GI_raise (sig=sig@entry=6)
at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56  ../nptl/sysdeps/unix/sysv/linux/raise.c: Datei oder Verzeichnis nicht 
gefunden.
#0  0x7ff1bda16067 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7ff1bda17448 in __GI_abort () at abort.c:89
#2  0x007eaa11 in ExceptionalCondition 
(conditionName=conditionName@entry=0x988318 "!(MyProc->lockGroupLeader == 
((void *)0))", errorType=errorType@entry=0x82a45d "FailedAssertion", 
fileName=fileName@entry=0x8760e5 "proc.c", lineNumber=lineNumber@entry=1787) at 
assert.c:54
#3  0x006e3e7b in BecomeLockGroupLeader () at proc.c:1787
#4  0x004e6a59 in LaunchParallelWorkers (pcxt=pcxt@entry=0x1db05c8) at 
parallel.c:437
#5  0x005ef2d7 in ExecGather (node=node@entry=0x1d9d0b8) at 
nodeGather.c:168
#6  0x005dd788 in ExecProcNode (node=node@entry=0x1d9d0b8) at 
execProcnode.c:515
#7  0x005d999f in ExecutePlan (dest=0x1d7d310, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, 
use_parallel_mode=<optimized out>, planstate=0x1d9d0b8, estate=0x1d9c858) at 
execMain.c:1567
#8  standard_ExecutorRun (queryDesc=0x1db0080, direction=<optimized out>, 
count=0) at execMain.c:338
#9  0x005dcb3f in ParallelQueryMain (seg=<optimized out>, 
toc=0x7ff1be507000) at execParallel.c:716
#10 0x004e608b in ParallelWorkerMain (main_arg=<optimized out>) at 
parallel.c:1033
#11 0x00683a42 in StartBackgroundWorker () at bgworker.c:726
#12 0x0068eb82 in do_start_bgworker (rw=0x1d24ec0) at postmaster.c:5531
#13 maybe_start_bgworker () at postmaster.c:5706
#14 0x0046c993 in ServerLoop () at postmaster.c:1762
#15 0x006909fe in PostmasterMain (argc=argc@entry=3, 
argv=argv@entry=0x1cfa560) at postmaster.c:1298
#16 0x0046d5ed in main (argc=3, argv=0x1cfa560) at main.c:228
(gdb) bt full
#3  0x006e3e7b in BecomeLockGroupLeader () at proc.c:1787
leader_lwlock = <optimized out>
#4  0x004e6a59 in LaunchParallelWorkers (pcxt=pcxt@entry=0x1db05c8) at 
parallel.c:437
oldcontext = 0x1d9ced0
worker = {
  bgw_name = 
"\220\a\333\001\000\000\000\000\370\340L\276\361\177\000\000\370\373\025\264\361\177\000\000P\a\333\001\000\000\000\000X\310\331\001\000\000\000\000\004\000\000\000\000\000\000\000\310\005\333\001\000\000\000\000\233\222J\000\000\000\000",
 
  bgw_flags = 1, 
  bgw_start_time = BgWorkerStart_PostmasterStart, 
  bgw_restart_time = 1, 
  bgw_main = 0x0, 
  bgw_library_name = 
"\000\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000u\222J\000\000\000\000\000\350\336\331\001\000\000\000\000\360\260a\000\000\000\000\000\200\315]\000\000\000\000\000\300\330\367\217\375\177\000\000h\205\333\001\000\000\000",
 
  bgw_function_name = 
"`\346\327\001\000\000\000\000\004\000\000\000\000\000\000\000\360\260a\000\000\000\000\000\200\315]\000\000\000\000\000\300\330\367\217\375\177\000\000h\317\331\001\000\000\000\000\210\325\327\001\000\000\000\000\004\000\000\000\000\000\000",
 
  bgw_main_arg = 6402288, 
  bgw_extra = 
"X\310\331\001\000\000\000\000\000\000\000\000\000\000\000\000\060\233\333\001\000\000\000\000\001\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000l\321]\000\000\000\000\000\000\000\000\000\000\000\000\000H-\334\001\000\000\000\000h\317\331\001\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\274\341M\276\361\177\000\000\310\005\333\001\000\000\000\000\004\000\000\000\000\000\000\000\310\005\333\001\000\000\000\000\000\000\000\000\000\000\000",
 
  bgw_notify_pid = 4
}
i = <optimized out>
any_registrations_failed = 0 '\000'
#5  0x005ef2d7 in ExecGather (node=node@entry=0x1d9d0b8) at 
nodeGather.c:168
pcxt = 0x1db05c8
estate = <optimized out>
gather = 0x1d7d440
fslot = 0x1d9ced0
i = <optimized out>
resultSlot = <optimized out>
isDone = ExprSingleResult
econtext = <optimized out>
#6  0x005dd788 in ExecProcNode (node=node@entry=0x1d9d0b8) at 
execProcnode.c:515
result = <optimized out>
__func__ = "ExecProcNode"
#7  0x005d999f in ExecutePlan (dest=0x1d7d310, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, 
use_parallel_mode=<optimized out>, planstate=0x1d9d0b8, estate=0x1d9c858) at 
execMain.c:1567
slot = <optimized out>
current_tuple_count = 0
#8  standard_ExecutorRun (queryDesc=0x1db0080, direction=<optimized out>,