[HACKERS] Re: new tests post-feature freeze (was pgsql: Add TAP tests for pg_dump)

2016-05-22 Thread Noah Misch
On Sun, May 08, 2016 at 12:29:27PM -0400, Stephen Frost wrote:
> * Robert Haas (robertmh...@gmail.com) wrote:
> > My suggestion is that, from this point forward, we add new tests to
> > 9.6 only if they are closely related to a bug that is getting fixed or
> > a feature that is new in 9.6.  I think that's a reasonable compromise,
> > but what do others think?

+1.  This is a natural extension of the well-established default that we
(back-)patch tests for a bug into all releases getting a fix for the bug.

> I'm willing to accept that compromise, but I'm not thrilled with it due
> to what it will mean for the process I'm currently going through.  The
> approach I've been using has been to add tests to gain more code
> coverage of the code in pg_dump.  That has turned up multiple
> pre-existing bugs in pg_dump but the vast majority of the tests come
> back clean.  This compromise would mean that I'd continue to work
> through the code coverage tests, but would have to segregate out and
> commit only those tests which actually reveal bugs, once those bugs have
> been fixed (as to avoid turning the buildfarm red).  The rest of the
> tests would still get written, but since they currently don't reveal
> bugs, they would be shelved until development is opened for 9.7.

Some or even most of the other tests would qualify under "closely related to
... a feature that is new in 9.6".  Your 9.6 pg_dump changes affected object
selection and catalog extraction for most object types, so I think validating
those paths is in scope under Robert's suggestion.  Testing "pg_dump
--encoding" or "pg_dump --jobs" probably wouldn't fall in scope, because those
features operate at arm's length from the 9.6 pg_dump changes.  Expanding, for
example, tests of postgres_fdw query deparse would certainly fall out of
scope.  That would have no apparent chance of catching a regression caused by
the 9.6 pg_dump changes.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Failed assertion in parallel worker (ExecInitSubPlan)

2016-05-22 Thread Amit Kapila
On Sun, May 22, 2016 at 9:32 PM, Andreas Seltenreich wrote:
>
> Amit Kapila writes:
>
> > avoid_restricted_clause_below_gather_v1.patch
> > prohibit_parallel_clause_below_rel_v1.patch
>
> I didn't observe any parallel worker related coredumps since applying
> these.  The same amount of testing done before applying them yielded
> about a dozen.
>

Thanks for verification.

> Dilip Kumar writes:
>
> > So now it's clear that because of subquery pullup, we may get an
> > expression in the targetlist while creating the single-table path list.
> > So we need to avoid a parallel plan if it contains an expression.
>
> This sounds like a rather heavy restriction though…
>

I think what Dilip means by the above statement is to avoid a parallel plan
if the target list contains parallel-unsafe or parallel-restricted
expressions.  We already restrict generation of parallel plans if the
qualification for a relation contains such expressions (refer to
set_rel_consider_parallel()), so this doesn't sound like a heavy
restriction.



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] 9.4 failure on skink in _bt_newroot/XLogCheckBuffer

2016-05-22 Thread Tom Lane
Andres Freund  writes:
> On 2016-05-21 17:18:14 -0400, Tom Lane wrote:
>> What remains unclear is how come this only fails once in a blue moon.
>> Seems like any valgrind run of the regression tests should have caught it.

> Looks like a timing issue.

Yeah, I came to the same conclusion after awhile.

regards, tom lane




Re: [HACKERS] Adding an alternate syntax for Phrase Search

2016-05-22 Thread David G. Johnston
On Sun, May 22, 2016 at 6:53 PM, Teodor Sigaev  wrote:

>
> to_tsquery(' Berkus & "PostgreSQL Version 10.0" ')
>>
>> ... would be equivalent to:
>>
>> to_tsquery(' Berkus & ( PostgreSQL <-> version <-> 10.0 )')
>>
>
> select to_tsquery('Berkus') && phraseto_tsquery('PostgreSQL Version 10.0');
> does it as you wish


Sure, but I imagine (not having used it myself) that in cases involving
user input, said text is treated somewhat holistically, and it wouldn't be
all that easy, or desirable, to choose between the two forms at runtime.

David J.


Re: [HACKERS] Changed SRF in targetlist handling

2016-05-22 Thread David G. Johnston
tl;dr

Semantic changes to SRF-in-target-list processing are undesirable for a
construct that is all but deprecated.

I'd accept a refactoring that trades a performance gain for unaffected
queries against a reasonable performance hit for the afflicted ones.

Preamble...

Most recent thread that I can recall seeing on the topic - and where I
believe the rewrite idea was first presented.

http://www.postgresql.org/message-id/flat/25750.1458767...@sss.pgh.pa.us#25750.1458767...@sss.pgh.pa.us

On Sun, May 22, 2016 at 8:53 PM, Andres Freund  wrote:

> Hi,
>
> discussing executor performance with a number of people at pgcon,
> several hackers - me included - complained about the additional
> complexity, both code and runtime, required to handle SRFs in the target
> list.
>
> One idea I circulated was to fix that by interjecting a special executor
> node to process SRF containing targetlists (reusing Result possibly?).
> That'd allow to remove the isDone argument from ExecEval*/ExecProject*
> and get rid of ps_TupFromTlist which is fairly ugly.
>

Conceptually I'm all for minimizing the impact on queries of this form.
It seems the most likely to get written and committed, and the least
likely to cause unforeseen issues.


> Robert suggested - IIRC mentioning previous on-list discussion - to
> instead rewrite targetlist SRFs into lateral joins. My gut feeling is
> that that'd be a larger undertaking, with significant semantics changes.
>
[...]

> If we accept bigger semantical changes, I'm inclined to instead just get
> rid of targetlist SRFs in total; they're really weird and not needed
> anymore.
>

I cannot see these, in isolation, being a good option.  Nonetheless, I
don't think any semantic change should happen before 9.2 is no longer
supported.  I'd be inclined to take a similar approach as with
standard_conforming_strings (minus the execution GUC, just the warning one),
with whatever after-the-fact learning taken into account.

It's worth considering query rewrite and making the construct forbidden as
a joint goal.

For something like a canonical version of this, especially for
composite-returning SRF:

WITH func_call AS (
SELECT func(tbl.col)
FROM tbl
)
SELECT (func_call.func).*
FROM func_call;

If we can rewrite the CTE portion into a lateral join - with the exact same
semantics (specifically, returning the single-column composite) - and then
check the rewritten query, the select-list SRF would no longer be present
and no error would be thrown.
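A minimal sketch of the kind of rewrite being described, using the same
hypothetical names (`func`, `tbl`) as the CTE example above; exact
equivalence still depends on the rowcount caveats discussed elsewhere in
this thread:

```sql
-- Select-list SRF form (the construct that would eventually raise an error):
SELECT func(tbl.col) FROM tbl;

-- Candidate lateral rewrite: the alias f names the function's whole-row
-- output, preserving the single-composite-column result shape.
SELECT f FROM tbl, LATERAL func(tbl.col) AS f;
```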

For situations where a rewrite cannot be made to behave properly we leave
the construct alone and let the query raise an error.

In considering what I just wrote I'm not particularly enamored with
it... hence my overall conclusion.  I can't say I hate it, and after
re-reading the aforementioned thread I'm inclined to like it for cases
where, for instance, we are susceptible to an LCM evaluation.

David J.


Re: [HACKERS] [sqlsmith] PANIC: failed to add BRIN tuple

2016-05-22 Thread Alvaro Herrera
Andreas Seltenreich wrote:
> There was one instance of this PANIC when testing with the regression db
> of master at 50e5315.
> 
> ,
> | WARNING:  specified item offset is too large
> | PANIC:  failed to add BRIN tuple
> | server closed the connection unexpectedly
> `
> 
> It is reproducible with the query below on this instance only.  I've put
> the data directory (20MB) here:
> 
> http://ansel.ydns.eu/~andreas/brincrash.tar.xz

Thanks for all the details.  I'll be looking into this tomorrow.


-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Changed SRF in targetlist handling

2016-05-22 Thread Craig Ringer
On 23 May 2016 at 08:53, Andres Freund  wrote:

> Hi,
>
> discussing executor performance with a number of people at pgcon,
> several hackers - me included - complained about the additional
> complexity, both code and runtime, required to handle SRFs in the target
> list.
>
> One idea I circulated was to fix that by interjecting a special executor
> node to process SRF containing targetlists (reusing Result possibly?).
> That'd allow to remove the isDone argument from ExecEval*/ExecProject*
> and get rid of ps_TupFromTlist which is fairly ugly.
>
>
> Robert suggested - IIRC mentioning previous on-list discussion - to
> instead rewrite targetlist SRFs into lateral joins. My gut feeling is
> that that'd be a larger undertaking, with significant semantics changes.
>
> If we accept bigger semantical changes, I'm inclined to instead just get
> rid of targetlist SRFs in total; they're really weird and not needed
> anymore.
>
> One issue with removing targetlist SRFs is that they're currently
> considerably faster than SRFs in FROM:
> tpch[14693][1]=# COPY (SELECT * FROM generate_series(1, 1000)) TO '/dev/null';
> COPY 1000
> Time: 2217.167 ms
> tpch[14693][1]=# COPY (SELECT generate_series(1, 1000)) TO '/dev/null';
> COPY 1000
> Time: 1355.929 ms
> tpch[14693][1]=#
>
> I'm not too concerned about that, and we could probably fix it by
> removing forced materialization from the relevant code path.
>
> Comments?
>
>
SRFs-in-tlist are a lot faster for lockstep iteration etc. They're also
much simpler to write, though if the result rowcount differs unexpectedly
between the functions you get exciting and unexpected behaviour.

WITH ORDINALITY provides what I think is the last of the functionality
needed to replace SRFs-in-tlist, but at a syntactic complexity and
performance cost. The following example demonstrates that, though it
doesn't do anything that needs LATERAL etc. I'm aware the following aren't
semantically identical if the rowcounts differ.


craig=> EXPLAIN ANALYZE SELECT generate_series(1,100) x, generate_series(1,100) y;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Result  (cost=0.00..5.01 rows=1000 width=0) (actual time=0.024..92.845 rows=100 loops=1)
 Planning time: 0.039 ms
 Execution time: 123.123 ms
(3 rows)

Time: 123.719 ms


craig=> EXPLAIN ANALYZE SELECT x, y FROM generate_series(1,100) WITH ORDINALITY AS x(i, n) INNER JOIN generate_series(1,100) WITH ORDINALITY AS y(i, n) ON (x.n = y.n);
                                  QUERY PLAN
------------------------------------------------------------------------------
 Merge Join  (cost=0.01..97.50 rows=5000 width=64) (actual time=179.863..938.375 rows=100 loops=1)
   Merge Cond: (x.n = y.n)
   ->  Function Scan on generate_series x  (cost=0.00..10.00 rows=1000 width=40) (actual time=108.813..303.690 rows=100 loops=1)
   ->  Materialize  (cost=0.00..12.50 rows=1000 width=40) (actual time=71.043..372.880 rows=100 loops=1)
         ->  Function Scan on generate_series y  (cost=0.00..10.00 rows=1000 width=40) (actual time=71.039..266.209 rows=100 loops=1)
 Planning time: 0.184 ms
 Execution time: 970.744 ms
(7 rows)

Time: 971.706 ms


I get the impression the with-ordinality case could perform just as well if
the optimiser recognised a join on the ordinality column and iterated the
functions in lockstep to populate the result row directly. Though that
could perform _worse_ if the function is computationally costly and
benefits significantly from the CPU cache, where we're better off
materializing it or at least executing it in chunks/batches...
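Worth noting for the lockstep comparison: ROWS FROM, available since 9.4,
already evaluates several SRFs in lockstep in the FROM clause without the
ordinality self-join. A sketch (no timings given here, since the plan shape
differs from both examples above):

```sql
-- ROWS FROM zips the two functions positionally, padding the shorter one
-- with NULLs rather than producing a cross/LCM-style product.
EXPLAIN ANALYZE
SELECT x, y
FROM ROWS FROM (generate_series(1,100), generate_series(1,100)) AS t(x, y);
```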


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


[HACKERS] Changed SRF in targetlist handling

2016-05-22 Thread Andres Freund
Hi,

discussing executor performance with a number of people at pgcon,
several hackers - me included - complained about the additional
complexity, both code and runtime, required to handle SRFs in the target
list.

One idea I circulated was to fix that by interjecting a special executor
node to process SRF containing targetlists (reusing Result possibly?).
That'd allow to remove the isDone argument from ExecEval*/ExecProject*
and get rid of ps_TupFromTlist which is fairly ugly.


Robert suggested - IIRC mentioning previous on-list discussion - to
instead rewrite targetlist SRFs into lateral joins. My gut feeling is
that that'd be a larger undertaking, with significant semantics changes.

If we accept bigger semantical changes, I'm inclined to instead just get
rid of targetlist SRFs in total; they're really weird and not needed
anymore.

One issue with removing targetlist SRFs is that they're currently
considerably faster than SRFs in FROM:
tpch[14693][1]=# COPY (SELECT * FROM generate_series(1, 1000)) TO '/dev/null';
COPY 1000
Time: 2217.167 ms
tpch[14693][1]=# COPY (SELECT generate_series(1, 1000)) TO '/dev/null';
COPY 1000
Time: 1355.929 ms
tpch[14693][1]=#

I'm not too concerned about that, and we could probably fix it by
removing forced materialization from the relevant code path.

Comments?

Greetings,

Andres Freund




Re: [HACKERS] 9.4 failure on skink in _bt_newroot/XLogCheckBuffer

2016-05-22 Thread Andres Freund
Hi Tom,

On 2016-05-21 17:18:14 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > The valgrind animal just reported a large object related failure on 9.4:
> 
> The proximate cause seems to be that _bt_newroot isn't bothering to
> fill the buffer_std field here:
> 
>   /* Make a full-page image of the left child if needed */
>   rdata[2].data = NULL;
>   rdata[2].len = 0;
>   rdata[2].buffer = lbuf;
>   rdata[2].next = NULL;
> 
> which is indeed an actual bug, but the only consequence would be poor
> compression of the full-page image (if the value chanced to be zero),
> so it's not much of a problem.

Thanks for fixing that one!


> What remains unclear is how come this only fails once in a blue moon.
> Seems like any valgrind run of the regression tests should have caught it.

Looks like a timing issue. The relevant access to the uninitialized
buffer_std field only happens when

    if (*lsn <= RedoRecPtr)
    {

which presumably is not that likely to be hit.  Even under valgrind the
individual tests are likely to finish below a checkpoint timeout.

Greetings,

Andres Freund




Re: [HACKERS] Adding an alternate syntax for Phrase Search

2016-05-22 Thread Teodor Sigaev



to_tsquery(' Berkus & "PostgreSQL Version 10.0" ')

... would be equivalent to:

to_tsquery(' Berkus & ( PostgreSQL <-> version <-> 10.0 )')


select to_tsquery('Berkus') && phraseto_tsquery('PostgreSQL Version 10.0');
does it as you wish



I realize we're already in beta, but pgCon was actually the first time I
saw the new syntax.  I think if we don't do this now, we'll be doing it
for 10.0.

No objections for 10.0.
--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/




Re: [HACKERS] Parallel query

2016-05-22 Thread Michael Paquier
On Sun, May 22, 2016 at 10:36 AM, Tatsuo Ishii  wrote:
>> A brief introduction to MPI (Message Passing Interface) is at the
>> following URLs. It is a message protocol used for parallel computing,
>> just like what DSM does in parallel query. The DSM plays a
>> message-passing role (in fact, by passing the query plan/raw node tree
>> to another worker) in parallel query.  I think parallel query resembles
>> MPI, so I mentioned that we can refer to the MPI benchmarks and use the
>> ideas that are used to test parallel computing systems.  If parallel
>> query is to be a feature in the future, I think we must have another
>> benchmark for it, just as TPC-C serves that role elsewhere.  So, I
>> mention MPI.
>>
>> https://www.open-mpi.org/
>>
>> https://en.wikipedia.org/wiki/Message_Passing_Interface
>
> Thank you for the info.

Ishii-san is doing so... Please be sure to press "reply-all" when
answering to an email in the community mailing lists. It is hard to
follow this discussion.
-- 
Michael




[HACKERS] pg_bsd_indent - improvements around offsetof and sizeof

2016-05-22 Thread Piotr Stefaniak

Hello,

I think I've managed to improve pg_bsd_indent's handling of two types of 
cases.


The first are like in this example:
-   hashp = (HTAB *) DynaHashAlloc(sizeof(HTAB) + strlen(tabname) +1);
+   hashp = (HTAB *) DynaHashAlloc(sizeof(HTAB) + strlen(tabname) + 1);
Pristine pg_bsd_indent is inconsistent in masking parentheses as those 
that are part of a cast and those that "are part of sizeof": seeing a 
type name following an lparen it always masks that lparen as a part of a 
cast; seeing an rparen it only removes the bit if it doesn't overlap 
with sizeof_mask. In the example above, "(HTAB" started both "cast 
parens" and "sizeof parens" at the same time, and the immediately 
following rparen ended only the "sizeof parens". According to indent, 
the cast-to type then ends at "tabname)" and what follows is the cast's 
operand, including the + operator; in that case it's assumed to be unary 
and not binary, which is why indent doesn't add the space after it.

The fix was to make it consistent about masking parens:
-   ps.cast_mask |= 1 << ps.p_l_follow;
+   ps.cast_mask |= (1 << ps.p_l_follow & ~ps.sizeof_mask);

The second type of cases are like this:
-   nse = palloc(offsetof(PLpgSQL_nsitem, name) +strlen(name) + 1);
+   nse = palloc(offsetof(PLpgSQL_nsitem, name) + strlen(name) + 1);
pg_bsd_indent simply hasn't been taught that a parenthesized type name 
following the offsetof macro and then an lparen is another exception to 
the rule of thumb that a construction like that generally means a cast.


You'll also notice other, seemingly unrelated changes, most notably the 
rearrangement in numbers assigned to keywords. I've done it that way so 
that it was easier and simpler to keep the -bs option functioning as 
designed.


I've also renamed "sizeof_mask" to "not_cast_mask", because I think the 
latter is a better description of what the mask does (it prevents 
interpreting parenthesized type names as a cast where they aren't, 
namely where they follow sizeof or offsetof; I haven't done any support 
for function declarators and I don't plan to - the fact that 
pg_bsd_indent thinks that "(int" in "char func(int);" begins a cast is 
amusing but it seems harmless for now).


I'm attaching the patch for pg_bsd_indent and also a full diff that 
shows the change in its behavior when run against PG's sources.
diff -Burw indent.c indent.c
--- indent.c	2014-01-31 04:06:43.0 +0100
+++ indent.c	2016-05-22 19:24:01.666077311 +0200
@@ -568,7 +568,9 @@
 		 * happy */
 			if (ps.want_blank && *token != '[' &&
 			(ps.last_token != ident || proc_calls_space
-|| (ps.its_a_keyword && (!ps.sizeof_keyword || Bill_Shannon))))
+			/* offsetof (1) is never allowed a space; sizeof (2) iff -bs;
+			 * all other keywords (>2) always get a space before lparen */
+|| (ps.keyword + Bill_Shannon > 2)))
 *e_code++ = ' ';
 			if (ps.in_decl && !ps.block_init) {
 if (troff && !ps.dumped_decl_indent && !is_procname && ps.last_token == decl) {
@@ -601,17 +603,19 @@
 			 * structure decl or
 			 * initialization */
 			}
-			if (ps.sizeof_keyword)
-ps.sizeof_mask |= 1 << ps.p_l_follow;
+			/* a parenthesized type name following sizeof or offsetof is not
+			 * a cast */
+			if (ps.keyword == 1 || ps.keyword == 2)
+ps.not_cast_mask |= 1 << ps.p_l_follow;
 			break;
 
 		case rparen:	/* got a ')' or ']' */
 			rparen_count--;
-			if (ps.cast_mask & (1 << ps.p_l_follow) & ~ps.sizeof_mask) {
+			if (ps.cast_mask & (1 << ps.p_l_follow) & ~ps.not_cast_mask) {
 ps.last_u_d = true;
 ps.cast_mask &= (1 << ps.p_l_follow) - 1;
 			}
-			ps.sizeof_mask &= (1 << ps.p_l_follow) - 1;
+			ps.not_cast_mask &= (1 << ps.p_l_follow) - 1;
 			if (--ps.p_l_follow < 0) {
 ps.p_l_follow = 0;
 diag(0, "Extra %c", *token);
@@ -780,7 +784,7 @@
 			if (ps.last_token == rparen && rparen_count == 0)
 ps.in_parameter_declaration = 0;
 			ps.cast_mask = 0;
-			ps.sizeof_mask = 0;
+			ps.not_cast_mask = 0;
 			ps.block_init = 0;
 			ps.block_init_level = 0;
 			ps.just_saw_decl--;
@@ -1042,7 +1046,7 @@
 	copy_id:
 			if (ps.want_blank)
 *e_code++ = ' ';
-			if (troff && ps.its_a_keyword) {
+			if (troff && ps.keyword) {
 e_code = chfont(, , e_code);
 for (t_ptr = token; *t_ptr; ++t_ptr) {
 	CHECK_SIZE_CODE;
diff -Burw indent_globs.h indent_globs.h
--- indent_globs.h	2005-11-15 01:30:24.0 +0100
+++ indent_globs.h	2016-05-22 19:23:45.067093287 +0200
@@ -255,10 +255,10 @@
  * comment. In that case, the first non-blank
  * char should be lined up with the comment / */
 	int comment_delta, n_comment_delta;
-	int cast_mask;	/* indicates which close parens close off
- * casts */
-	int sizeof_mask;	/* indicates which close parens close off
- * sizeof''s */
+	int cast_mask;	/* indicates which close parens potentially
+ * close off casts */
+	int 

Re: [HACKERS] Adding an alternate syntax for Phrase Search

2016-05-22 Thread David G. Johnston
On Sun, May 22, 2016 at 3:00 PM, Thom Brown  wrote:

> On 22 May 2016 at 18:52, Josh berkus  wrote:
> > Folks,
> >
> > This came up at pgCon.
> >
> > The 'word <-> word <-> word' syntax for phrase search is not
> > developer-friendly.  While we need the <-> operator for SQL and for the
> > sophisticated cases, it would be really good to support an alternate
> > syntax for the simplest case of "words next to each other".  My proposal
> > is enclosing the phrase in double-quotes, which would be intuitive to
> > users and familiar from search engines.  Thus:
> >
> > to_tsquery(' Berkus & "PostgreSQL Version 10.0" ')
> >
> > ... would be equivalent to:
> >
> > to_tsquery(' Berkus & ( PostgreSQL <-> version <-> 10.0 )')
> >
> > I realize we're already in beta, but pgCon was actually the first time I
> > saw the new syntax.  I think if we don't do this now, we'll be doing it
> > for 10.0.
>
> I think it's way too late for that.  I don't see a problem with
> including it for 10.0, but when the feature freeze has long passed and
> we also have our first beta out, it's no longer a matter of changing
> the design or additional functionality, unless there's something that
> absolutely requires modification.  This isn't that.


Particularly in light of our annual major release cycle, we need to be open
to usability recommendations during Beta 1 (at minimum).  Not everyone with
intelligence, insight, and meaningful uses for our product and features
follows -hackers and compiles from source to try things out during
development.  We should encourage these others to at least voice their
opinions on the new features.

It's not like we get inundated with these kinds of requests.  Let it remain
mostly a resource concern.  If a few people can agree on desirability and
get a patch written, reviewed, and ready-for-commit before the next beta
release, then the release committee, with input from the community, can be
the final arbiter of whether to back-patch it into 9.6 or keep it for 10.0.

I'd like to think that features are the "top-level capabilities" that we
introduce - this is a sub-component of the "phrase search" feature.
Component freeze should occur no earlier than after the second packaged
release.  I'd generally rather have feature freeze earlier and use the
added time for component work and additional general testing if keeping on
the yearly cycle doesn't allow for both.  But I'm tending to think that we
aren't that tightly constrained generally.

David J.


Re: [HACKERS] Adding an alternate syntax for Phrase Search

2016-05-22 Thread Thom Brown
On 22 May 2016 at 18:52, Josh berkus  wrote:
> Folks,
>
> This came up at pgCon.
>
> The 'word <-> word <-> word' syntax for phrase search is not
> developer-friendly.  While we need the <-> operator for SQL and for the
> sophisticated cases, it would be really good to support an alternate
> syntax for the simplest case of "words next to each other".  My proposal
> is enclosing the phrase in double-quotes, which would be intuitive to
> users and familiar from search engines.  Thus:
>
> to_tsquery(' Berkus & "PostgreSQL Version 10.0" ')
>
> ... would be equivalent to:
>
> to_tsquery(' Berkus & ( PostgreSQL <-> version <-> 10.0 )')
>
> I realize we're already in beta, but pgCon was actually the first time I
> saw the new syntax.  I think if we don't do this now, we'll be doing it
> for 10.0.

I think it's way too late for that.  I don't see a problem with
including it for 10.0, but when the feature freeze has long passed and
we also have our first beta out, it's no longer a matter of changing
the design or additional functionality, unless there's something that
absolutely requires modification.  This isn't that.

Thom




[HACKERS] Adding an alternate syntax for Phrase Search

2016-05-22 Thread Josh berkus
Folks,

This came up at pgCon.

The 'word <-> word <-> word' syntax for phrase search is not
developer-friendly.  While we need the <-> operator for SQL and for the
sophisticated cases, it would be really good to support an alternate
syntax for the simplest case of "words next to each other".  My proposal
is enclosing the phrase in double-quotes, which would be intuitive to
users and familiar from search engines.  Thus:

to_tsquery(' Berkus & "PostgreSQL Version 10.0" ')

... would be equivalent to:

to_tsquery(' Berkus & ( PostgreSQL <-> version <-> 10.0 )')

I realize we're already in beta, but pgCon was actually the first time I
saw the new syntax.  I think if we don't do this now, we'll be doing it
for 10.0.

-- 
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)




Re: [HACKERS] Parallel query

2016-05-22 Thread Tatsuo Ishii
> A brief introduction to MPI (Message Passing Interface) is at the
> following URLs. It is a message protocol used for parallel computing,
> just like what DSM does in parallel query. The DSM plays a
> message-passing role (in fact, by passing the query plan/raw node tree
> to another worker) in parallel query.  I think parallel query resembles
> MPI, so I mentioned that we can refer to the MPI benchmarks and use the
> ideas that are used to test parallel computing systems.  If parallel
> query is to be a feature in the future, I think we must have another
> benchmark for it, just as TPC-C serves that role elsewhere.  So, I
> mention MPI.
> 
> https://www.open-mpi.org/
> 
> https://en.wikipedia.org/wiki/Message_Passing_Interface

Thank you for the info.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: [HACKERS] [sqlsmith] Failed assertions on parallel worker shutdown

2016-05-22 Thread Andreas Seltenreich
I wrote:

> There's another class of parallel worker core dumps when testing master
> with sqlsmith.  In these cases, the following assertion fails for all
> workers simulataneously:
>
> TRAP: FailedAssertion("!(mqh->mqh_partial_bytes <= nbytes)", File: "shm_mq.c", Line: 386)

I no longer observe these after applying these two patches by Amit
Kapila:

avoid_restricted_clause_below_gather_v1.patch
Message-ID: 

Re: [HACKERS] [sqlsmith] Failed assertion in parallel worker (ExecInitSubPlan)

2016-05-22 Thread Andreas Seltenreich
Amit Kapila writes:

> avoid_restricted_clause_below_gather_v1.patch
> prohibit_parallel_clause_below_rel_v1.patch

I didn't observe any parallel worker related coredumps since applying
these.  The same amount of testing done before applying them yielded
about a dozen.

Dilip Kumar writes:

> So now it's clear that because of subquery pullup, we may get an expression
> in the targetlist while creating the single-table path list. So we need to
> avoid a parallel plan if it contains an expression.

This sounds like a rather heavy restriction though…

regards,
Andreas




Re: [HACKERS] Parallel query

2016-05-22 Thread Tatsuo Ishii
What's MPI?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

> Maybe we can refer to the MPI test cases.
> 
> On Sun, May 22, 2016 at 3:19 PM, Hao Lee  wrote:
> 
>> What kind of cases do you want to run? Besides the number of cores, I
>> think the working memory and its access rate are also main criteria. As
>> you know, parallel query uses DSM as an IPC tool, which means it will hit
>> the memory access barrier; the memory bus has its access-rate limitation.
>> Different system architectures, such as different CPU architectures,
>> should also be considered when we do the performance test. Do we need to
>> consider what I mentioned above?
>>
>> Best Regards,
>>
>> Hao LEE.
>>
>> On Thu, May 19, 2016 at 11:07 PM, Tatsuo Ishii wrote:
>>
>>> Robert,
>>> (and others who are involved in parallel query of PostgreSQL)
>>>
>>> PostgreSQL Enterprise Consortium (one of the PostgreSQL communities in
>>> Japan, in short "PGECons") is planning to test the parallel query
>>> performance of PostgreSQL 9.6. Besides TPC-H (I know you have already
>>> tested on an IBM box), what kind of tests would you like be performed?
>>>
>>> We are planning to use a big intel box (like more than 60 cores).
>>> Any suggestions are welcome.
>>>
>>> Best regards,
>>> --
>>> Tatsuo Ishii
>>> SRA OSS, Inc. Japan
>>> English: http://www.sraoss.co.jp/index_en.php
>>> Japanese:http://www.sraoss.co.jp
>>>
>>>
>>> --
>>> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgsql-hackers
>>>
>>
>>




Re: [HACKERS] Autovacuum to prevent wraparound tries to consume xid

2016-05-22 Thread Alexander Korotkov
On Sun, May 22, 2016 at 12:39 PM, Amit Kapila wrote:

> On Mon, Mar 28, 2016 at 4:35 PM, Alexander Korotkov <a.korot...@postgrespro.ru> wrote:
>
>> Hackers,
>>
>> one of our customers hit a near-xid-wraparound situation.  The xid
>> counter reached the xidStopLimit value, so no transactions could be
>> executed in normal mode.  But what I noticed is strange behaviour of
>> autovacuum to prevent wraparound.  It vacuums tables, updates pg_class
>> and pg_database, but then fails with the "database is not accepting
>> commands to avoid wraparound data loss in database" message.  We ended up
>> with a situation where, according to pg_database, the maximum age of any
>> database was less than 200 million, but transactions couldn't be
>> executed, because ShmemVariableCache wasn't updated (checked by gdb).
>>
>> I've reproduced this situation on my laptop as follows:
>>
>> 1) Connect gdb, do "set ShmemVariableCache->nextXid =
>> ShmemVariableCache->xidStopLimit"
>> 2) Stop postgres
>> 3) Make some fake clog: "dd bs=1m if=/dev/zero
>> of=/usr/local/pgsql/data/pg_clog/07FF count=1024"
>> 4) Start postgres
>>
>> Then I found the same situation as in the customer database.  Autovacuum to
>> prevent wraparound regularly produced the following messages in the log:
>>
>> ERROR:  database is not accepting commands to avoid wraparound data loss
>> in database "template1"
>> HINT:  Stop the postmaster and vacuum that database in single-user mode.
>> You might also need to commit or roll back old prepared transactions.
>>
>> Finally, all databases were frozen:
>>
>> # SELECT datname, age(datfrozenxid) FROM pg_database;
>>   datname  │   age
>> ───┼──
>>  template1 │0
>>  template0 │0
>>  postgres  │ 5000
>> (3 rows)
>>
>> but no transactions could be executed (ShmemVariableCache wasn't updated).
>>
>> After some debugging I found that vac_truncate_clog consumes an xid just to
>> produce a warning.  I wrote a simple patch which replaces
>> GetCurrentTransactionId() with ShmemVariableCache->nextXid.  That
>> completely fixes this situation for me: ShmemVariableCache was successfully
>> updated.
>>
>
> As per your latest patch, you are using ReadNewTransactionId() to get the
> nextXid, which is then used to check whether any database's frozenxid has
> already wrapped.  Now, isn't the value of nextXid in your patch the same as
> lastSaneFrozenXid in most cases (there is only a small window in which a
> new transaction might have started, advancing
> ShmemVariableCache->nextXid)?  So isn't relying on the lastSaneFrozenXid
> check sufficient?
>

Hmm... So, this code already contains a comparison with lastSaneFrozenXid.
Thus, the current code compares against both lastSaneFrozenXid and myXID, but
there is no comment clarifying why this should be so.  In my opinion we can
just remove myXID and its checks.  Git history shows that Tom Lane
committed the lastSaneFrozenXid and lastSaneMinMulti checks in addition to the
myXID check in 78db307b.

Tom, what do you think?  Could we remove myXID from vac_truncate_clog()?

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


[HACKERS] [sqlsmith] PANIC: failed to add BRIN tuple

2016-05-22 Thread Andreas Seltenreich
There was one instance of this PANIC when testing with the regression
database of master at commit 50e5315.

,
| WARNING:  specified item offset is too large
| PANIC:  failed to add BRIN tuple
| server closed the connection unexpectedly
`

It is reproducible with the query below on this instance only.  I've put
the data directory (20MB) here:

http://ansel.ydns.eu/~andreas/brincrash.tar.xz

The instance was running on Debian Jessie amd64.  Query and Backtrace
below.

regards,
Andreas

--8<---cut here---start->8---
update public.brintest set byteacol = null, charcol =
public.brintest.charcol, int2col = null, int4col =
public.brintest.int4col, textcol = public.brintest.textcol, oidcol =
cast(coalesce(cast(coalesce(null, public.brintest.oidcol) as oid),
pg_catalog.pg_my_temp_schema()) as oid), tidcol =
public.brintest.tidcol, float8col = public.brintest.float8col,
macaddrcol = null, cidrcol = public.brintest.cidrcol, datecol =
public.brintest.datecol, timecol = public.brintest.timecol,
timestamptzcol = pg_catalog.clock_timestamp(), intervalcol =
public.brintest.intervalcol, timetzcol = public.brintest.timetzcol,
bitcol = public.brintest.bitcol, varbitcol =
public.brintest.varbitcol, uuidcol = null returning
public.brintest.byteacol as c0;
--8<---cut here---end--->8---

Core was generated by `postgres: smith regression [local] UPDATE
   '.
Program terminated with signal SIGABRT, Aborted.
#0  0x7fd2cda67067 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
56  ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x7fd2cda67067 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7fd2cda68448 in __GI_abort () at abort.c:89
#2  0x007ec969 in errfinish (dummy=dummy@entry=0) at elog.c:557
#3  0x007f011c in elog_finish (elevel=elevel@entry=20, 
fmt=fmt@entry=0x82ca8f "failed to add BRIN tuple") at elog.c:1378
#4  0x00470618 in brin_doupdate (idxrel=0x101f4c0, pagesPerRange=1, 
revmap=0x10d20e50, heapBlk=8, oldbuf=2878, oldoff=9, origtup=0x10d864a8, 
origsz=6144, newtup=0x5328a88, newsz=6144, samepage=1 '\001') at 
brin_pageops.c:184
#5  0x0046e5bb in brininsert (idxRel=0x101f4c0, values=0x211b, 
nulls=0x6 , 
heaptid=0x, heapRel=0x7fd2ce6fd700, 
checkUnique=UNIQUE_CHECK_NO) at brin.c:244
#6  0x005d887f in ExecInsertIndexTuples (slot=0xe92a560, 
tupleid=0x10d21084, estate=0x9ed8a68, noDupErr=0 '\000', specConflict=0x0, 
arbiterIndexes=0x0) at execIndexing.c:383
#7  0x005f74d5 in ExecUpdate (tupleid=0x7ffe11ea74a0, oldtuple=0x211b, 
slot=0xe92a560, planSlot=0x, epqstate=0x7fd2ce6fd700, 
estate=0x9ed8a68, canSetTag=1 '\001') at nodeModifyTable.c:1015
#8  0x005f7b6c in ExecModifyTable (node=0x9ed8d28) at 
nodeModifyTable.c:1501
#9  0x005dd5d8 in ExecProcNode (node=node@entry=0x9ed8d28) at 
execProcnode.c:396
#10 0x005d962f in ExecutePlan (dest=0xde86040, direction=, numberTuples=0, sendTuples=, operation=CMD_UPDATE, 
use_parallel_mode=, planstate=0x9ed8d28, estate=0x9ed8a68) at 
execMain.c:1567
#11 standard_ExecutorRun (queryDesc=0xde860d8, direction=, 
count=0) at execMain.c:338
#12 0x006f74c9 in ProcessQuery (plan=, 
sourceText=0xd74e88 "update public.brintest[...]", params=0x0, dest=0xde86040, 
completionTag=0x7ffe11ea7670 "") at pquery.c:185
#13 0x006f775f in PortalRunMulti (portal=portal@entry=0xde8abf0, 
isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0xde86040, 
altdest=0xc96680 , 
completionTag=completionTag@entry=0x7ffe11ea7670 "") at pquery.c:1267
#14 0x006f7a0c in FillPortalStore (portal=portal@entry=0xde8abf0, 
isTopLevel=isTopLevel@entry=1 '\001') at pquery.c:1044
#15 0x006f845d in PortalRun (portal=0xde8abf0, 
count=9223372036854775807, isTopLevel=, dest=0x9ee76b8, 
altdest=0x9ee76b8, completionTag=0x7ffe11ea7a20 "") at pquery.c:782
#16 0x006f5c63 in exec_simple_query (query_string=) at 
postgres.c:1094
#17 PostgresMain (argc=233352176, argv=0xe8ad358, dbname=0xcf7508 "regression", 
username=0xe8ad3b0 "Xӊ\016") at postgres.c:4059
#18 0x0046c8b2 in BackendRun (port=0xd1c580) at postmaster.c:4258
#19 BackendStartup (port=0xd1c580) at postmaster.c:3932
#20 ServerLoop () at postmaster.c:1690
#21 0x0069081e in PostmasterMain (argc=argc@entry=4, 
argv=argv@entry=0xcf64f0) at postmaster.c:1298
#22 0x0046d80d in main (argc=4, argv=0xcf64f0) at main.c:228




Re: [HACKERS] Autovacuum to prevent wraparound tries to consume xid

2016-05-22 Thread Amit Kapila
On Mon, Mar 28, 2016 at 4:35 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:

> Hackers,
>
> one of our customers hit a near-xid-wraparound situation: the xid counter
> reached the xidStopLimit value, so no transactions could be executed in
> normal mode.  But what I noticed is strange behaviour of autovacuum to
> prevent wraparound.  It vacuums tables and updates pg_class and pg_database,
> but then fails with a "database is not accepting commands to avoid wraparound
> data loss in database" message.  We ended up in a situation where, according
> to pg_database, the maximum age of any database was less than 200 million,
> but transactions couldn't be executed, because ShmemVariableCache wasn't
> updated (checked by gdb).
>
> I've reproduced this situation on my laptop as follows:
>
> 1) Connect gdb, do "set ShmemVariableCache->nextXid =
> ShmemVariableCache->xidStopLimit"
> 2) Stop postgres
> 3) Make some fake clog: "dd bs=1m if=/dev/zero
> of=/usr/local/pgsql/data/pg_clog/07FF count=1024"
> 4) Start postgres
>
> Then I found the same situation as in the customer database.  Autovacuum to
> prevent wraparound regularly produced the following messages in the log:
>
> ERROR:  database is not accepting commands to avoid wraparound data loss
> in database "template1"
> HINT:  Stop the postmaster and vacuum that database in single-user mode.
> You might also need to commit or roll back old prepared transactions.
>
> Finally, all databases were frozen:
>
> # SELECT datname, age(datfrozenxid) FROM pg_database;
>   datname  │   age
> ───┼──
>  template1 │0
>  template0 │0
>  postgres  │ 5000
> (3 rows)
>
> but no transactions could be executed (ShmemVariableCache wasn't updated).
>
> After some debugging I found that vac_truncate_clog consumes an xid just to
> produce a warning.  I wrote a simple patch which replaces
> GetCurrentTransactionId() with ShmemVariableCache->nextXid.  That
> completely fixes this situation for me: ShmemVariableCache was successfully
> updated.
>

As per your latest patch, you are using ReadNewTransactionId() to get the
nextXid, which is then used to check whether any database's frozenxid has
already wrapped.  Now, isn't the value of nextXid in your patch the same as
lastSaneFrozenXid in most cases (there is only a small window in which a
new transaction might have started, advancing
ShmemVariableCache->nextXid)?  So isn't relying on the lastSaneFrozenXid
check sufficient?


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


[HACKERS] [sqlsmith] Failed assertions on parallel worker shutdown

2016-05-22 Thread Andreas Seltenreich
There's another class of parallel-worker core dumps when testing master
with sqlsmith.  In these cases, the following assertion fails in all
workers simultaneously:

TRAP: FailedAssertion("!(mqh->mqh_partial_bytes <= nbytes)", File: "shm_mq.c", 
Line: 386)

The backtrace of the controlling process is always in
ExecShutdownGatherWorkers.  The queries always work fine when re-run, so
I guess there is some race condition during worker shutdown?  Backtraces
below.

regards
andreas

Core was generated by `postgres: bgworker: parallel worker for PID 30525   
'.
Program terminated with signal SIGABRT, Aborted.
#0  0x7f5a3df91067 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
56  ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x7f5a3df91067 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7f5a3df92448 in __GI_abort () at abort.c:89
#2  0x007eabe1 in ExceptionalCondition 
(conditionName=conditionName@entry=0x984e10 "!(mqh->mqh_partial_bytes <= 
nbytes)", errorType=errorType@entry=0x82a75d "FailedAssertion", 
fileName=fileName@entry=0x984b8c "shm_mq.c", lineNumber=lineNumber@entry=386) 
at assert.c:54
#3  0x006d8042 in shm_mq_sendv (mqh=0x25f17b8, 
iov=iov@entry=0x7ffc6352af00, iovcnt=iovcnt@entry=1, nowait=) at 
shm_mq.c:386
#4  0x006d807d in shm_mq_send (mqh=, nbytes=, data=, nowait=) at shm_mq.c:327
#5  0x005d96b9 in ExecutePlan (dest=0x25f1850, direction=, numberTuples=0, sendTuples=, operation=CMD_SELECT, 
use_parallel_mode=, planstate=0x2612da8, estate=0x2612658) at 
execMain.c:1596
#6  standard_ExecutorRun (queryDesc=0x261a660, direction=, 
count=0) at execMain.c:338
#7  0x005dc7cf in ParallelQueryMain (seg=, 
toc=0x7f5a3ea6c000) at execParallel.c:735
#8  0x004e617b in ParallelWorkerMain (main_arg=) at 
parallel.c:1035
#9  0x00683862 in StartBackgroundWorker () at bgworker.c:726
#10 0x0068e9a2 in do_start_bgworker (rw=0x2590760) at postmaster.c:5531
#11 maybe_start_bgworker () at postmaster.c:5706
#12 0x0046cbba in ServerLoop () at postmaster.c:1762
#13 0x0069081e in PostmasterMain (argc=argc@entry=4, 
argv=argv@entry=0x256d580) at postmaster.c:1298
#14 0x0046d80d in main (argc=4, argv=0x256d580) at main.c:228
(gdb) attach 30525
0x7f5a3e044e33 in __epoll_wait_nocancel () at 
../sysdeps/unix/syscall-template.S:81
81  ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) bt
#0  0x7f5a3e044e33 in __epoll_wait_nocancel () at 
../sysdeps/unix/syscall-template.S:81
#1  0x006d1b4e in WaitEventSetWaitBlock (nevents=1, 
occurred_events=0x7ffc6352aec0, cur_timeout=-1, set=0x44251c0) at latch.c:981
#2  WaitEventSetWait (set=set@entry=0x44251c0, timeout=timeout@entry=-1, 
occurred_events=occurred_events@entry=0x7ffc6352aec0, nevents=nevents@entry=1) 
at latch.c:935
#3  0x006d1f96 in WaitLatchOrSocket (latch=0x7f5a3d898494, 
wakeEvents=wakeEvents@entry=1, sock=sock@entry=-1, timeout=timeout@entry=-1) at 
latch.c:347
#4  0x006d205d in WaitLatch (latch=, 
wakeEvents=wakeEvents@entry=1, timeout=timeout@entry=-1) at latch.c:302
#5  0x004e6d64 in WaitForParallelWorkersToFinish (pcxt=0x442d4e8) at 
parallel.c:537
#6  0x005dcf84 in ExecParallelFinish (pei=0x441cab8) at 
execParallel.c:541
#7  0x005eeead in ExecShutdownGatherWorkers (node=node@entry=0x3e3a070) 
at nodeGather.c:416
#8  0x005ef389 in ExecShutdownGather (node=0x3e3a070) at 
nodeGather.c:430
#9  0x005dd03d in ExecShutdownNode (node=0x3e3a070) at 
execProcnode.c:807
#10 0x0061ad73 in planstate_tree_walker (planstate=0x3e361a8, 
walker=0x5dd010 , context=0x0) at nodeFuncs.c:3442
#11 0x0061ad73 in planstate_tree_walker (planstate=0xf323c30, 
walker=0x5dd010 , context=0x0) at nodeFuncs.c:3442
#12 0x0061ad73 in planstate_tree_walker (planstate=0xf323960, 
walker=0x5dd010 , context=0x0) at nodeFuncs.c:3442
#13 0x005d96da in ExecutePlan (dest=0xb826868, direction=, numberTuples=0, sendTuples=, operation=CMD_SELECT, 
use_parallel_mode=, planstate=0xf323960, estate=0xf322b28) at 
execMain.c:1576
#14 standard_ExecutorRun (queryDesc=0xddca888, direction=, 
count=0) at execMain.c:338
#15 0x006f6e88 in PortalRunSelect (portal=portal@entry=0x258ccc8, 
forward=forward@entry=1 '\001', count=0, count@entry=9223372036854775807, 
dest=dest@entry=0xb826868) at pquery.c:946
#16 0x006f83ae in PortalRun (portal=0x258ccc8, 
count=9223372036854775807, isTopLevel=, dest=0xb826868, 
altdest=0xb826868, completionTag=0x7ffc6352b3d0 "") at pquery.c:787
#17 0x006f5c63 in exec_simple_query (query_string=) at 
postgres.c:1094
#18 PostgresMain (argc=39374024, argv=0x25ed130, dbname=0x256e480 "regression", 
username=0x25ed308 "0\321^\002") at postgres.c:4059
#19 0x0046c8b2 in BackendRun (port=0x25935d0) at postmaster.c:4258
#20 

Re: [HACKERS] Parallel query

2016-05-22 Thread Tatsuo Ishii
Thank you for the suggestion.  Currently no particular test cases are
in my mind; that's why I need input from the community.  Regarding the
test schedule, PGECons will start planning next month or so, so I guess
tests will start no earlier than July.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

> What kind of cases do you want to run?  Besides the number of cores, I
> think working memory and its access rate are also main criteria.  As you
> know, parallel query uses DSM as its IPC mechanism, which means it will
> hit the memory-access barrier; the memory bus has its own bandwidth
> limitation.  Different system architectures (different CPU architectures,
> etc.) should also be considered when we do the performance test.  Do we
> need to consider what I mentioned above?
> 
> Best Regards,
> 
> Hao LEE.
> 
> On Thu, May 19, 2016 at 11:07 PM, Tatsuo Ishii  wrote:
> 
>> Robert,
>> (and others who are involved in parallel query of PostgreSQL)
>>
>> PostgreSQL Enterprise Consortium (one of the PostgreSQL communities in
>> Japan, in short "PGECons") is planning to test the parallel query
>> performance of PostgreSQL 9.6. Besides TPC-H (I know you have already
>> tested on an IBM box), what kind of tests would you like to see performed?
>>
>> We are planning to use a big intel box (like more than 60 cores).
>> Any suggestions are welcome.
>>
>> Best regards,
>> --
>> Tatsuo Ishii
>> SRA OSS, Inc. Japan
>> English: http://www.sraoss.co.jp/index_en.php
>> Japanese:http://www.sraoss.co.jp
>>
>>
>>




[HACKERS] Inheritance

2016-05-22 Thread Jan Johansson
Hi,

I've been reading some threads about inheritance and how complicated it
seems to be to let children inherit constraints and indexes
(behaviors).  In fact, this seems to have been an open issue for years,
with no resolution.

However, inheritance is a very good feature, and it would be great to have
it feature-complete.

To move the feature toward completeness, how about introducing
restrictions on inheritance, such as:

 - Allow single (behavior) inheritance (modeled on quite a few modern
languages, such as C#, D, ...)
 - Allow multiple declarative inheritance (interface-like; inheritance
almost works like this today)

With these restrictions (or maybe only the first), do you think the
implementation would be simpler and the feature more complete?

Kind regards,
Jan Johansson