On Wed, Aug 17, 2016 at 1:34 AM, Peter Eisentraut
wrote:
> On 6/20/16 11:16 PM, Tom Lane wrote:
>>> > I think this test would only fail if it runs out of workers, and that
>>> > would only happen in an installcheck run against a server configured in a
On 6/20/16 11:16 PM, Tom Lane wrote:
>> > I think this test would only fail if it runs out of workers, and that
>> > would only happen in an installcheck run against a server configured in
>> > a nonstandard way or that is doing something else -- which doesn't
>> > happen on the buildfarm.
>
On Tue, Aug 2, 2016 at 1:17 PM, Peter Eisentraut
wrote:
> On 6/19/16 10:00 PM, Robert Haas wrote:
>>> Independent of that, it would help testing things like this if we allowed
>>> > setting max_worker_processes to 0, instead of the current minimum 1. If
>>> >
On 6/19/16 10:00 PM, Robert Haas wrote:
>> Independent of that, it would help testing things like this if we allowed
>> > setting max_worker_processes to 0, instead of the current minimum 1. If
>> > there a reason for the minimum of 1?
> I believe that's pure brain fade on my part. I think the
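The change being discussed would lower the GUC's floor so a test setup can disable background workers entirely. A minimal sketch of what that enables (the GUC name is real; the value assumes the proposed minimum of 0 is accepted, rather than the then-current floor of 1):

```sql
-- Disable background workers for testing; parallel plans then fall back
-- to running entirely in the leader. Requires a server restart.
alter system set max_worker_processes = 0;
```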
On Tue, Jun 21, 2016 at 1:24 AM, Robert Haas wrote:
>
> On Mon, Jun 20, 2016 at 1:13 PM, Tom Lane wrote:
> > Robert Haas writes:
> >> On Sun, Jun 19, 2016 at 10:23 PM, Tom Lane wrote:
> >>> Personally, I'm +1
Peter Eisentraut writes:
> On 6/19/16 5:55 PM, Tom Lane wrote:
>> Depending on what the percentage actually is, maybe we could treat
>> this like the "random" test, and allow a failure to be disregarded
>> overall? But that doesn't seem very nice either, in view
On 6/19/16 5:55 PM, Tom Lane wrote:
Depending on what the percentage actually is, maybe we could treat
this like the "random" test, and allow a failure to be disregarded
overall? But that doesn't seem very nice either, in view of our
increasing reliance on automated testing. If "random" were
Alvaro Herrera writes:
> Tom Lane wrote:
>> This seems like pretty good evidence that we should remove the "ignored"
>> marking for the random test, and maybe remove that functionality from
>> pg_regress altogether. We could probably adjust the test to decrease its
On Mon, Jun 20, 2016 at 5:47 PM, Robert Haas wrote:
> On Mon, Jun 20, 2016 at 4:52 PM, David G. Johnston
> wrote:
> > Internal or external I do think the number and type of flags described here,
> > for the purposes described, seems
Tom Lane wrote:
> This seems like pretty good evidence that we should remove the "ignored"
> marking for the random test, and maybe remove that functionality from
> pg_regress altogether. We could probably adjust the test to decrease
> its risk-of-failure by another factor of ten or so, if
I wrote:
> Depending on what the percentage actually is, maybe we could treat
> this like the "random" test, and allow a failure to be disregarded
> overall? But that doesn't seem very nice either, in view of our
> increasing reliance on automated testing. If "random" were failing
> 90% of the
On Mon, Jun 20, 2016 at 4:52 PM, David G. Johnston
wrote:
> Internal or external I do think the number and type of flags described here,
> for the purposes described, seems undesirable from an architectural
> standpoint.
Well, that seems like a bold statement to me,
On Mon, Jun 20, 2016 at 4:03 PM, Robert Haas wrote:
> On Mon, Jun 20, 2016 at 3:29 PM, David G. Johnston
> wrote:
> > The entire theory here looks whacked - and seems to fall into the "GUCs
> > controlling results" bucket of undesirable things.
On Mon, Jun 20, 2016 at 3:29 PM, David G. Johnston
wrote:
> The entire theory here looks whacked - and seems to fall into the "GUCs
> controlling results" bucket of undesirable things.
As far as I can see, this entire email is totally wrong and off-base,
because the
On Mon, Jun 20, 2016 at 1:13 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Sun, Jun 19, 2016 at 10:23 PM, Tom Lane wrote:
>>> Personally, I'm +1 for such tinkering if it makes the feature either more
>>> controllable or more
On Mon, Jun 20, 2016 at 12:06 PM, Robert Haas wrote:
> On Sun, Jun 19, 2016 at 10:23 PM, Tom Lane wrote:
> >> although I fear we
> >> might be getting to a level of tinkering with parallel query that
> >> starts to look more like feature development.
>
Robert Haas writes:
> On Sun, Jun 19, 2016 at 10:23 PM, Tom Lane wrote:
>> Personally, I'm +1 for such tinkering if it makes the feature either more
>> controllable or more understandable. After reading the comments at the
>> head of nodeGather.c,
On Mon, Jun 20, 2016 at 1:26 AM, Amit Kapila wrote:
> I have done analysis on this and didn't find any use case where passing
> CURSOR_OPT_PARALLEL_OK in exec_stmt_execsql() can help in parallelizing the
> queries. Basically, there seems to be three ways in which
On Sun, Jun 19, 2016 at 10:23 PM, Tom Lane wrote:
>> although I fear we
>> might be getting to a level of tinkering with parallel query that
>> starts to look more like feature development.
>
> Personally, I'm +1 for such tinkering if it makes the feature either more
>
On Thu, Jun 16, 2016 at 8:20 AM, Robert Haas wrote:
>
> On Wed, Jun 15, 2016 at 10:48 PM, Amit Kapila
wrote:
> > exec_stmt_execsql() is used to execute SQL statements inside plpgsql which
> > includes dml statements as well, so probably you wanted
Robert Haas writes:
> On Sun, Jun 19, 2016 at 5:22 PM, Peter Eisentraut
> wrote:
>> Well, the purpose of the test is to check the error passing between worker
>> and leader. If we just silently revert to not doing that, then we can't
>>
On Sun, Jun 19, 2016 at 5:22 PM, Peter Eisentraut
wrote:
> Well, the purpose of the test is to check the error passing between worker
> and leader. If we just silently revert to not doing that, then we can't
> really be sure that we're testing the right thing.
Peter Eisentraut writes:
> On 6/19/16 3:09 PM, Robert Haas wrote:
>> On Sun, Jun 19, 2016 at 11:36 AM, Tom Lane wrote:
>>> No, it *might* execute in a worker. If you can get one.
>> Right.
> Well, the purpose of the test is to check the
On 6/19/16 3:09 PM, Robert Haas wrote:
On Sun, Jun 19, 2016 at 11:36 AM, Tom Lane wrote:
Amit Kapila writes:
On Sun, Jun 19, 2016 at 10:10 AM, Tom Lane wrote:
Would this not result in unstable test output depending on whether
On Sun, Jun 19, 2016 at 11:36 AM, Tom Lane wrote:
> Amit Kapila writes:
>> On Sun, Jun 19, 2016 at 10:10 AM, Tom Lane wrote:
>>> Would this not result in unstable test output depending on whether the
>>> code executes in the
Amit Kapila writes:
> On Sun, Jun 19, 2016 at 10:10 AM, Tom Lane wrote:
>> Would this not result in unstable test output depending on whether the
>> code executes in the leader or a worker?
> Before doing that test, we set force_parallel_mode=1, so
On Sun, Jun 19, 2016 at 10:10 AM, Tom Lane wrote:
>
> Peter Eisentraut writes:
> > With that, I think it would be preferable to undo the context-hiding
> > dance in the tests, as in the attached patch, no?
>
> Would this not result in
Peter Eisentraut writes:
> With that, I think it would be preferable to undo the context-hiding
> dance in the tests, as in the attached patch, no?
Would this not result in unstable test output depending on whether the
code executes in the leader or a worker?
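Amit's point upthread is that the test in question runs with parallelism forced, so the statement should consistently execute in a worker. A minimal session sketch of that setup (the GUC, table, and column names are the ones used in the thread; output shown is the error reported earlier in the discussion):

```sql
-- Force eligible plans under a Gather node, so execution happens in a
-- parallel worker and any error is raised there rather than in the leader.
set force_parallel_mode = on;
select stringu1::int2 from tenk1 where unique1 = 1;
-- ERROR:  invalid input syntax for integer: "BA"
-- (plus a CONTEXT line naming the parallel worker, unless it is suppressed)
```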
On 6/17/16 9:26 AM, Robert Haas wrote:
On Wed, Jun 15, 2016 at 5:10 PM, Robert Haas wrote:
I'm comfortable with that. Feel free to make it so, unless you want
me to do it for some reason.
Time is short, so I did this.
With that, I think it would be preferable to
On Wed, Jun 15, 2016 at 5:10 PM, Robert Haas wrote:
> I'm comfortable with that. Feel free to make it so, unless you want
> me to do it for some reason.
Time is short, so I did this.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, Jun 15, 2016 at 10:48 PM, Amit Kapila wrote:
> exec_stmt_execsql() is used to execute SQL statements inside plpgsql which
> includes dml statements as well, so probably you wanted to play safe by not
> allowing parallel option from that place. However, I think
On Thu, Jun 16, 2016 at 12:46 AM, Robert Haas wrote:
>
> On Wed, Jun 15, 2016 at 5:23 AM, Amit Kapila
wrote:
> >> > Considering the above analysis is correct, we have below options:
> >> > a. Modify the test such that it actually generates an error and
On Wed, Jun 15, 2016 at 4:59 PM, Peter Eisentraut
wrote:
> On 6/14/16 12:37 PM, Robert Haas wrote:
>> ERROR: can't generate random numbers because you haven't specified a seed
>>
>> ...to which the user will reply, "oh yes I did; in fact I ran SELECT
>>
On 6/14/16 12:37 PM, Robert Haas wrote:
ERROR: can't generate random numbers because you haven't specified a seed
...to which the user will reply, "oh yes I did; in fact I ran SELECT
magic_setseed(42) just before I ran the offending query!". They'll
then contact an expert (hopefully) who will
On Wed, Jun 15, 2016 at 5:23 AM, Amit Kapila wrote:
>> > Considering the above analysis is correct, we have below options:
>> > a. Modify the test such that it actually generates an error and to hide the
>> > context, we can use an exception block and raise some generic error.
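Option (a) above — generate a real error but replace the worker-specific CONTEXT with a generic one — could look roughly like this (a sketch, not the committed test; the table and column names are the ones used in the thread):

```sql
do $$
begin
  -- this cast fails ("BA" is not a valid int2), ideally inside a parallel worker
  perform stringu1::int2 from tenk1 where unique1 = 1;
exception when others then
  -- re-raise a generic error so the test output is stable regardless of
  -- whether the failure happened in the leader or a worker
  raise exception 'error raised while casting in parallel query';
end$$;
```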
On Wed, Jun 15, 2016 at 12:07 PM, Noah Misch wrote:
>
> On Wed, Jun 15, 2016 at 11:50:33AM +0530, Amit Kapila wrote:
> > In short, this test doesn't serve its purpose which is to generate an
> > error from worker.
>
> That's bad. Thanks for figuring out the problem.
>
> > do
On Wed, Jun 15, 2016 at 12:07 PM, Noah Misch wrote:
>
> On Wed, Jun 15, 2016 at 11:50:33AM +0530, Amit Kapila wrote:
> > do $$begin
> > Perform stringu1::int2 from tenk1 where unique1 = 1;
> > end$$;
> >
> > ERROR: invalid input syntax for integer: "BA"
> > CONTEXT:
On Wed, Jun 15, 2016 at 11:50:33AM +0530, Amit Kapila wrote:
> In short, this test doesn't serve its purpose which is to generate an
> error from worker.
That's bad. Thanks for figuring out the problem.
> do $$begin
> Perform stringu1::int2 from tenk1 where unique1 = 1;
> end$$;
>
> ERROR:
On Wed, Jun 15, 2016 at 1:42 AM, Robert Haas wrote:
>
> On Tue, Jun 14, 2016 at 1:14 PM, Tom Lane wrote:
> >
> > I have not dug into the code enough to find out exactly what's happening
> > in Peter's complaint, but it seems like it would be a good idea
On Tue, Jun 14, 2016 at 1:14 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Tue, Jun 14, 2016 at 12:51 PM, Tom Lane wrote:
>>> FWIW, I follow all of your reasoning except this. If we believe that the
>>> parallel worker context
Robert Haas writes:
> On Tue, Jun 14, 2016 at 12:51 PM, Tom Lane wrote:
>> FWIW, I follow all of your reasoning except this. If we believe that the
>> parallel worker context line is useful, then it is a bug that plpgsql
>> suppresses it. If we don't
On Tue, Jun 14, 2016 at 12:51 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Fri, Jun 10, 2016 at 4:12 PM, Robert Haas wrote:
>>> On Fri, Jun 10, 2016 at 1:49 PM, Peter Eisentraut
>>> wrote:
Robert Haas writes:
> On Fri, Jun 10, 2016 at 4:12 PM, Robert Haas wrote:
>> On Fri, Jun 10, 2016 at 1:49 PM, Peter Eisentraut
>> wrote:
>>> Elsewhere in this thread I suggested getting rid of the parallel worker
On Fri, Jun 10, 2016 at 4:12 PM, Robert Haas wrote:
> On Fri, Jun 10, 2016 at 1:49 PM, Peter Eisentraut
> wrote:
>> Regarding the patch that ended up being committed, I wonder if it is
>> intentional that PL/pgSQL overwrites the context
On Fri, Jun 10, 2016 at 1:49 PM, Peter Eisentraut
wrote:
> Regarding the patch that ended up being committed, I wonder if it is
> intentional that PL/pgSQL overwrites the context from the parallel worker.
> Shouldn't the context effectively look like
>
> ERROR:
On 6/7/16 11:43 PM, Noah Misch wrote:
I changed this to keep the main message while overwriting the CONTEXT; a bug
in this area could very well produce some other error rather than no error.
Regarding the patch that ended up being committed, I wonder if it is
intentional that PL/pgSQL
On Tue, Jun 7, 2016 at 11:43 PM, Noah Misch wrote:
> Committed that way.
Thanks for the carefully-considered commit, Noah. And thanks Clement,
Peter, and others involved in figuring out the best way to do this and
drawing attention to it.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
On Tue, Jun 07, 2016 at 10:24:33PM +, Clément Prévost wrote:
> I also considered setting max_parallel_degree to 1 to make the test more
> future-proof but there is a rather long discussion on the setting name (
> https://www.postgresql.org/message-id/20160424035859.gb29...@momjian.us) so
> I
>
> On Sun, May 15, 2016 at 12:53:13PM +, Clément Prévost wrote:
> > After some experiments, I found out that, for my setup (9b7bfc3a88ef7b), a
> > parallel seq scan is used given both parallel_setup_cost
> > and parallel_tuple_cost are set to 0 and given that the table is at least 3
> >
On 6/7/16 1:27 AM, Noah Misch wrote:
Testing under these conditions does not test the planner job at all but at
least some parallel code can be run on the build farm and the test suite
gets 2643 more lines and 188 more functions covered.
Nice.
Please see also my message
Thanks for this patch. I have reviewed it:
On Sun, May 15, 2016 at 12:53:13PM +, Clément Prévost wrote:
> After some experiments, I found out that, for my setup (9b7bfc3a88ef7b), a
> parallel seq scan is used given both parallel_setup_cost
> and parallel_tuple_cost are set to 0 and given
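The setup Clément describes — zeroing the parallel cost GUCs so the planner always finds the parallel plan cheapest — amounts to something like this (the GUC names are real; the session form is shown for brevity):

```sql
-- Make parallel plans free from the planner's point of view, so even a
-- small table can get a parallel sequential scan.
set parallel_setup_cost = 0;
set parallel_tuple_cost = 0;
explain select count(*) from tenk1;  -- a Gather over a Parallel Seq Scan is expected
```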
On Sun, May 29, 2016 at 01:31:24AM -0400, Noah Misch wrote:
> On Sun, May 15, 2016 at 12:53:13PM +, Clément Prévost wrote:
> > On Mon, May 9, 2016 at 4:50 PM Andres Freund wrote:
> > > I think it's a good idea to run a force-parallel run on some buildfarm
> > > members.
On 5/9/16 10:50 AM, Andres Freund wrote:
I think it's a good idea to run a force-parallel run on some buildfarm
members. But I'm rather convinced that the core tests run by all animals
need some minimal coverage of parallel queries. Both because otherwise
it'll be hard to get some coverage of
On Sun, May 15, 2016 at 12:53:13PM +, Clément Prévost wrote:
> On Mon, May 9, 2016 at 4:50 PM Andres Freund wrote:
> > I think it's a good idea to run a force-parallel run on some buildfarm
> > members. But I'm rather convinced that the core tests run by all animals
> >
On Mon, May 9, 2016 at 4:50 PM Andres Freund wrote:
> I think it's a good idea to run a force-parallel run on some buildfarm
> members. But I'm rather convinced that the core tests run by all animals
> need some minimal coverage of parallel queries. Both because otherwise
>
On 12 May 2016 at 07:04, Robert Haas wrote:
> On Wed, May 11, 2016 at 1:57 PM, Robert Haas
wrote:
>> I don't immediately understand what's going wrong here. It looks to
>> me like make_group_input_target() already called, and that worked OK,
>> but
On Wed, May 11, 2016 at 1:57 PM, Robert Haas wrote:
> I don't immediately understand what's going wrong here. It looks to
> me like make_group_input_target() already called, and that worked OK,
> but now make_partialgroup_input_target() is failing using more-or-less
> the
On Wed, May 11, 2016 at 1:48 PM, David G. Johnston
wrote:
> What happens when there are no workers available due to max_worker_processes
> already being assigned?
Then the leader runs the plan after all.
> Related question, if max_parallel_degree is >1 and "the
On Wed, May 11, 2016 at 1:38 PM, Robert Haas wrote:
>> I would just go fix this, along the lines of
>>
>> *** create_plain_partial_paths(PlannerInfo *
>> *** 702,708
>> * with all of its inheritance siblings it may well pay off.
>> */
>>
On Wed, May 11, 2016 at 10:38 AM, Robert Haas wrote:
> On Wed, May 11, 2016 at 12:34 AM, Tom Lane wrote:
> >> Hmm, that is strange. I would have expected that to stuff a Gather on
> >> top of the Aggregate. I wonder why it's not doing that.
> >
> >
On Wed, May 11, 2016 at 12:34 AM, Tom Lane wrote:
>> Hmm, that is strange. I would have expected that to stuff a Gather on
>> top of the Aggregate. I wonder why it's not doing that.
>
> The reason is that create_plain_partial_paths() contains a hard-wired
> decision not to
Robert Haas writes:
> On Mon, May 9, 2016 at 11:11 AM, Tom Lane wrote:
>> regression=# set force_parallel_mode TO on;
>> SET
>> regression=# explain select count(*) from tenk1;
>> QUERY PLAN
>>
On Mon, May 9, 2016 at 11:11 AM, Tom Lane wrote:
> Andres Freund writes:
>> I think it's a good idea to run a force-parallel run on some buildfarm
>> members.
>
> Noah's already doing that on at least one of his critters. But some more
> wouldn't hurt.
I
Andres Freund writes:
> I think it's a good idea to run a force-parallel run on some buildfarm
> members.
Noah's already doing that on at least one of his critters. But some more
wouldn't hurt.
> But I'm rather convinced that the core tests run by all animals
> need some
On 2016-05-08 22:20:55 -0300, Alvaro Herrera wrote:
> David Rowley wrote:
>
> > I'm not entirely sure which machine generates that coverage output,
> > but the problem with it is that it's just one machine. We do have at
> > least one buildfarm member which runs with force_parallel_mode =
> >
On 9 May 2016 at 14:26, Alvaro Herrera wrote:
> David Rowley wrote:
>> On 9 May 2016 at 13:20, Alvaro Herrera wrote:
>
>> > It's not a buildfarm machine, but a machine setup specifically for
>> > coverage reports. It runs "make check-world"
David Rowley wrote:
> On 9 May 2016 at 13:20, Alvaro Herrera wrote:
> > It's not a buildfarm machine, but a machine setup specifically for
> > coverage reports. It runs "make check-world" only. I can add some
> > additional command(s) to run, if somebody can suggest
On 9 May 2016 at 13:20, Alvaro Herrera wrote:
> David Rowley wrote:
>
>> I'm not entirely sure which machine generates that coverage output,
>> but the problem with it is that it's just one machine. We do have at
>> least one buildfarm member which runs with
David Rowley wrote:
> I'm not entirely sure which machine generates that coverage output,
> but the problem with it is that it's just one machine. We do have at
> least one buildfarm member which runs with force_parallel_mode =
> regress.
It's not a buildfarm machine, but a machine setup
On 9 May 2016 at 09:12, Clément Prévost wrote:
> The entire parallel.c reported test coverage is zero:
> http://coverage.postgresql.org/src/backend/access/transam/parallel.c.gcov.html
>
> It seem that it's not covered by the original 924bcf4f commit but I don't
> know if