On 18 April 2018 at 07:26, Alvaro Herrera wrote:
> David Rowley wrote:
>
>> I've made another pass over the nodeAppend.c code and I'm unable to
>> see what might cause this, although I did discover a bug where
>> first_partial_plan is not set taking into account that some subplans
>> may have been pruned away during executor init.
David Rowley wrote:
> I've made another pass over the nodeAppend.c code and I'm unable to
> see what might cause this, although I did discover a bug where
> first_partial_plan is not set taking into account that some subplans
> may have been pruned away during executor init. The only thing I think
On 11 April 2018 at 18:58, David Rowley wrote:
> On 10 April 2018 at 08:55, Tom Lane wrote:
>> Alvaro Herrera writes:
>>> David Rowley wrote:
Okay, I've written and attached a fix for this. I'm not 100% certain
that this is the cause of the problem on pademelon, but the code does
On 10 April 2018 at 08:55, Tom Lane wrote:
> Alvaro Herrera writes:
>> David Rowley wrote:
>>> Okay, I've written and attached a fix for this. I'm not 100% certain
>>> that this is the cause of the problem on pademelon, but the code does
>>> look wrong, so needs to be fixed. Hopefully, it'll mak
On 10 April 2018 at 09:58, Alvaro Herrera wrote:
> I then noticed that support for nfiltered3 was incomplete; hence 0001.
> (I then noticed that nfiltered3 was added for MERGE. It looks wrong to
> me.)
>
> Frankly, I don't like this. I would rather have an instrument->ntuples2
> rather than thes
Alvaro Herrera writes:
> I then noticed that support for nfiltered3 was incomplete; hence 0001.
> (I then noticed that nfiltered3 was added for MERGE. It looks wrong to
> me.)
In that case, it's likely to go away when Simon reverts MERGE. Suggest
you hold off committing until he's done so, as h
Andrew Gierth wrote:
> > "Alvaro" == Alvaro Herrera writes:
>
> Alvaro> Thanks for cleaning that up. I'll look into why the test
> Alvaro> (without this commit) fails with force_parallel_mode=regress
> Alvaro> next week.
>
> Seems clear enough to me - the "Heap Fetches" statistic is kept
Alvaro Herrera writes:
> David Rowley wrote:
>> Okay, I've written and attached a fix for this. I'm not 100% certain
>> that this is the cause of the problem on pademelon, but the code does
>> look wrong, so needs to be fixed. Hopefully, it'll make pademelon
>> happy, if not I'll think a bit hard
David Rowley wrote:
> Okay, I've written and attached a fix for this. I'm not 100% certain
> that this is the cause of the problem on pademelon, but the code does
> look wrong, so needs to be fixed. Hopefully, it'll make pademelon
> happy, if not I'll think a bit harder about what might be causin
On 9 April 2018 at 15:03, David Rowley wrote:
> On 9 April 2018 at 13:03, David Rowley wrote:
> Okay, I've written and attached a fix for this. I'm not 100% certain
> that this is the cause of the problem on pademelon, but the code does
> look wrong, so needs to be fixed. Hopefully, it'll make p
On 9 April 2018 at 13:03, David Rowley wrote:
> On 9 April 2018 at 09:54, Tom Lane wrote:
>> BTW, pademelon just exhibited a different instability in this test:
>>
>> ***
>> /home/bfarm/bf-data/HEAD/pgsql.build/src/test/regress/expected/partition_prune.out
>> Sun Apr 8 01:56:04 2018
>> ---
>
On 9 April 2018 at 09:54, Tom Lane wrote:
> BTW, pademelon just exhibited a different instability in this test:
>
> ***
> /home/bfarm/bf-data/HEAD/pgsql.build/src/test/regress/expected/partition_prune.out
> Sun Apr 8 01:56:04 2018
> ---
> /home/bfarm/bf-data/HEAD/pgsql.build/src/test/regress/
Andrew Gierth writes:
> "Alvaro" == Alvaro Herrera writes:
> Alvaro> Thanks for cleaning that up. I'll look into why the test
> Alvaro> (without this commit) fails with force_parallel_mode=regress
> Alvaro> next week.
> Seems clear enough to me - the "Heap Fetches" statistic is kept in the
>
> "Alvaro" == Alvaro Herrera writes:
Alvaro> Thanks for cleaning that up. I'll look into why the test
Alvaro> (without this commit) fails with force_parallel_mode=regress
Alvaro> next week.
Seems clear enough to me - the "Heap Fetches" statistic is kept in the
IndexOnlyScanState node in i
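For context, the effect described here is easy to see by hand; a rough
sketch (object names invented for illustration, not from the thread):

    set force_parallel_mode = regress;
    create table t (a int primary key);
    insert into t select generate_series(1, 1000);
    vacuum t;   -- set the visibility map, so Heap Fetches would normally be 0
    explain (analyze, costs off, timing off, summary off)
      select a from t where a < 100;
    -- "Heap Fetches" is counted in the worker's own IndexOnlyScanState
    -- rather than in the shared instrumentation, so the leader's
    -- EXPLAIN ANALYZE output can fail to reflect it.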
Andrew Gierth wrote:
> > "David" == David Rowley writes:
>
> David> I've attached my proposed fix for the unstable regression tests.
> David> I removed the vacuums I'd added in the last version and
> David> commented why we're doing set enable_indexonlyscan = off;
>
> Looks basically sane
> "David" == David Rowley writes:
David> I've attached my proposed fix for the unstable regression tests.
David> I removed the vacuums I'd added in the last version and
David> commented why we're doing set enable_indexonlyscan = off;
Looks basically sane - I'll try it out and commit it sh
On 8 April 2018 at 15:34, Andrew Gierth wrote:
> You can't ever assume that data you just inserted will become
> all-visible just because you just vacuumed the table, unless you know
> that there is NO concurrent activity that might have a snapshot (and no
> other possible reason why OldestXmin mi
> "David" == David Rowley writes:
>> Setting autovacuum_naptime to 10 seconds makes it occur in 10 second
>> intervals...
David> Ok, I thought it might have been some concurrent vacuum on the
David> table but the only tables I see being vacuumed are system
David> tables.
It's not vacuu
On 8 April 2018 at 15:21, Andrew Gierth wrote:
> David> Setting autovacuum_naptime to 10 seconds makes it occur in 10
> David> second intervals...
>
> Analyze (including auto-analyze on a different table entirely) has a
> snapshot, which can hold back OldestXmin, hence preventing the
> all-visib
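Spelling out the sequence being described (session labels and table
names here are illustrative):

    -- Session 1: any long-lived snapshot, e.g. auto-analyze of an
    -- entirely unrelated table, holds back OldestXmin while it runs.
    ANALYZE some_other_table;

    -- Session 2, concurrently:
    VACUUM tprt_1;   -- cannot mark pages all-visible: the recently
                     -- inserted tuples are still newer than OldestXmin
    SELECT relallvisible FROM pg_class WHERE relname = 'tprt_1';
    -- sometimes returns 0 instead of the expected 1, which is what
    -- makes the test's plans unstable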
On 8 April 2018 at 15:02, David Rowley wrote:
> On 8 April 2018 at 14:56, David Rowley wrote:
>> It happens 12 or 13 times on my machine, then does not happen again
>> for 60 seconds, then happens again.
>
> Setting autovacuum_naptime to 10 seconds makes it occur in 10 second
> intervals...
Ok,
> "David" == David Rowley writes:
>> It happens 12 or 13 times on my machine, then does not happen again
>> for 60 seconds, then happens again.
David> Setting autovacuum_naptime to 10 seconds makes it occur in 10
David> second intervals...
Analyze (including auto-analyze on a different
On 8 April 2018 at 14:56, David Rowley wrote:
> It happens 12 or 13 times on my machine, then does not happen again
> for 60 seconds, then happens again.
Setting autovacuum_naptime to 10 seconds makes it occur in 10 second
intervals...
--
David Rowley http://www.2ndQuadrant
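One way to confirm the correlation with autovacuum, assuming a scratch
cluster you can reconfigure:

    # postgresql.conf
    autovacuum_naptime = 10s
    log_autovacuum_min_duration = 0   # log every autovacuum/auto-analyze run

Then match the autovacuum log timestamps against the times the test
failure reproduces.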
On 8 April 2018 at 12:15, Alvaro Herrera wrote:
> Yeah, I don't quite understand this problem, and I tend to agree that
> it likely isn't this patch's fault. However, for the moment I'm going
> to avoid pushing the patch you propose because maybe there's a bug
> elsewhere and it'd be good to unde
Yeah, I don't quite understand this problem, and I tend to agree that
it likely isn't this patch's fault. However, for the moment I'm going
to avoid pushing the patch you propose because maybe there's a bug
elsewhere and it'd be good to understand it. I'm looking at it now.
If others would prefe
On 8 April 2018 at 11:26, David Rowley wrote:
> On 8 April 2018 at 10:59, David Rowley wrote:
>> Sometimes I see:
>>
>>  relname | relallvisible
>> ---------+---------------
>>  tprt_1  |             0
>>  tprt_2  |             1
>>
>> Other times I see:
>>
>> relname | relallvisible
>>
On 8 April 2018 at 10:59, David Rowley wrote:
> Sometimes I see:
>
>  relname | relallvisible
> ---------+---------------
>  tprt_1  |             0
>  tprt_2  |             1
>
> Other times I see:
>
>  relname | relallvisible
> ---------+---------------
>  tprt_1  |             0
>  tprt_2  |
On 8 April 2018 at 09:57, Tom Lane wrote:
> Alvaro Herrera writes:
>> Support partition pruning at execution time
>
> Buildfarm member lapwing doesn't like this. I can reproduce the
> failures here by setting force_parallel_mode = regress. Kind
> of looks like instrumentation counts aren't gett
Alvaro Herrera writes:
> Support partition pruning at execution time
Buildfarm member lapwing doesn't like this. I can reproduce the
failures here by setting force_parallel_mode = regress. Kind
of looks like instrumentation counts aren't getting propagated
from workers back to the leader?
On 8 April 2018 at 09:02, Alvaro Herrera wrote:
> Support partition pruning at execution time
I'm looking at buildfarm member lapwing's failure [1] now.
Probably it can be fixed by adding a vacuum, but will need a few mins
to test and produce a patch.
[1]
https://buildfarm.postgresql.org/cgi-b
Support partition pruning at execution time
Existing partition pruning is only able to work at plan time, for query
quals that appear in the parsed query. This is good but limiting, as
there can be parameters that appear later that can be usefully used to
further prune partitions.
This commit ad
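For reference, a minimal example of the kind of case the commit
targets (schema invented here, not taken from the commit message):

    create table tprt (a int) partition by range (a);
    create table tprt_1 partition of tprt for values from (1) to (10);
    create table tprt_2 partition of tprt for values from (10) to (20);

    prepare q (int) as select * from tprt where a < $1;
    explain (analyze, costs off) execute q (5);
    -- Once a generic plan is in use, the parameter value is only known
    -- at executor startup; with execution-time pruning, tprt_2's
    -- subplan is reported as "(never executed)" instead of scanned.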