29.03.2024 11:59, Alexander Lakhin wrote:
> But it looks like subselect is not the only test that can fail due to
> vacuum instability. I see that create_index also suffers from cranky
> ConditionalLockBufferForCleanup() (+if (rand() % 10 == 0) ...

Just for the record, I think I've reproduced the
I wrote:
> I experimented with the attached modified version of the patch,
> which probes just after the relevant VACUUMs and reduces the
> crankiness of ConditionalLockBufferForCleanup a bit to more nearly
> approximate what we're likely to see in the buildfarm.

Sigh, forgot to attach the patch,
Alexander Lakhin writes:
> 29.03.2024 16:51, Tom Lane wrote:
>> Ouch. So what's triggering that? The intention of test_setup
>> surely is to provide a uniform starting point.
> Thanks for your attention to the issue!
> Please try the attached...

I experimented with the attached modified version of the patch,
which probes just after the relevant VACUUMs and reduces the
crankiness of ConditionalLockBufferForCleanup a bit to more nearly
approximate what we're likely to see in the buildfarm.
Hello Tom,

29.03.2024 16:51, Tom Lane wrote:
> Alexander Lakhin writes:

I think that deviation can be explained by the fact that cost_index() takes
baserel->allvisfrac (derived from pg_class.relallvisible) into account for
the index-only-scan case, and I see the following difference when a test
run fails:
Alexander Lakhin writes:
> I think that deviation can be explained by the fact that cost_index() takes
> baserel->allvisfrac (derived from pg_class.relallvisible) into account for
> the index-only-scan case, and I see the following difference when a test
> run fails:
> relname |
29.03.2024 11:59, Alexander Lakhin wrote:
> But it looks like subselect is not the only test that can fail due to
> vacuum instability. I see that create_index also suffers from cranky
> ConditionalLockBufferForCleanup() (+if (rand() % 10 == 0)
> return false; ), although it's placed in parallel_schedule
29.03.2024 11:59, Alexander Lakhin wrote:
> This simple change fixes the issue for me:
> -VACUUM ANALYZE tenk2;
> +VACUUM (ANALYZE, DISABLE_PAGE_SKIPPING) tenk2;

I'm sorry, I wasn't persevering enough when testing that...
After more test runs, I see that in fact it doesn't help.

Best regards,
28.03.2024 20:33, Tom Lane wrote:

But I'm a bit confused - how come the estimates do change at all? The
analyze simply fetches 30k rows, and tenk only has 10k of them. So we
should have *exact* numbers, and it should be exactly the same for all
the analyze runs. So how come it changes like
On Thu, Mar 28, 2024 at 11:00 PM Alexander Lakhin wrote:
> When running multiple 027_stream_regress.pl test instances in parallel
> (and with aggressive autovacuum) on a rather slow machine, I encountered
> test failures due to the subselect test instability just as the following
> failures on buildfarm:
On Fri, Mar 29, 2024 at 1:33 AM Tom Lane wrote:
> Tomas Vondra writes:
> > Yeah. I think it's good to design the data/queries in such a way that
> > the behavior does not flip due to minor noise like in this case.
>
> +1

Agreed. The query in question is:
-- we can pull up the sublink into
Tomas Vondra writes:
> Yeah. I think it's good to design the data/queries in such a way that
> the behavior does not flip due to minor noise like in this case.

+1

> But I'm a bit confused - how come the estimates do change at all? The
> analyze simply fetches 30k rows, and tenk only has 10k of them.
On 3/28/24 16:00, Alexander Lakhin wrote:
> ...
>
> Using the trick Thomas proposed in [1] (see my modification attached), I
> could reproduce the failure easily on my workstation with no specific
> conditions:
> 2024-03-28 14:05:13.792 UTC client backend[2358012] pg_regress/test_setup LOG:
Hello hackers,

When running multiple 027_stream_regress.pl test instances in parallel
(and with aggressive autovacuum) on a rather slow machine, I encountered
test failures due to the subselect test instability just as the following
failures on buildfarm:
1)