I've been poking at why coverage.postgresql.org shows that the second
stanza in int8_avg_combine isn't exercised, when it clearly should be.
I can reproduce the problem here, so the effect is fairly robust.  Eventually
it occurred to me to try a straight EXPLAIN ANALYZE VERBOSE, and what
I find is that with --enable-coverage, you reproducibly get behavior
like this:

explain analyze verbose
  SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
                                                                            QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=401.70..401.71 rows=1 width=64) (actual time=116.843..116.843 rows=1 loops=1)
   Output: variance(unique1), sum((unique1)::bigint)
   ->  Gather  (cost=401.65..401.66 rows=4 width=64) (actual time=55.079..116.789 rows=4 loops=1)
         Output: (PARTIAL variance(unique1)), (PARTIAL sum((unique1)::bigint))
         Workers Planned: 4
         Workers Launched: 4
         ->  Partial Aggregate  (cost=401.65..401.66 rows=1 width=64) (actual time=5.148..5.148 rows=1 loops=4)
               Output: PARTIAL variance(unique1), PARTIAL sum((unique1)::bigint)
               Worker 0: actual time=0.026..0.026 rows=1 loops=1
               Worker 1: actual time=0.028..0.028 rows=1 loops=1
               Worker 2: actual time=0.023..0.023 rows=1 loops=1
               Worker 3: actual time=20.514..20.515 rows=1 loops=1
               ->  Parallel Seq Scan on public.tenk1  (cost=0.00..382.94 rows=2494 width=4) (actual time=0.009..0.918 rows=2500 loops=4)
                     Output: unique1, unique2, two, four, ten, twenty, hundred, thousand, twothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4
                     Worker 0: actual time=0.007..0.007 rows=0 loops=1
                     Worker 1: actual time=0.007..0.007 rows=0 loops=1
                     Worker 2: actual time=0.005..0.005 rows=0 loops=1
                     Worker 3: actual time=0.018..3.653 rows=10000 loops=1
 Planning Time: 0.116 ms
 Execution Time: 144.237 ms
(20 rows)

So the reason for the apparent lack of coverage in the combine step
is that only one worker ever sends back a non-null partial result:
with three of the four partial states NULL, the combine function never
has to merge two non-null states.  We do have coverage, in the sense
that normal (non-coverage) runs do exercise the code in question, but
you wouldn't know it by looking at the coverage report.

We could probably fix it by using a significantly larger test case,
but that's not very attractive to put into the regression tests.
Anybody have a better idea about how to improve this?  Or even a
clear explanation for what's causing it?  (I'd expect coverage
instrumentation to impose costs at process exit, not startup.)

                        regards, tom lane
