Peter Eisentraut <peter.eisentr...@2ndquadrant.com> writes:
> When building with --with-blocksize=16, the select_parallel test fails
> with this difference:
> explain (costs off)
>    select sum(parallel_restricted(unique1)) from tenk1
>    group by(parallel_restricted(unique1));
> -                      QUERY PLAN
> -----------------------------------------------------
> +                 QUERY PLAN
> +-------------------------------------------
>   HashAggregate
>     Group Key: parallel_restricted(unique1)
> -   ->  Index Only Scan using tenk1_unique1 on tenk1
> -(3 rows)
> +   ->  Gather
> +         Workers Planned: 4
> +         ->  Parallel Seq Scan on tenk1
> +(5 rows)
>  set force_parallel_mode=1;
>  explain (costs off)

> We know that different block sizes cause some test failures, mainly
> because of row ordering differences.  But this looked a bit different.

I suspect what is happening is that min_parallel_relation_size is being
interpreted differently (because the default is set at 1024 blocks,
regardless of what BLCKSZ is) and that's affecting the cost estimate
for the parallel seqscan.  The direction of change seems a bit
surprising though; if the table is now half as big blocks-wise, how did
that make the parallel scan look cheaper?  Please step through
create_plain_partial_paths and see what is being done differently.
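
To see the shape of the problem, here's a toy program based on the
block-count heuristic as I remember it (a paraphrase of the
create_plain_partial_paths logic, not the exact source;
guess_parallel_workers and the 32MB table size are made up for
illustration):

#include <stdio.h>

/* Stand-in for the GUC: the shipped default is 1024 *blocks*,
 * whatever BLCKSZ happens to be. */
static int min_parallel_relation_size = 1024;

/*
 * Rough paraphrase of the worker-count heuristic: below the threshold
 * we don't consider a partial path at all; past it, each tripling of
 * the threshold adds one more worker.
 */
static int
guess_parallel_workers(unsigned int pages)
{
	unsigned int threshold = (unsigned int) min_parallel_relation_size;
	int			workers;

	if (pages < threshold)
		return 0;				/* too small to bother parallelizing */

	workers = 1;
	while (pages >= threshold * 3)
	{
		workers++;
		threshold *= 3;
	}
	return workers;
}

int
main(void)
{
	/* The same 32MB table, measured in 8K blocks and in 16K blocks. */
	unsigned int bytes = 32u * 1024 * 1024;

	printf("BLCKSZ=8K : pages=%u -> workers=%d\n",
		   bytes / 8192, guess_parallel_workers(bytes / 8192));
	printf("BLCKSZ=16K: pages=%u -> workers=%d\n",
		   bytes / 16384, guess_parallel_workers(bytes / 16384));
	return 0;
}

Held at the same byte size, halving the page count pushes a table toward
fewer workers, not more, so this toy model by itself doesn't explain why
the 16K build is the one that suddenly prefers the parallel plan.  That's
what makes stepping through the real code worthwhile.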

Possibly we ought to change things so that the default value of
min_parallel_relation_size is a fixed number of bytes rather than a
fixed number of blocks.  Not sure though.
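
For scale: 1024 blocks at the standard 8K BLCKSZ is 8MB, so a
byte-denominated default of 8MB would preserve today's behavior in
standard builds while letting the block threshold track BLCKSZ.  A
minimal sketch of the conversion (MIN_PARALLEL_RELATION_BYTES is a
hypothetical name, nothing in the tree):

#include <stdio.h>

/* Hypothetical byte-denominated default: 8MB, which is exactly the
 * current 1024-block default at the standard 8K BLCKSZ. */
#define MIN_PARALLEL_RELATION_BYTES (8u * 1024 * 1024)

int
main(void)
{
	unsigned int blcksz;

	/* Derive the block threshold from the block size instead of
	 * hard-wiring 1024, so every build sees the same byte cutoff. */
	for (blcksz = 8192; blcksz <= 16384; blcksz *= 2)
		printf("BLCKSZ=%-5u -> threshold = %u blocks\n",
			   blcksz, MIN_PARALLEL_RELATION_BYTES / blcksz);
	return 0;
}

That prints 1024 blocks for the 8K build and 512 for the 16K build,
i.e. the same 8MB cutoff either way.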

			regards, tom lane