On Sun, Dec 24, 2017 at 8:37 PM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> On Sun, Dec 24, 2017 at 12:06 PM, Robert Haas <robertmh...@gmail.com> wrote:
>> On Fri, Dec 22, 2017 at 6:18 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
>>
>>> Also, don't we need to use the parallel_divisor for partial paths
>>> instead of non-partial paths, since those are the ones actually
>>> distributed among workers?
>>
>> Uh, that seems backwards to me.  We're trying to estimate the
>> average number of rows per worker.
>
> Okay, but is it appropriate to use the parallel_divisor?  The
> parallel_divisor represents the contribution of all the workers
> (plus the leader's contribution), whereas for a non-partial path
> only a subset of the workers will ever operate on it.  Consider a
> case with one non-partial subpath and five partial subpaths, with
> six as the parallel_divisor: the current code will divide the rows
> of the non-partial subpath across six workers, but in reality
> exactly one worker will execute that path.
That's true, of course, but if five processes each return 0 rows and
the sixth process returns 600 rows, the average number of rows per
process is 100, not anything else.

Here's one way to look at it.  Suppose there is a table with 1000
partitions.  If we do a Parallel Append over a Parallel Seq Scan per
partition, we will come up with a row estimate by summing the
estimated row count across all partitions and dividing by the
parallel_divisor.  This will give us some answer.  If we instead do a
Parallel Append over a Seq Scan per partition, we should really come
up with the *same* estimate.  The only way to do that is to also
divide by the parallel_divisor in this case.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
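[Editor's note: to make the arithmetic above concrete, here is a toy
C sketch.  It is not the actual costing code in costsize.c; the
partition count, per-partition row estimate, and parallel_divisor
are invented for illustration.  It shows that summing partial-path
row counts (which have already been divided by the parallel_divisor)
and dividing the summed non-partial row counts by that same divisor
yield identical per-process estimates.]

#include <stdio.h>

int
main(void)
{
    double  partition_rows = 600.0;     /* invented per-partition estimate */
    int     npartitions = 1000;         /* invented partition count */
    double  parallel_divisor = 6.0;     /* e.g. 5 workers + leader share */

    /*
     * Parallel Append over Parallel Seq Scans: each partial subpath's
     * row estimate has already been divided by the parallel_divisor,
     * so the Append node just sums them.
     */
    double  est_partial =
        npartitions * (partition_rows / parallel_divisor);

    /*
     * Parallel Append over plain Seq Scans: sum the full per-partition
     * row counts, then divide by the parallel_divisor to get the
     * average number of rows per process.
     */
    double  est_nonpartial =
        (npartitions * partition_rows) / parallel_divisor;

    printf("partial subpaths:     %.0f rows per process\n", est_partial);
    printf("non-partial subpaths: %.0f rows per process\n", est_nonpartial);
    return 0;
}

[Both computations print 100000, which is the point of the argument:
the estimate of average rows per process should not depend on whether
the scans underneath the Parallel Append happen to be partial.]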