On Mon, Dec 21, 2015 at 6:38 PM, David Rowley
<david.row...@2ndquadrant.com> wrote:
> On 22 December 2015 at 04:16, Paul Ramsey <pram...@cleverelephant.ca> wrote:
>> Shouldn’t parallel aggregate come into play regardless of scan
>> selectivity?
> I'd say that the costing should take into account the estimated number of
> groups.
> The more tuples that make it into each group, the more attractive parallel
> grouping should seem. In the extreme case if there's 1 tuple per group, then
> it's not going to be of much use to use parallel agg; this would be similar
> to a scan with 100% selectivity. So perhaps the costings for it can be
> modeled around the parallel scan costing, but using the estimated groups
> instead of the estimated tuples.

Generally, the way that parallel costing is supposed to work (with the
parallel join patch, anyway) is that you've got the same nodes costed
the same way you would otherwise, but the row counts are lower because
you're only processing 1/Nth of the rows.  That's probably not exactly
the whole story here, but it's something to think about.

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)