From my perspective, this is much, much better. For sufficiently large
tables, I get parallel behaviour without jimmying with the defaults
on parallel_setup_cost and parallel_tuple_cost. *And*, the parallel
behaviour *is* sensitive to the costs of functions in target lists, so
reasonably chosen costs will flip us into a parallel mode for expensive
functions against smaller tables too.
Hopefully some variant of this finds its way into core! Is there any way I
can productively help?
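For anyone wanting to reproduce the behaviour described above, a minimal sketch follows. The table and function names are hypothetical, and note that without the patch under discussion, the default parallel_setup_cost and parallel_tuple_cost may keep the planner serial on a table this small regardless of function cost:

```sql
-- Hypothetical setup: a table small enough that row count alone
-- would not normally justify a parallel plan.
CREATE TABLE pts AS
  SELECT g AS id, random() AS x
  FROM generate_series(1, 10000) g;
ANALYZE pts;

-- Declare the function expensive via COST; with cost-sensitive
-- parallel planning, this should be what flips us into parallel
-- mode even against a smaller table.
CREATE FUNCTION expensive_fn(float8) RETURNS float8
  LANGUAGE sql IMMUTABLE PARALLEL SAFE
  COST 10000
  AS 'SELECT sqrt($1)';

-- Look for a Gather / Partial Aggregate node in the plan.
EXPLAIN SELECT sum(expensive_fn(x)) FROM pts;
```

The same experiment with `COST 1` on the function should fall back to a serial plan, which is the sensitivity Paul is describing.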

On Sat, Nov 4, 2017 at 10:02 PM, Amit Kapila <amit.kapil...@gmail.com>
wrote:

> On Sat, Nov 4, 2017 at 4:43 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> > Paul Ramsey <pram...@cleverelephant.ca> writes:
> >>> Whether I get a parallel aggregate seems entirely determined by the
> >>> number of rows, not the cost of preparing those rows.
> >
> >> This is true, as far as I can tell, and unfortunate. Feeding tables with
> >> 100ks of rows, I get parallel plans; feeding 10ks of rows, never, no
> >> matter how costly the work going on within. That's true of changing costs
> >> on the subquery select list, and on the aggregate transfn.
> >
> > This sounds like it might be the same issue being discussed in
> >
> > https://www.postgresql.org/message-id/flat/CAMkU=1ycXNipvhWuweUVpKuyu6SpNjF=yhwu4c4us5jgvgx...@mail.gmail.com
> >
> I have rebased the patch being discussed on that thread.
> Paul, you might want to check the recent patch [1] posted on
> the thread mentioned by Tom.
> [1] - https://www.postgresql.org/message-id/CAA4eK1%2B1H5Urm0_Wp-n5XszdLX1YXBqS_zW0f-vvWKwdh3eCJA%40mail.gmail.com
> --
> With Regards,
> Amit Kapila.
> EnterpriseDB: http://www.enterprisedb.com
