On Sat, Mar 4, 2017 at 2:17 PM, Peter Geoghegan <p...@bowt.ie> wrote:
> On Sat, Mar 4, 2017 at 12:43 AM, Robert Haas <robertmh...@gmail.com> wrote:
>> Oh.  But then I don't see why you need min_parallel_anything.  That's
>> just based on an estimate of the amount of data per worker vs.
>> maintenance_work_mem, isn't it?
> Yes -- and it's generally a pretty good estimate.
> I don't really know what minimum amount of memory to insist workers
> have, which is why I provisionally chose one of those GUCs as the
> threshold.
> Any better ideas?

I don't understand how min_parallel_anything is telling you anything
about memory.  It has, in general, nothing to do with that.

If you think parallelism isn't worthwhile unless the sort was going to
be external anyway, then it seems like the obvious thing to do is
divide the projected size of the sort by maintenance_work_mem, round
down, and cap the number of workers to the result.  If the result of
compute_parallel_workers() based on min_parallel_table_scan_size is
smaller, then use that value instead.  I must be confused, because I
actually thought that was the exact algorithm you were describing, and
it sounded good to me.

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)