On 2018-11-01 10:10:33 -0700, Paul Ramsey wrote:
> On Wed, Oct 31, 2018 at 2:11 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
> > Darafei "Komяpa" Praliaskouski <m...@komzpa.net> writes:
> > > Question is, what's the best policy to allocate cores so we can play
> > > nice with rest of postgres?
> >
> > There is not, because we do not use or support multiple threads inside
> > a Postgres backend, and have no intention of doing so any time soon.
>
> As a practical matter though, if we're multi-threading a heavy PostGIS
> function, presumably simply grabbing *every* core is not a recommended or
> friendly practice. My finger-in-the-wind guess would be that the value
> of max_parallel_workers_per_gather would be the most reasonable value to
> use to limit the number of cores a parallel PostGIS function should use.
> Does that make sense?
I'm not sure that's a good approximation. Postgres' infrastructure prevents
every query from using max_parallel_workers_per_gather processes due to the
global max_worker_processes limit. I think you probably would want something
very very roughly approximating a global limit - otherwise you'll either
need to set the per-process limit way too low, or overwhelm machines with
context switches.

Greetings,

Andres Freund