On Dec 2, 2016, at 4:07 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Robert Haas <rh...@postgresql.org> writes:
>> Add max_parallel_workers GUC.
>> Increase the default value of the existing max_worker_processes GUC
>> from 8 to 16, and add a new max_parallel_workers GUC with a maximum
>> of 8.
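[For context, a minimal postgresql.conf sketch of the relationship the commit sets up; the values are those stated in the commit message above, and max_parallel_workers is capped by max_worker_processes:]

```
max_worker_processes = 16   # cluster-wide limit on background worker slots
max_parallel_workers = 8    # subset of the above usable for parallel query
```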
> This broke buildfarm members coypu and sidewinder. It appears the reason
> is that those machines can only get up to 30 server processes, cf this
> pre-failure initdb trace:
> creating directory data-C ... ok
> creating subdirectories ... ok
> selecting default max_connections ... 30
> selecting default shared_buffers ... 128MB
> selecting dynamic shared memory implementation ... sysv
> creating configuration files ... ok
> running bootstrap script ... ok
> performing post-bootstrap initialization ... ok
> syncing data to disk ... ok
> So you've reduced their available number of regular backends to less than
> 20, which is why their tests are now dotted with
> ! psql: FATAL: sorry, too many clients already
> There may well be other machines with similar issues; we won't know until
> today's other breakage clears.
> We could ask the owners of these machines to reduce the test parallelism
> via the MAX_CONNECTIONS makefile variable, but I wonder whether this
> increase was well thought out in the first place.
Signs point to "no". It seemed like a good idea to leave some daylight between
max_parallel_workers and max_worker_processes, but evidently this wasn't the
way to get there. Or else we should just give up on that thought.
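[A rough sketch of the arithmetic behind the failure Tom describes, assuming initdb's probe order for max_connections (100, 50, 40, 30, 20) and the 9.6-era formula MaxBackends = max_connections + autovacuum_max_workers + 1 + max_worker_processes; the host's backend budget of 42 is a hypothetical figure chosen to reproduce the observed "30" result:]

```python
# Illustrative sketch: why doubling max_worker_processes can push initdb
# to a lower max_connections on a resource-constrained machine.
# Assumptions (hypothetical, for illustration): the host's semaphore/shmem
# budget fits about 42 backend slots, and total backends are computed as
#   max_connections + autovacuum_max_workers + 1 + max_worker_processes.

AUTOVACUUM_MAX_WORKERS = 3   # default
LAUNCHER = 1                 # autovacuum launcher process
HOST_BACKEND_BUDGET = 42     # hypothetical machine limit

def probed_max_connections(max_worker_processes):
    """Return the largest max_connections initdb's probe would settle on."""
    for conns in (100, 50, 40, 30, 20):
        total = conns + AUTOVACUUM_MAX_WORKERS + LAUNCHER + max_worker_processes
        if total <= HOST_BACKEND_BUDGET:
            return conns
    raise RuntimeError("no workable max_connections")

old = probed_max_connections(8)    # before the commit
new = probed_max_connections(16)   # after the commit

# With the default superuser_reserved_connections = 3, usable client slots
# drop below the ~20 concurrent sessions the parallel regression schedule
# opens, producing "sorry, too many clients already".
print(old - 3, new - 3)
```

Under these assumptions the probe settles on 30 connections before the commit but only 20 after it, leaving 17 usable client slots once reserved connections are subtracted.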
Sent via pgsql-hackers mailing list (firstname.lastname@example.org)