On 23/04/16 00:56, Robert Haas wrote:
If Java has been able to find out how many processors are available to
it since JDK 1.4, then surely PostgreSQL can do the same?
> On Thu, Apr 21, 2016 at 7:20 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Robert Haas <robertmh...@gmail.com> writes:
>>> On Thu, Apr 21, 2016 at 4:01 PM, Gavin Flower wrote:
>>>> Why not 4? As most processors now have at least 4 physical cores,
>>>> & surely it would be more likely to flush out race conditions.
>>> Because if we did that, then it's extremely likely that people would
>>> end up writing queries that are faster only if workers are present,
>>> and then not get any workers.
>> Is that because max_worker_processes is only 8 by default?  Maybe we
>> need to raise that, at least for beta purposes?
> I'm not really in favor of that.  I mean, almost all of our default
> settings are optimized for running PostgreSQL on, for example, a
> Raspberry Pi 2, so it would seem odd to suddenly swing the other
> direction and assume that there are more than 8 unused CPU cores.  It
> doesn't make sense to me to roll out settings in beta that we wouldn't
> be willing to release with if they work out.  That's why, honestly, I
> would prefer max_parallel_degree=1, which I think would be practical
> for many real-world deployments.  max_parallel_degree=2 is OK.  Beyond
> that, we're just setting people up to fail, I think.  Higher settings
> should probably only be used on substantial hardware, and not
> everybody has that.
So how about making the default half the available processors, rounded
up?
Perhaps the GUC for workers should be a percentage of the available
processors, with optional minimum & maximum worker counts, or something
of that nature?
Sent via pgsql-hackers mailing list (firstname.lastname@example.org)