On Fri, Apr 22, 2016 at 1:31 AM, Gavin Flower <gavinflo...@archidevsys.co.nz> wrote:

> On 22/04/16 06:07, Robert Haas wrote:
>> On Thu, Apr 21, 2016 at 1:48 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>> Robert Haas <robertmh...@gmail.com> writes:
>>>> On Wed, Apr 20, 2016 at 2:28 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>>>> Andres Freund <and...@anarazel.de> writes:
>>>>>> max_parallel_degree currently defaults to 0.  I think we should enable
>>>>>> it by default for at least the beta period. Otherwise we're primarily
>>>>>> going to get reports back after the release.
>>>> So, I suggest that the only sensible non-zero values here are probably
>>>> "1" or "2", given a default pool of 8 worker processes system-wide.
>>>> Andres told me yesterday he'd vote for "2".  Any other opinions?
>>> It has to be at least 2 for beta purposes, else you are not testing
>>> situations with more than one worker process at all, which would be
>>> rather a large omission, no?
>> That's what Andres thought, too.  From my point of view, the big
>> thing is to be using workers at all.  It is of course possible that
>> there could be some bugs where a single worker is not enough, but
>> there are a lot of types of bugs where even one worker would probably
>> find the problem.  But I'm OK with changing the default to 2.
> I'm curious.
> Why not 4?

IIUC, the idea behind changing max_parallel_degree for beta is to catch any
bugs in the parallelism code, not to do any performance testing of it.  So I
think either 1 or 2 should be sufficient to hit the bugs, if there are any.
Do you have any reason to think that we might miss some category of bugs if
we don't use a higher max_parallel_degree?
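For anyone following along who wants to exercise this during beta, a minimal
session sketch, assuming a 9.6-era build where the GUC is still named
max_parallel_degree (the table name below is hypothetical):

```sql
-- Sketch only: max_parallel_degree caps workers per query, drawn from
-- the system-wide pool controlled by max_worker_processes (default 8).
SET max_parallel_degree = 2;

-- "big_table" is a placeholder; on a sufficiently large table, the plan
-- should show a Gather node, and ANALYZE reports "Workers Launched".
EXPLAIN (ANALYZE) SELECT count(*) FROM big_table;
```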

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
