Charles-François Natali added the comment:

I'm not convinced.
The reason is that using the number of CPU cores is just a heuristic
for a *default value*: the API allows the user to specify the number
of workers to use, so it's not really a limitation.
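For instance, with concurrent.futures the default can be overridden
explicitly. A minimal sketch (the worker count of 4 is an arbitrary
choice for illustration):

    from concurrent.futures import ProcessPoolExecutor

    # max_workers overrides the cpu_count()-based default, so the
    # heuristic is only a fallback, not a hard limit.
    with ProcessPoolExecutor(max_workers=4) as executor:
        results = list(executor.map(pow, [2, 3, 5], [10, 10, 10]))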

The problem is that once you try to come up with a more "correct"
default value, it quickly gets complicated: here it's about cgroups,
but for example:
- What if there are multiple processes running on the same box?
- What if the process is subject to CPU affinity? Currently, the CPU
affinity mask is ignored (see the sketch after this list).
- What if the code executed by the children is itself multi-threaded
(e.g. because it uses a numerical library built on BLAS, etc.)?
- What about hyper-threading? If the code has a lot of cache misses,
it would probably be a good idea to use one worker per logical thread,
but if it's cache-friendly, probably not.
- Etc.
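
To illustrate the affinity point above, here's a minimal sketch
(Linux-only: os.sched_getaffinity() isn't available on every
platform) showing how the two counts can differ:

    import os

    # Logical CPUs on the machine (what the cpu_count()-based default uses).
    print("cpu_count:", os.cpu_count())
    # CPUs this process is actually allowed to run on.
    print("affinity: ", len(os.sched_getaffinity(0)))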

In other words, I think there's simply no reasonable default value
for the number of workers: any value will make some class of
users/use cases unhappy, and trying to be smarter would add a lot of
unnecessary complexity.

Since the user can always specify the number of workers - if you find
a place where that's not possible, please report it - I really think
we should leave the choice/burden to the user.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue26692>
_______________________________________