On Wed, Aug 18, 2021 at 10:37 PM Joao S. O. Bueno <jsbu...@python.org.br> wrote:
>
> So,
> It is out of scope of Python multiprocessing, and, as I perceive it, of
> the stdlib as a whole, to allocate specific cores to each subprocess -
> that is done automatically by the O.S. (and of course, since the O.S.
> has an interface for it, one could write a specific Python library that
> would allow this granularity, and it could even check core capabilities).

Python does have a way to set processor affinity (os.sched_setaffinity,
where the platform supports it), so this should be entirely possible.
It might need external tools on other platforms, though.
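
For example - a minimal sketch, assuming a platform where
os.sched_setaffinity is available (e.g. Linux); the core set and the
worker function are just placeholders:

import os
from multiprocessing import Pool

def pin_to_cores(cores):
    # Restrict this worker process to the given set of CPUs.
    # os.sched_setaffinity is only available on some platforms.
    os.sched_setaffinity(0, cores)

def work(x):
    return x * x

if __name__ == "__main__":
    # Pin every worker in this pool to cores 0 and 1 (placeholder choice).
    with Pool(processes=2, initializer=pin_to_cores,
              initargs=({0, 1},)) as pool:
        print(pool.map(work, range(10)))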

> As it stands, however, you simply have to change your approach:
> instead of dividing your workload across different cores before starting,
> the common approach is to set up worker processes, one per core or
> per processor thread, and use those as a pool of resources to which
> you submit your processing work in chunks.
> That way, if a worker happens to be on a faster core, it will finish
> its chunk earlier and accept more work before the workers on slower
> cores are done with theirs.
>

But I agree with this. Easiest to just subdivide the work further.
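
Something like this - a minimal sketch, with the chunk size and the
per-chunk work purely placeholders - is usually enough: make the chunks
small, and workers on faster cores will simply pick up more of them.

from multiprocessing import Pool, cpu_count

def process_chunk(chunk):
    # Placeholder: whatever per-chunk processing the real job needs.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Many small chunks, so a worker on a faster core takes more of them.
    chunks = [data[i:i + 10_000] for i in range(0, len(data), 10_000)]
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(process_chunk, chunks)
    print(sum(results))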

ChrisA
