New submission from sds <s...@gnu.org>:

The number of workers (max_workers) I want to use often depends on the server 
load.
Imagine this scenario: I have 64 CPUs and need to run 200 processes.
However, others are using the server too, so the load average is currently 50; 
I therefore set `max_workers` to, say, 20.
Two hours later, those 20 processes are done and the load average has dropped 
to 0 (the 50 processes run by my colleagues have finished as well), so I now 
want to increase the pool size max_workers to 70.
It would be nice if the pool size could be adjusted based on the server's 
load average whenever a worker is started.
Basically, the intent is to maintain a stable load average and full resource 
utilization.
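
For illustration, a minimal workaround sketch (the suggested_workers helper is 
hypothetical, not part of the stdlib): it sizes the pool once from 
os.getloadavg() at startup, which is exactly the decision the current API does 
not let you revisit while the pool is running.

    import os
    from concurrent.futures import ProcessPoolExecutor

    def suggested_workers(target_load=None):
        # Hypothetical helper: pick a worker count so that
        # (current loadavg + new workers) stays near the target load.
        ncpus = os.cpu_count() or 1
        if target_load is None:
            target_load = ncpus
        load1m, _, _ = os.getloadavg()  # 1-minute load average; POSIX-only
        return max(1, int(target_load - load1m))

    def work(n):
        # Stand-in for a real CPU-bound task.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        # The pool size is computed exactly once here; under the current
        # API it cannot be re-evaluated as the load changes later.
        with ProcessPoolExecutor(max_workers=suggested_workers()) as pool:
            results = list(pool.map(work, [1_000_000] * 200))

One possible shape for the enhancement would be to also accept a callable for 
max_workers, re-evaluated by the pool before each worker is started.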

----------
components: Library (Lib)
messages: 361905
nosy: sam-s
priority: normal
severity: normal
status: open
title: max_workers argument to concurrent.futures.ProcessPoolExecutor is not flexible enough
type: enhancement
versions: Python 3.8

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue39617>
_______________________________________