Kyle Stanley <aeros...@gmail.com> added the comment:

> You have to account also for the thread stack size. I suggest to look at RSS 
> memory instead.

Ah, good point. I believe get_size() only accounts for the memory usage of the 
thread object itself, not the physical memory allocated for the thread's stack. 
Thanks for the clarification. 

> I measured the RSS memory per thread: it's around 13.2 kB/thread. Oh, that's 
> way lower than what I expected.

On Python 3.8 and Linux kernel 5.3.8, I received the following result:

# Starting mem
VmRSS:      8408 kB
# After initializing and starting 1k threads:
VmRSS:     21632 kB

That's ~13224 kB for 1k threads, which matches the ~13.2 kB/thread estimate. 

Also, as a sidenote, I think you could remove the "for thread in threads: 
thread.started_event.wait()" portion for our purposes. IIUC, waiting on the 
threading.Event objects wouldn't affect memory usage.
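
For reference, the measurement above boils down to something like this rough 
sketch (not the exact script from the earlier comment; it assumes Linux, since 
VmRSS is read from /proc/self/status):

    import threading

    stop = threading.Event()

    def read_vmrss():
        # Linux-specific: VmRSS of the whole process, from /proc/self/status.
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return line.strip()

    print("# Starting mem")
    print(read_vmrss())

    threads = [threading.Thread(target=stop.wait) for _ in range(1000)]
    for t in threads:
        t.start()

    print("# After initializing and starting 1k threads")
    print(read_vmrss())

    stop.set()          # let the worker threads exit
    for t in threads:
        t.join()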

> To be clear: I mean that FastChildWatcher is safe only if all of the process's 
> code spawns subprocesses via FastChildWatcher.
> If ProcessPoolExecutor or direct subprocess calls are used, the watcher is 
> unsafe.
> If some C extension spawns new processes on its own (e.g. in a separate 
> thread) -- the watcher is unsafe.

> I just think that this particular watcher is too dangerous.

So are we at least in agreement on starting by deprecating FastChildWatcher? 
If a server is so tight on memory that it can't spare ~13.2 kB/thread, 
SafeChildWatcher would remain an alternative to ThreadedChildWatcher.
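
For what it's worth, a server that wants the signal-based watcher can already 
opt in explicitly; a rough sketch (explicit loop setup only to show where 
attach_loop() goes):

    import asyncio

    async def main():
        proc = await asyncio.create_subprocess_exec("echo", "hello")
        await proc.wait()

    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)

    # Use the SIGCHLD-based SafeChildWatcher instead of the 3.8 default
    # ThreadedChildWatcher (which starts one waiter thread per subprocess).
    watcher = asyncio.SafeChildWatcher()
    watcher.attach_loop(loop)
    asyncio.set_child_watcher(watcher)

    loop.run_until_complete(main())
    loop.close()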

Personally, I still think this amount is negligible for most production 
servers, and that we can reasonably deprecate SafeChildWatcher as well. But I 
can start with FastChildWatcher.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue38591>
_______________________________________