Chris Langton <[email protected]> added the comment:
Interestingly, while you would expect Process or Queue to close their own
resource file descriptors, they don't, because the design defers that cleanup
to the user (i.e. to garbage collection of the object). The interesting part is
that if you 'upgrade' your code to use a Pool, the worker process fds do get
closed, because the pool destroys its worker objects as it goes, so they are
garbage collected much more often.
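For contrast, here is a minimal sketch of the Process pattern being described
(the loop and worker function are illustrative, not from the report): a finished
Process still holds its pipe/sentinel fds until the object itself is garbage
collected, so holding references eats into the fd limit.
#######################################################################
import multiprocessing

def work(i):
    return i * i

if __name__ == '__main__':
    finished = []
    for i in range(10000):
        p = multiprocessing.Process(target=work, args=(i,))
        p.start()
        p.join()
        finished.append(p)  # each Process keeps its fds open until gc'd
    # descriptors are only released once these objects are collected,
    # e.g. after `del finished` or when the interpreter exits
#######################################################################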
Say your OS limits you to a little over 1000 open fds; you can do this:
#######################################################################
import multiprocessing
import json
import os

def process(data):
    # each call writes one small JSON file; the worker's fds are
    # managed by the pool rather than left for garbage collection
    with open('/tmp/fd/%d.json' % data['name'], 'w') as f:
        f.write(json.dumps(data))
    return 'processed %d' % data['name']

if __name__ == '__main__':
    os.makedirs('/tmp/fd', exist_ok=True)  # output directory must exist
    pool = multiprocessing.Pool(1000)
    try:
        for _ in range(10000000):
            x = {'name': _}
            pool.apply(process, args=(x,))
    finally:
        pool.close()
        del pool
#######################################################################
Only the pool's own fds hang around longer than they should, which is a huge
improvement, and you are unlikely to hit a scenario where you need many pool
objects anyway.
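If you want to verify how many descriptors a run is actually holding at any
point, a quick Linux-only helper like this (my own sketch, not from the report)
does the job; calling it before and after the loop makes the difference between
the Process and Pool approaches obvious:
#######################################################################
import os

def open_fd_count():
    # /proc/self/fd has one entry per descriptor currently open
    # in this process (Linux only)
    return len(os.listdir('/proc/self/fd'))
#######################################################################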
----------
_______________________________________
Python tracker <[email protected]>
<https://bugs.python.org/issue33081>
_______________________________________