On Thu, Dec 5, 2019 at 5:38 AM Mark Shannon <m...@hotpy.org> wrote:
> From my limited googling, linux has a hard limit of about 600k file
> descriptors across all processes. So, 1M is well past any reasonable
> per-process limit. My impression is that the limits are lower on
> Windows, is that right?
Linux does limit the total number of file descriptors across all
processes, but the limit is configurable at runtime. 600k is the default
limit, but you can always make it larger (and people do).

In my limited experimentation with Windows, it doesn't seem to impose any
a priori limit on how many sockets you can have open. When I wrote a
simple process that opens as many sockets as it can in a loop (roughly
the sketch appended at the end of this message), I didn't get any error;
eventually the machine just locked up. (I guess this is another example
of why it can be better to have explicit limits!)

-n

--
Nathaniel J. Smith -- https://vorpus.org
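A minimal sketch of that kind of socket-exhaustion test (illustrative
only, not the exact script referenced above) might look like:

    import socket

    # Keep opening sockets until the OS pushes back. On Linux the
    # per-process limit (ulimit -n) or the system-wide cap in
    # /proc/sys/fs/file-max normally stops this with an OSError;
    # in the Windows run described above, no error ever arrived.
    socks = []
    try:
        while True:
            socks.append(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
            if len(socks) % 10000 == 0:
                print(len(socks), "sockets open so far")
    except OSError as exc:
        print("stopped after", len(socks), "sockets:", exc)

On a stock Linux box this fails fairly quickly with "Too many open
files" once the per-process limit (often 1024) is hit; raising the
limits with ulimit -n and sysctl -w fs.file-max=... just moves the
failure point.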