Greg Price <gnpr...@gmail.com> added the comment:

> Loading it dynamically reduces the memory footprint.

Ah, this is a good question to ask!

First, FWIW on my Debian buster desktop I get a smaller figure for
`import unicodedata`: only 64 kiB.

$ python
Python 3.7.3 (default, Apr  3 2019, 05:39:12) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status")
VmRSS:      9888 kB

>>> import unicodedata
>>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status")
VmRSS:      9952 kB

But whether it's 64 kiB or 160 kiB, that's much smaller than the 1.1 MiB of the 
whole module.  Which makes sense -- there's no need to bring the whole thing 
into memory when we only import it, or more generally to bring into memory the 
parts we aren't using.  I wouldn't expect that to change materially if the 
tables and algorithms were built in.

Here's another experiment: suppose we load everything that ast.c needs in order 
to handle non-ASCII identifiers.

$ python
Python 3.7.3 (default, Apr  3 2019, 05:39:12) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status")
VmRSS:      9800 kB

>>> là = 3
>>> os.system(f"grep ^VmRSS /proc/{os.getpid()}/status")
VmRSS:      9864 kB

So that also comes to 64 kiB.
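
For context on what "everything that ast.c needs" means here: as I understand 
it, when the compiler sees a non-ASCII identifier it imports unicodedata and 
NFKC-normalizes the name per PEP 3131, and that's what pulls the tables into 
memory in the experiment above.  A rough Python-level approximation (the real 
code is in C, and `normalize_identifier` is just an illustrative name):

    import unicodedata

    def normalize_identifier(name: str) -> str:
        """Approximate the identifier handling that triggers the import:
        ASCII names are used as-is; non-ASCII names are NFKC-normalized
        (PEP 3131)."""
        if name.isascii():
            return name
        return unicodedata.normalize("NFKC", name)

    print(normalize_identifier("là"))   # 'là' (already in NFKC form)
    print(normalize_identifier("ﬁle"))  # 'file' (ligature folds to ASCII)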

We wouldn't want to add 64 kiB to our memory use for no reason; but I think 64 
or 160 kiB is well within the range that's an acceptable cost for a significant 
simplification or improvement to core functionality like Unicode support.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue32771>
_______________________________________