[Python-Dev] Investigating Python memory footprint of one real Web application
Hi, all.

After reading Instagram's blog article [1], I'm thinking about how Python
can reduce the memory usage of Web applications. My company is building an
API server with Flask, SQLAlchemy and typing (sorry, it's closed source),
so I was able to gather some data from its codebase.

[1]: https://engineering.instagram.com/dismissing-python-garbage-collection-at-instagram-4dca40b29172#.lenebvdgn

The report is here:
https://gist.github.com/methane/ce723adb9a4d32d32dc7525b738d3c31

My thoughts are:

* Interning (None,) seems worthwhile.
* There are many empty dicts. Allocating ma_keys lazily may reduce
  memory usage.
* Most large strings are docstrings. Is it worth adding an option to
  strip docstrings without also disabling asserts?
* typing may increase the memory footprint, through functions'
  __annotations__ and abc.
  * Can we add an option to remove or lazily evaluate __annotations__?
  * Using string literals to annotate generic types may reduce WeakRef
    usage.
* Since typing will be used very widely this year, this needs more
  investigation.
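[As a rough way to reproduce the (None,) observation on another
application, a sketch: it walks live function objects, so it only sees
what the GC can reach and undercounts, but it gives an order of
magnitude.]

import gc
import sys
import types

# Count distinct (None,) tuples reachable as co_consts of live functions.
# Interning would collapse them into a single shared instance.
seen = set()
for obj in gc.get_objects():
    if isinstance(obj, types.FunctionType):
        consts = obj.__code__.co_consts
        if consts == (None,):
            seen.add(id(consts))
print(len(seen), "distinct (None,) tuples;",
      max(len(seen) - 1, 0) * sys.getsizeof((None,)),
      "bytes reclaimable by interning")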
Re: [Python-Dev] Investigating Python memory footprint of one real Web application
2017-01-20 11:49 GMT+01:00 INADA Naoki :
> Report is here
> https://gist.github.com/methane/ce723adb9a4d32d32dc7525b738d3c31
Very interesting report, thanks!
> My thoughts are:
>
> * Interning (None,) seems worthwhile.
I guess that (None,) comes from constants of code objects:
>>> def f(): pass
...
>>> f.__code__.co_consts
(None,)
> * There are many empty dicts. Allocating ma_keys lazily may reduce
> memory usage.
Would you be able to estimate how many bytes would be saved by this
change? With the total memory usage to have an idea of the %.
By the way, it would help if you can add the total memory usage
computed by tracemalloc (get_traced_memory()[0]) in your report.
About empty dict, do you expect that they come from shared keys?
Anyway, if it has a negligible impact on the performance, go for it
:-)
> but other namespaces or annotations, like ('return',) or ('__wrapped__',) are
> not shared
Maybe we can intern all tuple which only contains one string?
Instead of interning, would it be possible to at least merge
duplicated immutable objects?
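[For illustration, a user-level sketch of the kind of merge meant here;
intern_tuple and its memo dict are hypothetical helpers, not the
marshal-level change being discussed.]

# Keep one canonical instance per tuple of strings, analogous to string
# interning but implemented with an ordinary memo dict.
_interned = {}

def intern_tuple(t):
    assert isinstance(t, tuple) and all(isinstance(s, str) for s in t)
    return _interned.setdefault(t, t)

a = intern_tuple(('return',))
b = intern_tuple(('return',))
print(a is b)  # True: duplicates collapse into one shared object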
> * Most large strings are docstrings. Is it worth adding an option to
> strip docstrings without also disabling asserts?
Yeah, you are not the first one to propose. The problem is to decide
how to name the .pyc file.
My PEP 511 proposes to add a new -O command line option and a new
sys.implementation.optim_tag string to support this feature:
https://www.python.org/dev/peps/pep-0511/
Since the code transformer part of the PEP seems to be controversal,
maybe we should extract only these two changes from the PEP and
implement them? I also want -O noopt :-) (disable the peephole
optimizer)
Victor
Re: [Python-Dev] Investigating Python memory footprint of one real Web application
On Fri, 20 Jan 2017 19:49:01 +0900 INADA Naoki wrote:
>
> Report is here
> https://gist.github.com/methane/ce723adb9a4d32d32dc7525b738d3c31

"this script counts static memory usage. It doesn’t care about dynamic
memory usage of processing real request"

You may be trying to optimize something which is only a very small
fraction of your actual memory footprint. That said, the marshal
module could certainly try to intern some tuples and other immutable
structures.

> * Most large strings are docstrings. Is it worth adding an option to
> strip docstrings without also disabling asserts?

Perhaps docstrings may be compressed and then lazily decompressed when
accessed for the first time. lz4 and zstd are good modern candidates
for that. zstd also has a dictionary mode that helps for small data
(*). See https://facebook.github.io/zstd/

(*) Even a 200-byte docstring can be compressed this way:

>>> data = os.times.__doc__.encode()
>>> len(data)
211
>>> len(lz4.compress(data))
200
>>> c = zstd.ZstdCompressor()
>>> len(c.compress(data))
156
>>> c = zstd.ZstdCompressor(dict_data=dict_data)
>>> len(c.compress(data))
104

`dict_data` here is a 16KB dictionary I've trained on some Python
docstrings. That 16KB dictionary could be computed while building
Python (or hand-generated from time to time, since it's unlikely to
change a lot) and put in a static array somewhere:

>>> samples = [(mod.__doc__ or '').encode() for mod in sys.modules.values()]
>>> sum(map(len, samples))
258113
>>> dict_data = zstd.train_dictionary(16384, samples)
>>> len(dict_data.as_bytes())
16384

Of course, compression is much more efficient on larger docstrings:

>>> import numpy as np
>>> data = np.__doc__.encode()
>>> len(data)
3140
>>> len(lz4.compress(data))
2271
>>> c = zstd.ZstdCompressor()
>>> len(c.compress(data))
1539
>>> c = zstd.ZstdCompressor(dict_data=dict_data)
>>> len(c.compress(data))
1348

>>> import pdb
>>> data = pdb.__doc__.encode()
>>> len(data)
12018
>>> len(lz4.compress(data))
6592
>>> c = zstd.ZstdCompressor()
>>> len(c.compress(data))
4502
>>> c = zstd.ZstdCompressor(dict_data=dict_data)
>>> len(c.compress(data))
4128

A similar strategy may be used for annotations and other
rarely-accessed metadata.

Another possibility, but probably much more costly in terms of initial
development and maintenance, is to put the docstrings (+ annotations,
etc.) in a separate file that's lazily read.

I think optimizing the footprint for everyone is much better than
adding command-line options to disable some specific metadata.

Regards

Antoine.
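[A user-level sketch of the "compress now, decompress on first access"
idea, using stdlib zlib so it runs anywhere; the real change would live
inside the interpreter and could use zstd/lz4 as above. pack_doc and
get_doc are hypothetical helpers.]

import zlib

_packed_docs = {}  # compressed docstrings, keyed by function object

def pack_doc(func):
    # Decorator sketch: move the docstring into a compressed side table
    # and drop the original string object.
    if func.__doc__:
        _packed_docs[func] = zlib.compress(func.__doc__.encode("utf-8"), 9)
        func.__doc__ = None
    return func

def get_doc(func):
    # Decompress lazily, only when somebody actually asks for the docstring.
    blob = _packed_docs.get(func)
    return zlib.decompress(blob).decode("utf-8") if blob else func.__doc__

@pack_doc
def example():
    """A long docstring whose memory cost is only paid when it is read."""

print(get_doc(example))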
Re: [Python-Dev] Investigating Python memory footprint of one real Web application
On 20 January 2017 at 11:49, INADA Naoki wrote:
> * typing may increase the memory footprint, through functions'
> __annotations__ and abc.
>   * Can we add an option to remove or lazily evaluate __annotations__?

This idea has already come up a few times. I proposed introducing a flag
(e.g. -OOO) to ignore function and variable annotations in compile.c.
It was decided to postpone this, but maybe we can get back to the idea.

In 3.6, typing is already (quite heavily) optimized for both speed and
space. I remember doing an experiment comparing the memory footprint with
and without annotations; the difference was a few percent. Do you have
such a comparison (with and without annotations) for your app? It would
be nice to have a realistic number to estimate what the additional
optimization flag would save.

--
Ivan
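[One rough way to get such a number from a running process, as a sketch:
it counts only the shallow size of the __annotations__ dicts of live
functions, not the annotation values themselves, so it underestimates.]

import gc
import sys
import types

def annotations_footprint():
    count = total = 0
    for obj in gc.get_objects():
        # Reading __annotations__ materializes an empty dict on
        # unannotated functions; only non-empty dicts are counted below.
        if isinstance(obj, types.FunctionType) and obj.__annotations__:
            count += 1
            total += sys.getsizeof(obj.__annotations__)
    return count, total

count, total = annotations_footprint()
print(count, "annotated functions,", total, "bytes of __annotations__ dicts")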
Re: [Python-Dev] Investigating Python memory footprint of one real Web application
> "this script counts static memory usage. It doesn’t care about dynamic
> memory usage of processing real request"
>
> You may be trying to optimize something which is only a very small
> fraction of your actual memory footprint. That said, the marshal
> module could certainly try to intern some tuples and other immutable
> structures.

Yes. I hadn't thought the static memory footprint was so important.

But Instagram tried to increase the CoW efficiency of a prefork
application, and got some success with memory usage and CPU throughput.
That surprised me, because prefork only shares the static memory
footprint.

Maybe sharing some of the tuples that code objects hold can increase
cache efficiency. I'll try running pyperformance with the marshal patch.

>> * Most large strings are docstrings. Is it worth adding an option to
>> strip docstrings without also disabling asserts?
>
> Perhaps docstrings may be compressed and then lazily decompressed when
> accessed for the first time. lz4 and zstd are good modern candidates
> for that. zstd also has a dictionary mode that helps for small data
> (*). See https://facebook.github.io/zstd/
>
> (*) Even a 200-byte docstring can be compressed this way:
>
> >>> data = os.times.__doc__.encode()
> >>> len(data)
> 211
> >>> len(lz4.compress(data))
> 200
> >>> c = zstd.ZstdCompressor()
> >>> len(c.compress(data))
> 156
> >>> c = zstd.ZstdCompressor(dict_data=dict_data)
> >>> len(c.compress(data))
> 104
>
> `dict_data` here is a 16KB dictionary I've trained on some Python
> docstrings. That 16KB dictionary could be computed while building
> Python (or hand-generated from time to time, since it's unlikely to
> change a lot) and put in a static array somewhere:

Interesting. I noticed zstd was added to Mercurial (in the current RC
version). But zstd (and brotli) are new projects, so I'll stay tuned.

> A similar strategy may be used for annotations and other
> rarely-accessed metadata.
>
> Another possibility, but probably much more costly in terms of initial
> development and maintenance, is to put the docstrings (+ annotations,
> etc.) in a separate file that's lazily read.
>
> I think optimizing the footprint for everyone is much better than
> adding command-line options to disable some specific metadata.

I see. Although the -OO option exists, I can't use it to strip only
SQLAlchemy's docstrings: to use -OO in production I would need to check
that none of the dependency libraries require __doc__. We have almost
one year before 3.7 beta 1, so we can find and implement a better way.

> Regards
>
> Antoine.
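[A rough sketch for seeing where the -OO savings would come from, per
top-level package, for whatever is already imported: it counts
characters of module, function and class docstrings only, so it is a
lower bound.]

import sys
import types
from collections import defaultdict

def docstring_chars_by_package():
    totals = defaultdict(int)
    for name, mod in list(sys.modules.items()):
        if not isinstance(mod, types.ModuleType):
            continue
        top = name.partition(".")[0]
        docs = [mod.__doc__]
        for value in vars(mod).values():
            # Only objects defined in this module, to avoid double counting.
            if (isinstance(value, (types.FunctionType, type))
                    and getattr(value, "__module__", None) == name):
                docs.append(value.__doc__)
        totals[top] += sum(len(d) for d in docs if isinstance(d, str))
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for pkg, chars in docstring_chars_by_package()[:10]:
    print(pkg, chars)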
Re: [Python-Dev] Investigating Python memory footprint of one real Web application
On Fri, Jan 20, 2017 at 8:52 PM, Ivan Levkivskyi wrote:
> On 20 January 2017 at 11:49, INADA Naoki wrote:
>>
>> * typing may increase the memory footprint, through functions'
>> __annotations__ and abc.
>>   * Can we add an option to remove or lazily evaluate __annotations__?
>
> This idea has already come up a few times. I proposed introducing a flag
> (e.g. -OOO) to ignore function and variable annotations in compile.c.
> It was decided to postpone this, but maybe we can get back to the idea.
>
> In 3.6, typing is already (quite heavily) optimized for both speed and
> space. I remember doing an experiment comparing the memory footprint
> with and without annotations; the difference was a few percent.
> Do you have such a comparison (with and without annotations) for your app?
> It would be nice to have a realistic number to estimate what the
> additional optimization flag would save.

I'm sorry; I only read the blog article yesterday and investigated one
application today, so I don't have an idea yet of how to compare the
memory overhead of __annotations__.

Also, the project whose codebase I borrowed started using typing very
recently, after reading Dropbox's story, so I don't know what percentage
of the functions are typed.

I'll survey this more later, hopefully within this month.
Re: [Python-Dev] Investigating Python memory footprint of one real Web application
On 2017-01-20 13:15, INADA Naoki wrote:
>> "this script counts static memory usage. It doesn’t care about dynamic
>> memory usage of processing real request"
>>
>> You may be trying to optimize something which is only a very small
>> fraction of your actual memory footprint. That said, the marshal
>> module could certainly try to intern some tuples and other immutable
>> structures.
>
> Yes. I hadn't thought the static memory footprint was so important.
>
> But Instagram tried to increase the CoW efficiency of a prefork
> application, and got some success with memory usage and CPU throughput.
> That surprised me, because prefork only shares the static memory
> footprint.
>
> Maybe sharing some of the tuples that code objects hold can increase
> cache efficiency. I'll try running pyperformance with the marshal patch.

IIRC Thomas Wouters (?) has been working on a patch to move the ref
counter out of the PyObject struct and into a dedicated memory area. He
proposed the idea to improve cache affinity, reduce cache evictions and
to make CoW more efficient. Especially modern ccNUMA machines with
multiple processors could benefit from the improvement, but also single
processor/multi core machines.

Christian
Re: [Python-Dev] Investigating Python memory footprint of one real Web application
On Fri, Jan 20, 2017 at 8:17 PM, Victor Stinner
wrote:
> 2017-01-20 11:49 GMT+01:00 INADA Naoki :
>> Report is here
>> https://gist.github.com/methane/ce723adb9a4d32d32dc7525b738d3c31
>
> Very interesting report, thanks!
>
>> My thoughts are:
>>
>> * Interning (None,) seems worthwhile.
>
> I guess that (None,) comes from constants of code objects:
>
> >>> def f(): pass
> ...
> >>> f.__code__.co_consts
> (None,)
>
>
>> * There are many empty dicts. Allocating ma_keys lazily may reduce
>> memory usage.
>
> Would you be able to estimate how many bytes would be saved by this
> change? With the total memory usage to have an idea of the %.
>
The smallest dictkeysobject is 5*8 + 8 + (8 * 3 * 5) = 168 bytes, so
1600 empty dicts come to 268800 bytes.
Unlike the tuples bound to code objects, I don't think this matters much
for cache hit rate, so tuples are more important than dicts.
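[The exact figures depend on the CPython version and build; a quick
local check:]

import sys

print("empty dict:", sys.getsizeof({}), "bytes")
print("(None,) tuple:", sys.getsizeof((None,)), "bytes")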
> By the way, it would help if you can add the total memory usage
> computed by tracemalloc (get_traced_memory()[0]) in your report.
>
Oh, nice to know it. I'll use it next time.
> About empty dict, do you expect that they come from shared keys?
> Anyway, if it has a negligible impact on the performance, go for it
> :-)
>
>
>> but other namespaces or annotations, like ('return',) or ('__wrapped__',)
>> are not shared
>
> Maybe we can intern all tuple which only contains one string?
Ah, those are dicts' keys. I used print(tuple(d.keys())) to count dicts.
>
> Instead of interning, would it be possible to at least merge
> duplicated immutable objects?
>
I meant sharing the same object; I didn't mean using a dict or adding a
bit for interning, as interned strings do. So I think we have the same
idea.
>
>> * Most large strings are docstrings. Is it worth adding an option to
>> strip docstrings without also disabling asserts?
>
> Yeah, you are not the first one to propose. The problem is to decide
> how to name the .pyc file.
>
> My PEP 511 proposes to add a new -O command line option and a new
> sys.implementation.optim_tag string to support this feature:
> https://www.python.org/dev/peps/pep-0511/
>
> Since the code transformer part of the PEP seems to be controversal,
> maybe we should extract only these two changes from the PEP and
> implement them? I also want -O noopt :-) (disable the peephole
> optimizer)
>
> Victor
Re: [Python-Dev] Investigating Python memory footprint of one real Web application
On Fri, 20 Jan 2017 13:40:14 +0100 Christian Heimes wrote:
>
> IIRC Thomas Wouters (?) has been working on a patch to move the ref
> counter out of the PyObject struct and into a dedicated memory area. He
> proposed the idea to improve cache affinity, reduce cache evictions and
> to make CoW more efficient. Especially modern ccNUMA machines with
> multiple processors could benefit from the improvement, but also single
> processor/multi core machines.

Moving the refcount out of the PyObject will probably make increfs /
decrefs more costly, and there are a lot of them. We'd have to see
actual measurements if a patch is written, but my intuition is that the
net result won't be positive.

Regards

Antoine.
Re: [Python-Dev] Investigating Python memory footprint of one real Web application
Larry Hastings' Gilectomy also moved the reference counter into a
separate memory block, no? (Grouping all refcounts into large memory
blocks, if I understood correctly.)
https://github.com/larryhastings/gilectomy

Victor

2017-01-20 13:40 GMT+01:00 Christian Heimes :
> IIRC Thomas Wouters (?) has been working on a patch to move the ref
> counter out of the PyObject struct and into a dedicated memory area. He
> proposed the idea to improve cache affinity, reduce cache evictions and
> to make CoW more efficient. Especially modern ccNUMA machines with
> multiple processors could benefit from the improvement, but also single
> processor/multi core machines.
>
> Christian
Re: [Python-Dev] Investigating Python memory footprint of one real Web application
> Moving the refcount out of the PyObject will probably make increfs /
> decrefs more costly, and there are a lot of them. We'd have to see
> actual measurements if a patch is written, but my intuition is that the
> net result won't be positive.
>
> Regards
>
> Antoine.

I agree with you. But I have a similar idea: split out only PyGC_Head
(3 words).

A simple implementation may just use a pointer to PyGC_Head instead of
embedding it: +1 word for tracked objects, and -2 words for untracked
objects.

A more complex implementation may use a bitmap for tracking objects,
with the memory pool owning the bitmap. That means the GC module would
have its own memory pool and allocator, or the GC module and obmalloc
would be tightly coupled. But that's too hard; I don't think I can do it
by Python 3.7. Reducing the number of tuples may be easier.
Re: [Python-Dev] Investigating Python memory footprint of one real Web application
On Fri, 20 Jan 2017 22:30:16 +0900 INADA Naoki wrote:
> > Moving the refcount out of the PyObject will probably make increfs /
> > decrefs more costly, and there are a lot of them. We'd have to see
> > actual measurements if a patch is written, but my intuition is that
> > the net result won't be positive.
>
> I agree with you. But I have a similar idea: split out only PyGC_Head
> (3 words).

That sounds like an interesting idea. Once an object is created, the GC
header is rarely accessed. Since the GC header has a small constant
size, it would probably be easy to make its allocation very fast (e.g.
using a freelist). Then the GC header is out of the way, which increases
the cache efficiency of GC-tracked objects.

Regards

Antoine.
Re: [Python-Dev] Investigating Python memory footprint of one real Web application
I've filed an issue about merging tuples:
http://bugs.python.org/issue29336

I'll try the patch with my company's codebase again next week. But could
someone try the patch with a real-world large application too? Or, if
you know of a large OSS application that is easy to install, could you
share its requirements.txt plus a script that imports many of the
modules in the application?
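[For anyone without a large private codebase handy, a minimal harness
sketch: the MODULES list is a placeholder to be filled from an
application's requirements.txt; run it on patched and unpatched builds
and compare the totals.]

import importlib
import tracemalloc

# Placeholder list; replace with the top-level imports of the
# application under test.
MODULES = ["json", "decimal", "email", "http.client", "xml.etree.ElementTree"]

tracemalloc.start()
for name in MODULES:
    importlib.import_module(name)
current, peak = tracemalloc.get_traced_memory()
print("imported", len(MODULES), "modules; current =", current,
      "bytes, peak =", peak, "bytes")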
