> I think that the right solution of this issue is generalizing the import
> machinery and allowing it to cache not just files, but arbitrary chunks of
> code. We already use precompiled bytecode files for exactly same goal --
> speed up the startup by avoiding compilation. This solution could be used
> for caching other generated code, not just namedtuples.
I thought about adding a C implementation based on PyStructSequence. But I like Jelle's approach because it may improve performance on all Python implementations: it reduces the source passed to eval and shares code objects for most methods. (See https://github.com/python/cpython/pull/2736#issuecomment-316014866 for a quick and dirty benchmark on PyPy.)

I agree that the template + eval pattern is nice for readability compared to other metaprogramming magic, and that code cache machinery could improve the template + eval pattern in CPython. But namedtuple is very widely used. It's loved enough to be optimized not only for CPython. So I prefer Jelle's approach to adding code cache machinery in this case.

Regards,
INADA Naoki <songofaca...@gmail.com>
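For readers following the thread, here is a minimal sketch of the pattern being discussed: only __new__ is built from a tiny source string and eval'd, while the remaining methods are ordinary functions whose code objects are compiled once and shared by every generated class. This is not the code from Jelle's PR; make_nt and its helpers are hypothetical names used only for illustration, and all of the real implementation's validation is omitted.

    from operator import itemgetter

    def make_nt(typename, field_names):
        """Hypothetical namedtuple-like factory (illustration only, no validation)."""
        fields = tuple(field_names)  # assumes at least one valid identifier

        # Only __new__ is generated from source, so the eval'd snippet stays tiny.
        arglist = ', '.join(fields)
        source = f'lambda _cls, {arglist}: _tuple_new(_cls, ({arglist},))'
        new = eval(source, {'_tuple_new': tuple.__new__})
        new.__name__ = '__new__'

        # These methods are plain functions: their code objects are compiled once,
        # when this module is imported, and shared by every class make_nt returns.
        def _make(cls, iterable):
            return tuple.__new__(cls, tuple(iterable))

        def _replace(self, **kwds):
            result = self._make(map(kwds.pop, fields, self))
            if kwds:
                raise ValueError(f'Got unexpected field names: {list(kwds)!r}')
            return result

        def __repr__(self):
            args = ', '.join(f'{name}={value!r}' for name, value in zip(fields, self))
            return f'{self.__class__.__name__}({args})'

        namespace = {
            '__slots__': (),       # keep instances as plain tuples, no __dict__
            '_fields': fields,
            '__new__': new,
            '_make': classmethod(_make),
            '_replace': _replace,
            '__repr__': __repr__,
        }
        # Field access reuses itemgetter instead of generated per-field code.
        for index, name in enumerate(fields):
            namespace[name] = property(itemgetter(index))

        return type(typename, (tuple,), namespace)

With this sketch, Point = make_nt('Point', ('x', 'y')); Point(1, 2)._replace(y=5) gives Point(x=1, y=5). Because _make, _replace and __repr__ are defined once at import time, every class produced by make_nt reuses their code objects, and only the one-line __new__ source differs per class -- which is the point of the "reduces the source passed to eval and shares code objects" remark above.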