[issue43093] Make modules picklable
Arusekk added the comment:

Sorry, I forgot to state my actual goal: for module objects to be picklable in native CPython (possibly from the C layer), without the need to add this to every piece of code that wants to pickle module objects. The point is that they can already be unpickled from the representation generated by that code, so it should only be necessary to implement __reduce__ in moduleobject.c, and probably to change the object's qualname / module attributes. Or to introduce a helper function, as PyPy did, for the fake-module case; I simply found supporting the existing unpickling paths more elegant (it works across all Python versions since 2.0.0), however inelegant the monkey-patching in my example may be.

It is possible that (for the real-module case) _compat_pickle.REVERSE_IMPORT_MAPPING should be considered as well for the old protocols (which would probably imply using __reduce_ex__ instead), but I did not explore that.

--
___
Python tracker <https://bugs.python.org/issue43093>
___
___
Python-bugs-list mailing list
Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
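As a Python-level illustration only (the real change would live in moduleobject.c, and module_reduce is a hypothetical name, not an existing API), such a __reduce__ could distinguish real, importable modules from manually created ones:

```python
import sys
import types

def module_reduce(mod):
    # Hypothetical sketch of a module __reduce__.
    # Real, importable modules pickle as an __import__ call, so the
    # existing unpickling paths keep working; manually created
    # ("fake") modules pickle as a ModuleType construction whose
    # state is a copy of the module's __dict__.
    if sys.modules.get(mod.__name__) is mod:
        return (__import__, (mod.__name__, None, None, ('',)))
    return (types.ModuleType, (mod.__name__,), vars(mod).copy())
```

For example, module_reduce(sys) yields an __import__ call, while for a fake module the third tuple element carries the module's attributes as state.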
[issue43093] Make modules picklable
New submission from Arusekk:

Currently, pickling functions and types stores modules by their name. So I believe it is possible to support pickling module objects with the following code (based on the logic in PyPy, which supports pickling modules):

    import copyreg
    import types
    import pickle
    import sys

    def picklemod(mod):
        if mod.__name__ in sys.modules:
            # real modules
            return (__import__, (mod.__name__, None, None, ('',)))
        # module objects created manually:
        return (types.ModuleType, (mod.__name__,), mod.__dict__)

    copyreg.pickle(types.ModuleType, picklemod)

    pickle.loads(pickle.dumps(sys))  # works

    import http.server
    pickle.loads(pickle.dumps(http.server))  # works for nested modules

    fakemod = types.ModuleType('fakemod')
    fakemod.field1 = 'whatever'

    # This fake type is used instead of types.ModuleType in order to
    # re-confuse pickle back on track.  It should not have been
    # necessary in the first place, but types.ModuleType has
    # misconfigured fields according to pickle (and they are read-only).
    class _types_ModuleType(types.ModuleType):
        __module__ = 'types'
        __name__ = __qualname__ = 'ModuleType'

    _orig_types_ModuleType = types.ModuleType

    # bad monkey-patching, but needed for the confusion to work
    types.ModuleType = _types_ModuleType
    dump = pickle.dumps(fakemod)
    # not necessary, but done to show that unpickling is correct regardless
    types.ModuleType = _orig_types_ModuleType

    pickle.loads(dump).field1  # works

Disclaimer: I do not see any specific use for this; I was just surprised while trying to port the snakeoil library to PyPy. snakeoil uses the sys module as an example of an unpicklable object (they should switch to a traceback, for instance, but that is outside the scope of this issue).
--
components: Library (Lib)
messages: 386090
nosy: Arusekk
priority: normal
severity: normal
status: open
title: Make modules picklable
type: enhancement
___
Python tracker <https://bugs.python.org/issue43093>
___
[issue40519] Preserve AttributeError exception context in __getattr__
Arusekk added the comment:

Feel free to reuse the patches if you have better ideas.

--
Added file: https://bugs.python.org/file49123/robust.patch
___
Python tracker <https://bugs.python.org/issue40519>
___
[issue40519] Preserve AttributeError exception context in __getattr__
New submission from Arusekk:

This is another attempt at issue 39865, but with a different attitude. Please see that issue for some extra motivation for this change.

My suggestion is to change the getattr logic, which now is (in this case):

    def getattr(obj, attr):
        try:
            return normal_getattr(obj, attr)
        except AttributeError:
            pass
        return obj.__getattr__(attr)

to be more like:

    def getattr(obj, attr):
        try:
            return normal_getattr(obj, attr)
        except AttributeError:
            return obj.__getattr__(attr)

A __getattr__ function would then be able to know the exception context (through sys.exc_info()) and to have that context automatically attached to all the exceptions it raises (still respecting raise ... from None).

This particular issue lies only in Objects/typeobject.c, but is probably valid for other uses of PyErr_Clear() in the interpreter. I checked some using a simple shell pipeline:

    $ grep -r -A5 PyErr_ExceptionMatches | grep -C5 PyErr_Clear

and found some examples that may be worth looking into:

    Python/sysmodule.c:708
    Parser/tokenizer.c:1110
    Objects/memoryobject.c:fix_error_int

I prepared two patches for this (please forgive me if I violated code style somewhere; I am not claiming they are a final version ready to be merged): a simple one, addressing just this very issue, and a robust one, allowing other places (e.g. from the list above) to reuse it.

--
components: Interpreter Core
files: typeobject.patch
keywords: patch
messages: 368152
nosy: Arusekk, ammar2, pasenor
priority: normal
severity: normal
status: open
title: Preserve AttributeError exception context in __getattr__
type: behavior
versions: Python 3.9
Added file: https://bugs.python.org/file49122/typeobject.patch
___
Python tracker <https://bugs.python.org/issue40519>
___
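For illustration, here is a minimal reproduction of the lost context (Demo is a made-up class, not from the patches): the property's AttributeError triggers __getattr__, but by then the original exception has been cleared, so on current CPython it never appears as the __context__ of the error the caller sees.

```python
import sys

class Demo:
    @property
    def attr(self):
        # the real failure, but it gets swallowed before
        # __getattr__ runs
        raise AttributeError("inner failure in the property")

    def __getattr__(self, name):
        # on current CPython there is no handled exception here:
        # the property's error was cleared by PyErr_Clear()
        assert sys.exc_info()[0] is None
        raise AttributeError(f"{name!r} not found")

try:
    Demo().attr
except AttributeError as exc:
    # the misleading outer error carries no chained context
    print(exc, '| context:', exc.__context__)
```

With the proposed change, exc.__context__ would instead hold the property's AttributeError, making the real failure visible in the traceback.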
[issue34372] Compiler could output more accurate line numbers
New submission from Arusekk:

If this is a duplicate, please excuse me. The most noticeable inaccuracy happens when the postfix if-else expression is involved; maybe there are more cases. The problem is quite self-explanatory. The module named 'dis' is helpful for reproducing the issue.

>>> import dis
>>> code = """(
... [
... call1(),
... call2()
... ]
... + call3()
... * call4()
... )"""
>>> dis.dis(code)
  3           0 LOAD_NAME                0 (call1)
              3 CALL_FUNCTION            0 (0 positional, 0 keyword pair)

  4           6 LOAD_NAME                1 (call2)
              9 CALL_FUNCTION            0 (0 positional, 0 keyword pair)
             12 BUILD_LIST               2

  6          15 LOAD_NAME                2 (call3)
             18 CALL_FUNCTION            0 (0 positional, 0 keyword pair)

  7          21 LOAD_NAME                3 (call4)
             24 CALL_FUNCTION            0 (0 positional, 0 keyword pair)
             27 BINARY_MULTIPLY
             28 BINARY_ADD
             29 RETURN_VALUE
>>> dis.dis(code.replace("+", "if").replace("*", "else"))
  6           0 LOAD_NAME                0 (call3)
              3 CALL_FUNCTION            0 (0 positional, 0 keyword pair)
              6 POP_JUMP_IF_FALSE       25
              9 LOAD_NAME                1 (call1)
             12 CALL_FUNCTION            0 (0 positional, 0 keyword pair)
             15 LOAD_NAME                2 (call2)
             18 CALL_FUNCTION            0 (0 positional, 0 keyword pair)
             21 BUILD_LIST               2
             24 RETURN_VALUE

  7     >>   25 LOAD_NAME                3 (call4)
             28 CALL_FUNCTION            0 (0 positional, 0 keyword pair)
             31 RETURN_VALUE

I used this code to show the difference between if-else and some arithmetic. Note that in the if-else variant the calls to call1 and call2 are all attributed to line 6 instead of lines 3 and 4. AFAICT the feature is possible to implement, as lnotab can contain negative line differences. I don't know whether this is just a bug or a fully intended feature, but it would be quite an enhancement to have better line number tracking, useful for debugging. If this is implemented, it may be worth further backporting.
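The attributed line numbers can also be inspected without reading the full disassembly, via dis.findlinestarts (the exact pairs vary across CPython versions, so no expected output is shown; the source below is the if-else variant from above):

```python
import dis

# the if-else variant of the example expression, one token per line
source = (
    "(\n"
    "[\n"
    "call1(),\n"
    "call2()\n"
    "]\n"
    "if call3()\n"
    "else call4()\n"
    ")"
)
code = compile(source, "<demo>", "eval")

# each pair is (bytecode offset, source line first attributed there)
for offset, line in dis.findlinestarts(code):
    print(offset, line)
```

On interpreters affected by this issue, no offset inside the list display is attributed back to lines 3 and 4.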
Possible reasons in the upstream Python/compile.c (using < instead of !=):
https://github.com/python/cpython/blob/077059e0f086cf8c8b7fb9d1f053e38ddc743f59/Python/compile.c#L4092
https://github.com/python/cpython/blob/077059e0f086cf8c8b7fb9d1f053e38ddc743f59/Python/compile.c#L4438

--
components: Interpreter Core
messages: 323371
nosy: Arusekk
priority: normal
severity: normal
status: open
title: Compiler could output more accurate line numbers
type: behavior
versions: Python 3.7, Python 3.8
___
Python tracker <https://bugs.python.org/issue34372>
___
[issue17852] Built-in module _io can lose data from buffered files at exit
Arusekk <arek_...@o2.pl> added the comment:

Since the issue seems to have been active lately, may I suggest my view on solving it. One solution that comes to mind is to keep a weak reference to the file and to register an atexit function per file (and to unregister it when the file is closed). Example concept-illustrating Python code for the _pyio module:

    import atexit, operator, weakref
    # ...

    class TextIOWrapper(TextIOBase):
        def __init__(self):
            # ...
            self._weakclose = operator.methodcaller('close')
            atexit.register(self._weakclose, weakref.proxy(self))
            # ...

        def close(self):
            atexit.unregister(self._weakclose)
            # ... and now actually close the file

There is a possibility of a negative impact arising from the use of operator.methodcaller, because it may return something cached in future versions of Python. There is also the issue of unregistering an atexit function during atexit processing. But the general idea seems simple and worth considering.

--
nosy: +Arusekk
___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue17852>
___
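The pattern above can be tried out in plain Python with a toy class (Resource is a made-up stand-in, not actual _pyio code):

```python
import atexit
import operator
import weakref

class Resource:
    # toy stand-in for a buffered file object
    def __init__(self, name):
        self.name = name
        self.closed = False
        # each methodcaller is a distinct object and methodcaller does
        # not define __eq__, so atexit.unregister() (which removes by
        # equality) removes only this instance's hook
        self._weakclose = operator.methodcaller('close')
        # register with a weak proxy so the registration itself
        # does not keep the object alive
        atexit.register(self._weakclose, weakref.proxy(self))

    def close(self):
        if not self.closed:
            atexit.unregister(self._weakclose)
            self.closed = True

r = Resource('demo')
r.close()  # unregisters the hook and marks the object closed
```

One caveat this sketch does not handle: if an instance is garbage-collected without close(), the still-registered hook will call close() on a dead proxy at interpreter shutdown and raise ReferenceError, so a real implementation would need to guard against that as well.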