[issue26832] ProactorEventLoop doesn't support stdin/stdout nor files with connect_read_pipe/connect_write_pipe
Min RK added the comment:

It appears that connect_read_pipe also doesn't accept pipes returned by `os.pipe`. If that's the case, what _does_ ProactorEventLoop.connect_read_pipe accept? I haven't been able to find any examples of `connect_read_pipe` that work on Windows, and every connect_read_pipe call in the cpython test suite appears to be skipped on win32. Should it still be raising NotImplementedError on ProactorEventLoop?

I think the error handling could be better (I only get logged errors, nothing I can catch/handle). It seems like `connect_read_pipe` itself should raise when it fails to register the pipe with IOCP. If that's not feasible, connection_lost/transport.close should probably be triggered, but it isn't with Python 3.9, at least.

Example that works on POSIX, but seems to fail with non-catchable errors with ProactorEventLoop:

```
import asyncio
import os
import sys


class PipeProtocol(asyncio.Protocol):
    def __init__(self):
        self.finished = asyncio.Future()

    def connection_made(self, transport):
        print("connection made", file=sys.stderr)
        self.transport = transport

    def connection_lost(self, exc):
        print("connection lost", exc, file=sys.stderr)
        self.finished.set_result(None)

    def data_received(self, data):
        print("data received", data, file=sys.stderr)
        self.handler(data)

    def eof_received(self):
        print("eof received", file=sys.stderr)
        self.finished.set_result(None)


async def test():
    r, w = os.pipe()
    rf = os.fdopen(r, 'r')
    x, p = await asyncio.get_running_loop().connect_read_pipe(PipeProtocol, rf)
    await asyncio.sleep(1)
    print("writing")
    os.write(w, b'asdf')
    await asyncio.sleep(2)
    print("closing")
    os.close(w)
    await asyncio.wait([p.finished], timeout=3)
    x.close()


if __name__ == "__main__":
    asyncio.run(test())
```

-- nosy: +minrk versions: +Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue26832> ___ ___ Python-bugs-list mailing list Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39529] Deprecate get_event_loop()
Min RK added the comment: Oops, I interpreted "not deprecated by oversight" as the opposite of what you meant. Sorry! All clear, now. -- ___ Python tracker <https://bugs.python.org/issue39529> ___
[issue39529] Deprecate get_event_loop()
Min RK added the comment:

Thank you! I think I have enough information to update.

> IMHO, asyncio.set_event_loop()...[is] not deprecated by oversight.

I'm curious: what is an appropriate use of `asyncio.set_event_loop()` if you can never get the event loop with `get_event_loop()`? If you always have to pass the handle around anyway, I'm not sure what the use case for a write-only global would be.

-- ___ Python tracker <https://bugs.python.org/issue39529> ___
[issue39529] Deprecate get_event_loop()
Min RK added the comment: Further digging reveals that `policy.get_event_loop()` is _not_ deprecated while `asyncio.get_event_loop()` is. Is that intentional? Does that mean switching our calls to `get_event_loop_policy().get_event_loop()` should continue to work without deprecation? -- ___ Python tracker <https://bugs.python.org/issue39529> ___
[issue39529] Deprecate get_event_loop()
Min RK added the comment:

The comments in this thread suggest that `set_event_loop` should also be deprecated, but it hasn't been. It doesn't seem to have any use without `get_event_loop()`.

I'm trying to understand the consequences of these changes for IPython, and to make the changes intended by asyncio folks, but am not quite clear yet. If I understand correctly, this means that the whole concept of a 'current' event loop is deprecated while no event loop is running? My interpretation of these changes is that any persistent handle on an event loop while it isn't running is fully the responsibility of individual libraries (e.g. tornado, IPython).

This is coming up in IPython, where we need a handle on the event loop and advance it with `run_until_complete` for each iteration (it should be the same loop, to maintain persistent state across advances, so `asyncio.run()` would not be appropriate). We previously relied on `get_event_loop` to manage this handle, but I think we now have to shift to tracking our own handle, and can no longer rely on standard APIs to track a shared instance across packages.

-- nosy: +minrk ___ Python tracker <https://bugs.python.org/issue39529> ___
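[Editor's illustration] The pattern described above can be sketched as follows. This is my own minimal sketch, not IPython's actual implementation; the `LoopRunner` name and its API are hypothetical:

```python
import asyncio


class LoopRunner:
    """Hypothetical sketch: own the loop handle instead of relying on
    asyncio.get_event_loop() to track a 'current' loop between runs."""

    def __init__(self):
        # One persistent loop, so asyncio state survives across
        # advances; asyncio.run() would create and close a fresh loop
        # on every call.
        self.loop = asyncio.new_event_loop()

    def advance(self, coro):
        # Run the loop just long enough to complete one coroutine.
        return self.loop.run_until_complete(coro)

    def close(self):
        self.loop.close()
```

Each `advance()` call resumes the same loop, which is the property `asyncio.run()` cannot provide.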
[issue36841] Supporting customization of float encoding in JSON
Min RK added the comment:

We just ran into this in Jupyter, where we've removed a pre-processing step for data structures passed to json.dumps; it took care of this, but was expensive: https://github.com/jupyter/jupyter_client/pull/706

My expectation was that our `default` would be called for the unsupported value, but it isn't. I see the PR proposes a new option, but would it be sensible to use the already-existing `default` callback for this? It seems like what `default` is for.

-- ___ Python tracker <https://bugs.python.org/issue36841> ___
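[Editor's illustration] To make the behavior under discussion concrete (my sketch, not from the report): `json.dumps` only invokes `default` for objects whose *type* it doesn't recognize, so it is never consulted for a `float`, even one with no valid JSON representation:

```python
import json


def fallback(obj):
    # Only reached for unrecognized types; never for float.
    return "replaced"


# A non-finite float is emitted as the non-standard token 'NaN'
# rather than being routed through `default`.
print(json.dumps(float("nan"), default=fallback))  # NaN

# Even when NaN is rejected outright, `default` is still not consulted:
try:
    json.dumps(float("nan"), default=fallback, allow_nan=False)
except ValueError as exc:
    print("rejected without calling default:", exc)
```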
[issue36841] Supporting customization of float encoding in JSON
Change by Min RK : -- nosy: +minrk nosy_count: 5.0 -> 6.0 pull_requests: +27016 pull_request: https://github.com/python/cpython/pull/28648 ___ Python tracker <https://bugs.python.org/issue36841> ___
[issue37373] Configuration of windows event loop for libraries
Min RK added the comment:

A hiccup to using uvloop is that it doesn't support Windows yet (https://github.com/MagicStack/uvloop/issues/14), so it can't be used in the affected environment.

I'm exploring this again for pyzmq / Jupyter, and currently investigating relying on tornado's AddThread loop functionality. It's even slightly easier for tornado, which can reasonably set the proactor-wrapper policy at IOLoop start time, which means `asyncio.get_event_loop()` returns a loop with add_reader. But pyzmq doesn't get invoked until an event loop is already running. That means the selector thread needs to work not as a wrapper of the loop itself, as in tornado's AddThreadSelector, but attached after-the-fact. Using tornado's AddThread seems to work for this, but I'm not sure that should be assumed.

-- nosy: +minrk ___ Python tracker <https://bugs.python.org/issue37373> ___
[issue32911] Doc strings no longer stored in body of AST
Min RK <benjami...@gmail.com> added the comment:

In the A/B vote, I cast mine for B, for what it is worth, but it is not strongly held.

From the IPython side, I don't view our particular issue as a major regression for users. The only affected case for us is interactively typed string literals in single-statement cells not displaying themselves as results. Since the same string is necessarily already displayed in the input, this isn't a huge deal. This is pretty rare (maybe folks do this while investigating unicode issues?) and we can handle it by recompiling empty modules with 'single' instead of the usual 'exec' that we use, because most IPython inputs are multi-statement cells coming from things like notebooks. It's relevant to note that *any* logic in the cell, e.g. `"%i" % 1` or additional statements, has no issues.

The proposed 'multiline' or 'interactive' compile mode would suit IPython very well, since that's what we really want: single * N, not actually a module. This is illustrated by the way we do execution: compile with exec, then iterate through module.body and run the nodes one at a time.

-- nosy: +minrk ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32911> ___
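[Editor's illustration] The execution strategy described at the end — compile with 'exec', then run module.body node by node — can be sketched like this. This is my simplified version, not IPython's actual code:

```python
import ast

source = "x = 1\n'a bare string literal'\nx + 1"

# Parse the whole cell once, then execute statement by statement.
tree = ast.parse(source, mode="exec")
ns = {}
for node in tree.body:
    # Wrap each top-level statement in an Interactive node and compile
    # in 'single' mode, so bare expressions are echoed through
    # sys.displayhook the way they are at the interactive prompt.
    code = compile(ast.Interactive(body=[node]), "<cell>", "single")
    exec(code, ns)
```

Running this executes the assignment silently and echoes the bare string literal and the final expression, which plain 'exec' compilation of the whole module would not do.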
[issue29321] Wrong documentation (Language Ref) for unicode and str comparison
Changes by RK-5wWm9h <rkist...@brocade.com>: -- title: Wrong documentation for unicode and str comparison -> Wrong documentation (Language Ref) for unicode and str comparison ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29321> ___
[issue29323] Wrong documentation (Library) for unicode and str comparison
New submission from RK-5wWm9h:

PROBLEM (IN BRIEF):

In the currently published 2.7.13 The Python Standard Library (Library Reference manual), section 5.6 "Sequence Types" (https://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange):

"to compare equal, ... the two sequences must be of the same type"

This is an *incorrect (and misleading) statement* for the unicode and str case.

PROPOSED FIX:

Current full paragraph:

"Sequence types also support comparisons. In particular, tuples and lists are compared lexicographically by comparing corresponding elements. This means that to compare equal, every element must compare equal and the two sequences must be of the same type and have the same length. (For full details see Comparisons in the language reference.)"

Proposed replacement text:

"Sequence types also support comparisons. In particular, tuples and lists are compared lexicographically by comparing corresponding elements. This means that to compare equal, every element must compare equal and the two sequences must be of the same type and have the same length. (Unicode and str are treated as the same type here; for full details see Comparisons in the language reference.)"

DETAILS, JUSTIFICATION, CORRECTNESS, ETC:

The current incorrect text is really misleading. The behaviour that a str and a unicode object -- despite being objects of different types -- may compare equal, is explicitly stated in the 2.7.13 The Python Language Reference manual, section 5.9 "Comparisons" (https://docs.python.org/2/reference/expressions.html#comparisons):

"* Strings are compared lexicographically using the numeric equivalents (the result of the built-in function ord()) of their characters. Unicode and 8-bit strings are fully interoperable in this behavior. [4]"

(Aside: Incidentally, an earlier paragraph in the Language Ref fails to cover the unicode and str case; see separately filed bug Issue 29321.)
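[Editor's illustration, not part of the report] The interoperability described here is specific to Python 2, where `u'abc' == 'abc'` is True despite the differing types. Python 3 removed it, so the nearest analogue — text versus bytes — is always unequal:

```python
# Python 3 contrast with the Python 2 behavior described above:
# str and bytes are distinct types that never compare equal,
# whereas Python 2's unicode and str were interoperable.
assert "abc" == "abc"               # same type: element-wise comparison
assert ("abc" == b"abc") is False   # str vs bytes: unequal, no coercion
```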
-- assignee: docs@python components: Documentation messages: 285792 nosy: RK-5wWm9h, docs@python priority: normal severity: normal status: open title: Wrong documentation (Library) for unicode and str comparison type: behavior versions: Python 2.7 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29323> ___
[issue29321] Wrong documentation for unicode and str comparison
New submission from RK-5wWm9h:

PROBLEM (IN BRIEF):

In the currently published 2.7.13 The Python Language Reference manual, section 5.9 "Comparisons" (https://docs.python.org/2/reference/expressions.html#comparisons):

"If both are numbers, they are converted to a common type. Otherwise, objects of different types always compare unequal..."

This is an *incorrect (and misleading) statement*.

PROPOSED FIX:

Insert a new sentence, to give this resulting text:

"If both are numbers, they are converted to a common type. If one is str and the other unicode, they are compared as below. Otherwise, objects of different types always compare unequal..."

DETAILS, JUSTIFICATION, CORRECTNESS, ETC:

The behaviour that a str and a unicode object -- despite being objects of different types -- may compare equal, is explicitly stated several paragraphs later:

"* Strings are compared lexicographically using the numeric equivalents (the result of the built-in function ord()) of their characters. Unicode and 8-bit strings are fully interoperable in this behavior. [4]"

Text in the 2.7.13 The Python Standard Library (Library Reference manual) is careful to cover this unicode - str case (https://docs.python.org/2/library/stdtypes.html#comparisons):

"Objects of different types, except different numeric types and different string types, never compare equal; such objects are ordered consistently but arbitrarily (so that sorting a heterogeneous array yields a consistent result)."

IMPACT AND RELATED BUG:

The current incorrect text is really misleading for anyone reading the Language Ref. It's easy to see the categorical statement and stop reading, because your question has been answered. Further, the Library Ref section about unicode and str (The Python Standard Library (Library Reference manual), section 5.6 "Sequence Types": https://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange) links here. Link text: "(For full details see Comparisons in the language reference.)"

(Aside: That paragraph has a mistake similar to this present bug: it says "to compare equal, every element must compare equal and the two sequences must be of the same type"; I'll file a separate bug for it.)

PS: First time reporting a Python bug; following https://docs.python.org/2/bugs.html. Hope I did ok! :-)

------ assignee: docs@python components: Documentation messages: 285790 nosy: RK-5wWm9h, docs@python priority: normal severity: normal status: open title: Wrong documentation for unicode and str comparison type: behavior versions: Python 2.7 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue29321> ___
[issue28147] Unbounded memory growth resizing split-table dicts
Min RK added the comment: This affects IPython (specifically the traitlets component), which is what prompted the report. We were able to push out a release of traitlets with a workaround for the bug (4.3.1), but earlier versions of IPython / traitlets will still be affected (all IPython >= 4, traitlets 4.0 <= v < 4.3.1). So I hope 3.6.0 will be released with the fix attached here. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue28147> ___
[issue28384] hmac cannot be used with shake algorithms
New submission from Min RK:

HMAC digest methods call inner.digest() with no arguments, but new-in-3.6 shake algorithms require a length argument.

Possible solutions:

1. add an optional length argument to HMAC.[hex]digest, and pass it through to the inner hash object
2. set hmac.digest_size, and use that to pass through to the inner hash object if the inner hash object has digest_size == 0
3. give shake hashers a default value for `length` in digest methods (logically 32 for shake_256, 16 for shake_128, I think)

test:

```
import hmac, hashlib
h = hmac.HMAC(b'secret', digestmod=hashlib.shake_256)
h.hexdigest()  # raises on self.inner.digest(): requires length argument
```

-- messages: 278235 nosy: minrk priority: normal severity: normal status: open title: hmac cannot be used with shake algorithms versions: Python 3.6, Python 3.7 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue28384> ___
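[Editor's illustration] Solution 1 — an explicit output length passed through to the inner digest — can be sketched outside the hmac module with a hand-rolled RFC 2104 construction. This is my illustration only, not a standardized MAC (KMAC is the NIST-specified MAC for the SHAKE family):

```python
import hashlib


def hmac_shake256(key: bytes, msg: bytes, length: int = 32) -> bytes:
    """HMAC(K, m) = H((K ^ opad) || H((K ^ ipad) || m)), with SHAKE256
    as H and an explicit output length, per solution 1 above."""
    block_size = 136  # SHAKE256 rate in bytes (hashlib.shake_256().block_size)
    if len(key) > block_size:
        # Over-long keys are hashed down first, as in RFC 2104.
        key = hashlib.shake_256(key).digest(length)
    key = key.ljust(block_size, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.shake_256(ipad + msg).digest(length)
    return hashlib.shake_256(opad + inner).digest(length)
```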
[issue28147] Unbounded memory growth resizing split-table dicts
Min RK added the comment: I pulled just now and saw changes in dictobject.c, and just wanted to confirm the memory growth bug is still in changeset 56294e03ad89 (I think I used the right hash, this time). -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue28147> ___
[issue28147] Unbounded memory growth resizing split-table dicts
Min RK added the comment:

> Ah, is the leak happen in 3.6b1?

The leak happens in 3.6b1 and master as of an hour ago (git: 3c06edfe9463f1cf81bc34b702f165ad71ff79b8, hg: r103797)

-- title: Memory leak in new 3.6 dictionary resize -> Unbounded memory growth resizing split-table dicts ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue28147> ___
[issue28147] Memory leak in new 3.6 dictionary resize
Min RK added the comment:

> dictresize() is called for converting split table to combined table. How is it triggered many times?

Every `self.__dict__.pop` triggers a resize. According to https://www.python.org/dev/peps/pep-0412/#split-table-dictionaries, `obj.__dict__` is always a split-table dict. I do not understand the dict implementation enough to say precisely why, but `pop` forces a recombine via `resize`, because split-table dicts don't support deletion. In `dict_resize`, due to a `<=minused` condition, the size is guaranteed to at least double every time `dict_resize` is called. It would appear that after this, `__dict__` is again forced to be a split-table dict, though I'm not sure how or where this happens, but good old-fashioned printf debugging shows that `dict_resize` is called for every `__dict__.pop`, because _PyDict_HasSplitTable is true every time pop is called.

> In your test code, which loop cause leak? new instance loop or re-use instance loop?

Both loops cause the leak. If the `pop_attr()` is not in `__init__`, then only the re-used instance has the leak. If `pop_attr` is in `__init__`, then it happens across instances as well. I will try to add more comments in the code to make this clearer. Does anyone have a handy way to create a split-table dict other than on `obj.__dict__`?

> Please add an unit test which triggers the memory leak

I should not have used the term memory leak, and have updated the title to be more precise. It is not memory allocated without a corresponding free; instead it is unbounded growth of the memory owned by a split-table dict. Cleaning up the object does indeed clean up the memory associated with it. The included test exercises the bug with several iterations. Running the test several times with only one iteration would not exercise the bug.
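[Editor's illustration] A rough way to observe what the test exercises (my sketch, with hypothetical names): watch `sys.getsizeof(obj.__dict__)` across repeated pop/re-add cycles. On an affected 3.6 build the size keeps doubling; on a fixed interpreter it stays bounded:

```python
import sys


class Point:
    # Instances start with a shared-key (split-table) __dict__.
    def __init__(self):
        self.x = 1
        self.y = 2


obj = Point()
sizes = []
for _ in range(10):
    obj.__dict__.pop("x")   # deletion forces the split table to combine
    obj.x = 1               # restore the attribute for the next round
    sizes.append(sys.getsizeof(obj.__dict__))
print(sizes)
```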
-- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue28147> ___
[issue28147] Memory leak in new 3.6 dictionary resize
Min RK added the comment: I can add the cpython_only decorator, but I'm not sure it is the right thing to do. I would expect the code in the test to pass on any Python implementation, which would suggest that it should not be cpython_only, right? If you still think so, I'll add it. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue28147> ___
[issue28147] Memory leak in new 3.6 dictionary resize
Changes by Min RK <benjami...@gmail.com>: -- title: Memory leak in dictionary resize -> Memory leak in new 3.6 dictionary resize ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue28147> ___
[issue28147] Memory leak in dictionary resize
Min RK added the comment:

This patch fixes the memory leak in split-dict resizing. Each time dict_resize is called, it gets a new, larger size `> minused`. If this is triggered many times, it will keep growing in size by a factor of two each time, as the previous size is passed as minused for the next call. Set the lower bound at minused (inclusive), rather than exclusive, so that the size does not continue to increase for repeated calls.

A test is added to test_dict.py based on the earlier test script, but if someone has a simpler way to trigger the split-dict resize events, I'd be happy to see it.

-- keywords: +patch Added file: http://bugs.python.org/file44659/0001-Avoid-unbounded-growth-in-dict_resize.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue28147> ___
[issue28147] Memory leak in dictionary resize
New submission from Min RK:

There is a memory leak in the new dictionary resizing in 3.6, which can cause memory exhaustion in just a few iterations. I don't fully understand the details of the bug, but it happens when resizing a dict with a split table several times. The only way that I have found to trigger this is by popping items off of an object's `__dict__` repeatedly.

I've attached a script to illustrate the issue. Be careful with it, because it will eat up all your memory if you don't interrupt it.

-- components: Interpreter Core files: test-dict-pop.py messages: 276418 nosy: minrk priority: normal severity: normal status: open title: Memory leak in dictionary resize type: crash versions: Python 3.6, Python 3.7 Added file: http://bugs.python.org/file44658/test-dict-pop.py ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue28147> ___
[issue27583] configparser: modifying default_section at runtime
Changes by rk <r...@simple-is-better.org>: Removed file: http://bugs.python.org/file43815/bug_configparser_default_section.py ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue27583> ___
[issue27583] configparser: modifying default_section at runtime
rk added the comment: Verified/tested with Python 2.7.9, 3.2.6, 3.3.6, 3.4.2, 3.5.1. The bug exists in all versions, so I've added 3.2, 3.3, 3.4 again. I've also attached an updated testcase, which now works in both Python 2 and Python 3. -- versions: +Python 3.2, Python 3.3, Python 3.4 Added file: http://bugs.python.org/file43815/bug_configparser_default_section.py ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue27583> ___
[issue27583] configparser: modifying default_section at runtime
rk added the comment: (removed Python 2.7, since default_section was not supported there) -- versions: -Python 2.7 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue27583> ___
[issue27583] configparser: modifying default_section at runtime
New submission from rk:

Modifying "default_section" in the configparser at runtime does not behave as described. The documentation says about default_section:

"When default_section is given, it specifies the name for the special section holding default values for other sections and interpolation purposes (normally named "DEFAULT"). This value can be retrieved and changed on runtime using the default_section instance attribute." [https://docs.python.org/3/library/configparser.html]

So, if I modify default_section at runtime, the default values for other sections should then come from the new default_section. But this is not the case. Instead, the default values still come from self._defaults, which was set by self._read.

So, this is either a bug in the library or a bug in the documentation. I've attached a testcase.

-- components: Library (Lib) files: bug_configparser_default_section.py messages: 270918 nosy: rk priority: normal severity: normal status: open title: configparser: modifying default_section at runtime type: behavior versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file43808/bug_configparser_default_section.py ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue27583> ___
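[Editor's illustration] A minimal inline version of the reported behavior (my reconstruction, with hypothetical section names — not the attached testcase): after reassigning default_section, fallback values still come from the [DEFAULT] section captured at read time:

```python
import configparser

cp = configparser.ConfigParser()
cp.read_string("""
[DEFAULT]
key = from_default
[COMMON]
key = from_common
[sec]
""")

# Re-point the default section at runtime, as the docs say is possible.
cp.default_section = "COMMON"

# One might expect 'from_common' here, but _read() already copied
# [DEFAULT] into the parser's internal defaults, so the old values win.
print(cp.get("sec", "key"))  # from_default
```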
[issue25544] cleanup temporary files in distutils.has_function
Min RK added the comment: Updated patch to use a file context manager on the temporary source file. It should apply cleanly on current default (778ccbe3cf74). -- Added file: http://bugs.python.org/file42399/0001-cleanup-tempfiles-in-has_function.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25544> ___
[issue25544] cleanup temporary files in distutils.has_function
Min RK added the comment: Absolutely, I'll try to do that tomorrow. -- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25544> ___
[issue26153] PyImport_GetModuleDict: no module dictionary! when `__del__` triggers a warning
Changes by Min RK <benjami...@gmail.com>: Added file: http://bugs.python.org/file41659/main.py ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue26153> ___
[issue26153] PyImport_GetModuleDict: no module dictionary! when `__del__` triggers a warning
Changes by Min RK <benjami...@gmail.com>: Added file: http://bugs.python.org/file41658/b.py ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue26153> ___
[issue26153] PyImport_GetModuleDict: no module dictionary! when `__del__` triggers a warning
Changes by Min RK <benjami...@gmail.com>: Added file: http://bugs.python.org/file41657/a.py ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue26153> ___
[issue26153] PyImport_GetModuleDict: no module dictionary! when `__del__` triggers a warning
New submission from Min RK:

"PyImport_GetModuleDict: no module dictionary!" can be raised during interpreter shutdown if a `__del__` method results in a warning. This only happens on Python 3.5.

The prompting case is IPython 4.0.2 and traitlets 4.1.0. An IPython ExtensionManager calls `self.shell.on_trait_change` during its `__del__` to unregister a listener. That `on_trait_change` method is deprecated, and tries to display a DeprecationWarning. The call to `warnings.warn` results in:

    Fatal Python error: PyImport_GetModuleDict: no module dictionary!

There appear to be races involved, because the crash happens with inconsistent frequency, sometimes quite rarely. I've tried to put together a simple minimal test case, but I cannot reproduce the crash outside of IPython. I can, however, reproduce inconsistent behavior where a UserWarning displayed during `__del__` sometimes fails with:

    ImportError: import of 'linecache' halted; None in sys.modules

and sometimes the exact same code succeeds, showing the error:

    ~/dev/tmp/del-warn/a.py:9: DeprecationWarning: I don't cleanup anymore
      self.b.cleanup()

and sometimes it shows the warning but not the frame:

    ~/dev/tmp/del-warn/a.py:9: DeprecationWarning: I don't cleanup anymore

-- components: Interpreter Core messages: 258586 nosy: minrk priority: normal severity: normal status: open title: PyImport_GetModuleDict: no module dictionary! when `__del__` triggers a warning type: crash versions: Python 3.5 ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue26153> ___
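[Editor's illustration] A minimal version of the pattern involved (my sketch — the attached a.py/b.py are more elaborate): a module-level object whose `__del__` emits a warning during interpreter shutdown. On current interpreters this completes cleanly, consistent with the report that the crash is racy and hard to reproduce in isolation:

```python
import subprocess
import sys
import textwrap

child = textwrap.dedent("""
    import warnings

    class Noisy:
        def __del__(self):
            # Runs during interpreter shutdown, when module globals
            # may already be partially torn down.
            warnings.warn("cleanup in __del__", DeprecationWarning)

    keeper = Noisy()  # held at module level until shutdown
""")

proc = subprocess.run(
    [sys.executable, "-W", "always", "-c", child],
    capture_output=True,
    text=True,
)
print("exit code:", proc.returncode)
```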
[issue25544] cleanup temporary files in distutils.has_function
New submission from Min RK: One of the nits noted in http://bugs.python.org/issue717152, which introduced ccompiler.has_function, was that it does not clean up after itself. This patch uses a TemporaryDirectory context to ensure that the files created during has_function are cleaned up. -- components: Distutils files: 0001-cleanup-temporary-files-in-ccompiler.has_function.patch keywords: patch messages: 253993 nosy: dstufft, eric.araujo, minrk priority: normal severity: normal status: open title: cleanup temporary files in distutils.has_function type: enhancement versions: Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file40933/0001-cleanup-temporary-files-in-ccompiler.has_function.patch ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue25544> ___
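[Editor's illustration] The shape of the fix (my sketch, not the actual patch): create the probe's files inside a TemporaryDirectory context so every artifact is removed even if the compile step raises. The `probe_with_cleanup` name and the stand-in body are hypothetical:

```python
import os
import tempfile


def probe_with_cleanup(source: str) -> bool:
    # Stand-in for has_function's compile-and-link probe: all files
    # live in tmpdir, which is deleted when the context exits,
    # whether we return normally or raise.
    with tempfile.TemporaryDirectory() as tmpdir:
        path = os.path.join(tmpdir, "probe.c")
        with open(path, "w") as f:
            f.write(source)
        # ... compile/link would happen here ...
        return os.path.exists(path)


ok = probe_with_cleanup("int main(void) { return 0; }")
```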
[issue24564] shutil.copytree fails when copying NFS to NFS
Min RK added the comment:

On a bit of further investigation, the NFS files have an xattr `system.nfs4_acl`. This can be read, but attempting to write it fails with EINVAL. Attempting to copy from NFS to non-NFS fails with ENOTSUP, which is caught and ignored, but copying from NFS to NFS raises EINVAL, which propagates. Adding `EINVAL` to the ignored errnos would fix the problem, but might hide real failures (I'm not sure about the real failures, but it seems logical).

Since the `copy_function` is customizable to switch between `copy` and `copy2`, making copystat optional on files, perhaps `copystat` should be optional on directories as well.

-- nosy: +minrk ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue24564> ___
[issue24534] disable executing code in .pth files
Min RK added the comment:

> Could you please post an example of where the feature is problematic?

setuptools/easy_install is the major one, which effectively does `sys.path[:0] = pth_contents`, breaking import priority. This has been known to result in adding `/usr/lib/pythonX.Y/dist-packages` to the front of sys.path, giving it higher priority than the stdlib or `--user`-installed packages (I helped a user deal with a completely broken installation that was a result of exactly this last week). The result can often be that `pip list` doesn't accurately describe the versions of packages that are imported. It also causes `pip install -e` to result in completely different import priority from `pip install`, which doesn't use easy-install.pth. Removing the code execution from `easy-install.pth` solves all of these problems.

-- ___ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue24534> ___
[issue24534] disable executing code in .pth files
Min RK added the comment:

Thanks for the feedback, I thought it might be a long shot. I will go back to removing the *use* of the feature everywhere I can find it, since it is so problematic and rarely, if ever, desirable.

> it's an essential feature that has been documented for a very long time
> https://docs.python.org/3.5/library/site.html

The entirety of the documentation of this feature appears to be this sentence on that page:

> Lines starting with import (followed by space or tab) are executed.

No explanation or examples are given, nor any reasoning about the feature or why one might use it.

> This change will basically break all Python applications

This surprises me. Can you elaborate? I have not seen an application rely on executing code in .pth files.

> If you believe that we can smoothly move to a world without .pth files, you should propose an overall plan, step by step.

I have no desire to remove .pth files. .pth files are a fine way to add locations to sys.path. It's .pth files *executing arbitrary code* that's the problem: very surprising, and a source of many errors (especially as the feature is used in setuptools).

--
resolution: -> rejected
status: open -> closed

___
Python tracker <http://bugs.python.org/issue24534>
[issue24534] disable executing code in .pth files
Min RK added the comment:

> Just because a feature can be misused doesn't make it a bad feature.

That's fair. I'm just not aware of any uses of this feature that aren't misuses, hence the patch.

> Perhaps you could submit a fix for this to the setuptools maintainers instead.

Yes, that's definitely the right thing to do, and in fact the first thing I did. It looks like that patch is likely to be merged; it is certainly much less disruptive. That's where I started, and I decided to bring it up to Python itself after reading up on the exploited feature, since it seemed to me like a feature with no use other than misuse. Thanks for your time.

--

___
Python tracker <http://bugs.python.org/issue24534>
[issue24534] disable executing code in .pth files
New submission from Min RK:

.pth files currently allow execution of arbitrary code, triggered by lines starting with `import`. This is a rarely understood and often-misbehaving feature. easy_install has used it to ensure that its packages have the highest priority (even higher than the stdlib). This is one of the unfortunate behaviors that pip undoes from easy_install, in part due to the problems it can cause. There is currently a proposal in setuptools to stop using this, even for easy_install.

The attached patch removes support for executing code in .pth files, issuing an ImportWarning if any such attempts at import are seen.

General question that might result in rejecting this patch: are there any good/valid use cases for .pth files being able to execute arbitrary code at interpreter start time?

If this is accepted, some implementation questions:

1. If the feature is removed in 3.6, should a DeprecationWarning be added to 3.5?
2. Is ImportWarning the right warning class (or should there even be a warning)?

--
components: Installation
files: 0001-disable-executing-code-in-.pth-files.patch
keywords: patch
messages: 245959
nosy: minrk
priority: normal
severity: normal
status: open
title: disable executing code in .pth files
versions: Python 3.6
Added file: http://bugs.python.org/file39836/0001-disable-executing-code-in-.pth-files.patch

___
Python tracker <http://bugs.python.org/issue24534>
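For readers unfamiliar with the feature being discussed, here is a small self-contained demonstration of it, using `site.addsitedir`, which processes .pth files the same way interpreter startup does (the `_pth_code_ran` flag name is made up for the demo):

```python
import os
import site
import sys
import tempfile

site_dir = tempfile.mkdtemp()
with open(os.path.join(site_dir, "demo.pth"), "w") as f:
    # A .pth line starting with "import " is exec()'d, and a semicolon
    # lets arbitrary statements ride along with the import.
    f.write("import sys; sys._pth_code_ran = True\n")

site.addsitedir(site_dir)  # same .pth handling as interpreter startup
print(getattr(sys, "_pth_code_ran", False))  # → True
```

At real startup the same thing happens for any .pth file sitting in a site-packages directory, with no indication to the user that code ran.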
[issue22269] Resolve distutils option conflicts with priorities
Min RK added the comment:

`--prefix` vs `--user` is the only conflict I have encountered, but based on the way it works, it could just as easily happen with any of the various other conflicting options in install (install_base, exec_prefix, etc.), though that might not be very common.

There is a general question: if a Python distributor wants sys.prefix and the default install prefix to differ, what's the right way to do it? Setting it in distutils.cfg makes sense, apart from the conflicting-option issues. Could there be a special `default_prefix` key that gets used as the final fallback (at the end of install.finalize_unix)?

I would really like to avoid having a warning on every install, since a warning suggests that something has been done incorrectly, which in turn suggests that `distutils.cfg` is the wrong place to set the install prefix.

--

___
Python tracker <http://bugs.python.org/issue22269>
[issue22269] Resolve distutils option conflicts with priorities
New submission from Min RK:

Background: some Python distros (OS X, Debian, Homebrew, others) want the default installation prefix for packages to differ from sys.prefix. OS X and Debian accomplish this by patching distutils itself, with special cases like `if sys.prefix == '/System/Library/...': actually_do_something_else()`. Homebrew accomplishes this by writing a `distutils.cfg` with:

```
[install]
prefix = /usr/local
```

The distutils.cfg approach is certainly simpler than shipping a patch, but has its own problems, because distutils doesn't differentiate the *source* of configuration options when resolving conflicts. That means you can't do `python setup.py install --user`, because it fails with the error "can't combine user with prefix, ..." unless you also specify `--prefix=''` to eliminate the conflict.

Proposal: I've included a patch for discussion, which uses the fact that the option_dict tracks the source of each option, and keeps track of the load order of each. In the case of an option conflict, the option that came from the lower-priority source is unset back to None. If they come from the same source, then the same conflict error message is displayed as before.

Even if this patch is rejected as madness, as I expect it might be, official recommendations on how to address the root question of `sys.prefix != install_prefix` would be appreciated.

--
components: Distutils
files: distutils_conflict.patch
keywords: patch
messages: 225843
nosy: dstufft, eric.araujo, minrk
priority: normal
severity: normal
status: open
title: Resolve distutils option conflicts with priorities
versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5
Added file: http://bugs.python.org/file36459/distutils_conflict.patch

___
Python tracker <http://bugs.python.org/issue22269>
[issue21351] refcounts not respected at process exit
Min RK added the comment:

Thanks for clarifying that there is indeed a reference cycle by way of the module; I hadn't realized that.

The gc blocking behavior is exactly why I brought up the issue. The real code where this causes a problem (rather than the toy example I attached) is in pyzmq, where destroying a Context object calls `zmq_term`, a GIL-less C call that will (and should) block until all associated sockets are closed. Deleting a socket closes it. Sockets hold a reference to the Context and not vice versa, which ensured that the sockets were collected before the Context until Python 3.4. Does this mean it is no longer possible to express that one object should be cleaned up before another via references?

I think I will switch to adding an atexit call to set a flag that prevents any cleanup logic during the atexit process, since it does not appear to be possible to ensure deletion of one object before another in 3.4.

--

___
Python tracker <http://bugs.python.org/issue21351>
[issue21351] refcounts not respected at process exit
Min RK added the comment:

Thanks for your help and patience. Closing as slightly unfortunate, but not unintended, behavior.

--
resolution: -> not a bug
status: open -> closed

___
Python tracker <http://bugs.python.org/issue21351>
[issue21351] refcounts not respected at process exit
New submission from Min RK:

Reference counts appear to be ignored at process cleanup, which allows inter-dependent `__del__` methods to hang on exit. The problem does not seem to occur for garbage collection in any other context (functions, etc.).

I have a case where one object must be cleaned up after some descendant objects. Those descendants hold a reference on the parent and not vice versa, which should guarantee that they are cleaned up before the parent. This guarantee is satisfied by Python 3.3 and below, but not 3.4. The attached test script hangs at exit on most (not all) runs on 3.4, but exits cleanly on earlier versions.

--
components: Interpreter Core
files: tstgc.py
messages: 217168
nosy: minrk
priority: normal
severity: normal
status: open
title: refcounts not respected at process exit
versions: Python 3.4
Added file: http://bugs.python.org/file35041/tstgc.py

___
Python tracker <http://bugs.python.org/issue21351>
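A minimal sketch of the dependency pattern being described (this is an illustration, not the attached tstgc.py): children hold the only reference to the parent, so plain reference counting finalizes children first during normal execution.

```python
class Parent:
    def __init__(self):
        self.children_closed = 0

    def __del__(self):
        # Relies on all children having been finalized already --
        # guaranteed by refcounting during normal execution, but not
        # (per this report) at process exit on 3.4.
        pass

class Child:
    def __init__(self, parent):
        self.parent = parent  # child -> parent reference, never the reverse

    def __del__(self):
        self.parent.children_closed += 1

p = Parent()
c = Child(p)
del c  # refcount hits zero: the child is finalized before the parent
print(p.children_closed)  # → 1
```

At process exit the same objects are torn down without honoring this ordering, which is what lets a blocking `__del__` hang.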
Python date time API
Hi,

I am a Python novice, trying to convert a boost::gregorian::date out to Python using the PyDateTime C-API interface. I was getting a core dump because the C-API failed to initialize. The error was:

    AttributeError: module object has no attribute datetime_CAPI

As indicated by the error, a dir(datetime) gives me the following output:

    ['MAXYEAR', 'MINYEAR', '__doc__', '__file__', '__name__', 'date', 'datetime', 'time', 'timedelta', 'tzinfo']

I am not sure why datetime_CAPI is missing. I would appreciate any input.

Regards,
Ramesh
--
http://mail.python.org/mailman/listinfo/python-list
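For what it's worth, the capsule in question can be probed from Python itself. `datetime_CAPI` is the attribute through which the PyDateTime C-API is exported (C extensions bind it with the `PyDateTime_IMPORT` macro at init time); if I remember correctly it was added in Python 2.4, and the dir() output above looks like an older build that predates it:

```python
import datetime

# True on interpreters that ship the PyDateTime C-API capsule
# (all of Python 3); absent on very old 2.x builds.
print(hasattr(datetime, "datetime_CAPI"))
```

If the attribute is missing, upgrading the interpreter is likely the only fix; no amount of extension-side code can import a capsule that isn't there.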
Re: python application ideas.
I need a Python source-code diagrammer that actually works out of the box, to explore all the code already written out there; something like SmallWorlds was to Java before they got rid of it.
Re: proposed Python logo
That's a good try... Can we get some street pros? http://www.graffitifonts.com/
Re: R Paul Johnson is out of the office.
ok, who's been playing with mailman?
perspective on ruby
I apologize if this is a stupid question; I'm asking the Python group for perspective on Ruby, but I don't see how the alternative of going to a Ruby group for a perspective on Ruby is going to do me any good...

I just unpacked and tried out InstantRails, after turning off the local Plone stack. Looking over the IR stack, making the required hacks to the examples, looking at all its pieces (including some of the more powerful PHP support mixed in), and looking at the shipped examples, I had to marvel again at how far behind these folks are compared to something like Zope. They are 10 years behind an integrated platform like that. I just don't get it. The scripted, object-oriented, clean programming language part is done.

I'm more than willing to support RoR if it's being sold as the popular alternative to .NET programming, which it is in some CS curricula (where Java is being thrown out). But all those ENDs are getting on my nerves.

Thx
Re: Thanks from the Java Developer
Me too. I feel like I've been living under a rock. Did all this just happen in the last few years?
newbie: plain old object kernel already built in?
I'm looking to do something like POJO/AOP in Python (ref POJO and AspectJ for Java, CodeFarms for C++ http://www.codefarms.com/ ; esp. see two-layer diagram #2 here: http://incode.sourceforge.net/index.html ).

The thing with a two-layer design and plain old objects is that you need a kernel to manage it, seems to me. JBoss is a POJO kernel in the Java arena. Is this already built into Python somehow? (I had gone looking for unit testing in Python and found it built in, so I figured this question was worth a shot.)

The trouble with existing POJO kernels implemented with aspect syntax is that the aspects are in the language; seems to me you need the second layer to be implemented in an RDBMS if you want to be taken seriously by enterprises.

Thanks,
-Rich
errors when trying to send mail
I've seen another bug submission similar to this. I am using 2.3.4 and I get almost the exact same error. I'm on a Linux box (2.6.9-5.ELsmp) and the same code runs fine on other machines and previous versions of Python. Here's the code snippet:

```
msg = MIMEMultipart()
COMMASPACE = ', '
msg['Subject'] = 'NO Build (' + tstr + ')'
msg['From'] = '[EMAIL PROTECTED]'
msg['To'] = COMMASPACE.join(buildmgrs)
msg['To'] = buildlead
msg.preamble = 'Build Results'
msg.epilogue = ''
mailsrv = smtplib.SMTP('server')
mailsrv.sendmail(buildlead, buildlead, msg.as_string())
time.sleep(5)
mailsrv.close()
```

Here's the error:

```
Traceback (most recent call last):
  File "./testMail.py", line 60, in ?
    mailsrv.sendmail(buildlead, buildlead, msg.as_string(unixfrom=True))
  File "/usr/lib64/python2.3/email/Message.py", line 130, in as_string
    g.flatten(self, unixfrom=unixfrom)
  File "/usr/lib64/python2.3/email/Generator.py", line 102, in flatten
    self._write(msg)
  File "/usr/lib64/python2.3/email/Generator.py", line 137, in _write
    self._write_headers(msg)
  File "/usr/lib64/python2.3/email/Generator.py", line 183, in _write_headers
    header_name=h, continuation_ws='\t').encode()
  File "/usr/lib64/python2.3/email/Header.py", line 415, in encode
    return self._encode_chunks(newchunks, maxlinelen)
  File "/usr/lib64/python2.3/email/Header.py", line 375, in _encode_chunks
    _max_append(chunks, s, maxlinelen, extra)
  File "/usr/lib64/python2.3/email/quopriMIME.py", line 84, in _max_append
    L.append(s.lstrip())
AttributeError: 'list' object has no attribute 'lstrip'
```

Thanks
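A hedged guess at the failure above, based on the bottom frame: `Header` encoding ends up calling `.lstrip()` on a header *value*, which only works if the value is a string. If `buildlead` is a list rather than a string, `msg['To'] = buildlead` stores the list itself and the flatten step blows up exactly like this. Joining before assignment avoids it; the address below is made up, and the import path is the modern one:

```python
from email.mime.multipart import MIMEMultipart  # py3 path for MIMEMultipart

buildlead = ["lead@example.com"]     # hypothetical: a *list* of addresses
msg = MIMEMultipart()
# msg['To'] = buildlead              # a non-string value breaks as_string()
msg['To'] = ', '.join(buildlead)     # header values must be strings
assert isinstance(msg['To'], str)
```

Separately, note that assigning `msg['To']` twice (as the snippet does) *appends* a second To: header rather than replacing the first; use `del msg['To']` before reassigning if a single header is intended.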