[issue47157] bijective invertible map
Jonathan Balloch added the comment: thank you!! On Tue, Mar 29, 2022 at 8:44 PM Raymond Hettinger wrote: > > Raymond Hettinger added the comment: > > This is indeed a duplicate. If needed just use one of implementations on > PyPI https://pypi.org/project/bidict/ > > -- > nosy: +rhettinger > resolution: -> duplicate > stage: -> resolved > status: open -> closed > > ___ > Python tracker > <https://bugs.python.org/issue47157> > ___ > -- ___ Python tracker <https://bugs.python.org/issue47157> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue47157] bijective invertible map
New submission from Jonathan Balloch : It would be powerful to have a native implementation of a bijective map (i.e. a dictionary that hashes only one-to-one), so that either the "key" or the "value" can be used for lookup in O(1) time, with the only cost being the memory overhead of storing twice as many references. Calling the object type "bimap", this could be implemented by supporting a call like bimap.inverse[value] = key, where the 'inverse' attribute is a reference table for the value->key direction. This is an important enhancement because currently the most efficient way to implement this in Python is either: (1) make a custom object type that keeps two dictionaries, one that maps v->k and one that maps k->v, which takes twice as much memory, or (2) an object that has a custom "inverse" lookup call, which will be slower than O(1). In both cases there is no implicit enforcement that values are unique (necessary for a bijection). This should be added to the `collections` library, as it fits well alongside other hashed collections such as "OrderedDict". This will be beneficial to the community because transformations between semantic spaces (e.g. things that cannot be done in NumPy or similar) could be much more efficient, with cleaner and easier-to-read code, if bijective maps were native and used one structure instead of two dictionaries. -- components: Interpreter Core messages: 416304 nosy: jon.balloch priority: normal severity: normal status: open title: bijective invertible map type: enhancement versions: Python 3.11 ___ Python tracker <https://bugs.python.org/issue47157> ___
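A sketch of the requested behaviour (the class name `BiMap` and the `inverse` attribute are illustrative, mirroring the proposal above, not an existing API; internally this still keeps the two dicts of workaround (1), but adds the uniqueness enforcement the submitter asks for):

```python
class BiMap:
    """Minimal bijective map sketch: O(1) lookup in both directions."""

    def __init__(self):
        self._forward = {}   # key -> value
        self.inverse = {}    # value -> key

    def __setitem__(self, key, value):
        # Enforce the bijection: a value may belong to at most one key.
        if value in self.inverse and self.inverse[value] != key:
            raise ValueError(f"value {value!r} is already bound to a key")
        if key in self._forward:
            # Re-binding a key drops its old value from the inverse table.
            del self.inverse[self._forward[key]]
        self._forward[key] = value
        self.inverse[value] = key

    def __getitem__(self, key):
        return self._forward[key]


m = BiMap()
m["a"] = 1
print(m["a"], m.inverse[1])   # 1 a
```

The memory cost is the same as the two-dict workaround; the point of a stdlib type would be bundling the consistency checks in one place.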
[issue46910] Expect IndentationError, get SyntaxError: 'break' outside loop
Jonathan Fine added the comment: My main concern is that the door not be closed on improving the user experience relating to this behaviour of the compiler. This issue was raised as a bug for the compiler (which is C-coded). I'd be very happy for this issue to be closed as 'not a bug' for the compiler, provided the door is left open for Python-coded improvements for the user experience. I suggest that the issue title be changed to: The two-pass compile(bad_src, ...) sometimes does not report first error in bad_src These two changes to the details of closure would be sufficient to meet my concern. I hope they can be accepted. By the way, I see these improvements being done as a third-party pure-Python module outside Python's Standard Library, at least until they've reached a wide measure of community acceptance. -- ___ Python tracker <https://bugs.python.org/issue46910> ___
[issue46910] Expect IndentationError, get SyntaxError: 'break' outside loop
Jonathan Fine added the comment: Many thanks Pablo for the clear explanation. I'd prefer that the issue remain open, as there's an important user experience issue here. I suspect there are other similar examples of how the compiler error messages could be improved. Here's a change that doesn't seem to be too hard, that could fix the problem at hand. The IndentationError occurred at a known location in the input string. So as part of error reporting truncate the input string and try to compile that. In other words, make a good faith attempt to find an earlier error. I've attached a funny_break_error_fix.py which is a first draft implementation of this idea. Here's the output:

===
$ python3 funny_break_error_fix.py funny_break_error.py
unexpected indent (funny_break_error.py, line 6)
Traceback (most recent call last):
  File "funny_break_error_fix.py", line 3, in compile_fix
    compile(source, filename, 'exec')
  File "funny_break_error.py", line 6
    else:
    ^
IndentationError: unexpected indent

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "funny_break_error_fix.py", line 18, in <module>
    compile_fix(src.read(), filename, 'exec')
  File "funny_break_error_fix.py", line 9, in compile_fix
    compile(new_source, filename, 'exec')
  File "funny_break_error.py", line 5
    break
    ^
SyntaxError: 'break' outside loop
===

And in this case we've got hold of the first error (at the cost of compiling part of the source file twice). Many thanks again for the clear explanation, which I found most helpful when formulating the above fix. -- Added file: https://bugs.python.org/file50656/funny_break_error_fix.py ___ Python tracker <https://bugs.python.org/issue46910> ___
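The attached funny_break_error_fix.py is not reproduced in this message; the idea it describes can be sketched as follows (the helper name `compile_fix` and the truncation strategy are assumptions based on the description above, not the attachment's actual code):

```python
def compile_fix(source, filename, mode):
    # First pass: compile normally. If that fails, truncate the source
    # just before the reported error line and compile again, as a
    # good-faith attempt to surface an earlier error first.
    try:
        return compile(source, filename, mode)
    except SyntaxError as err:
        lines = source.splitlines(True)
        truncated = "".join(lines[: (err.lineno or 1) - 1])
        compile(truncated, filename, mode)  # may raise the earlier error
        raise  # truncated source compiled cleanly: re-raise the original
```

This reproduces the two-error chain shown in the transcript: the first compile raises the IndentationError, and compiling the truncated prefix raises the earlier 'break' outside loop error.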
[issue46910] Expect IndentationError, get SyntaxError: 'break' outside loop
New submission from Jonathan Fine : This arises from a request for help made by Nguyễn Ngọc Tiến to the visually impaired programmers lists, see https://www.freelists.org/post/program-l/python,48. Please keep this in mind. Nguyễn asked for help with the syntax error created by

===
count = 0
while count < 1:
 count = count + 1
 print(count)
break
 else:
 print("no break")
===

When I saved this to a file and ran it I got:

===
$ python3.8 funny_break_error.py
  File "funny_break_error.py", line 6
    else:
    ^
IndentationError: unexpected indent
===

However, remove the last two lines and you get the more helpful error

===
$ python3.8 funny_break_error.py
  File "funny_break_error.py", line 5
    break
    ^
SyntaxError: 'break' outside loop
===

Python3.6 and 3.7 also behave as above. Note. I've heard that blind Python programmers prefer a single space to denote indent. I think this is because they hear the leading spaces via a screen reader, rather than see the indent with their eyes. -- components: Parser files: funny_break_error.py messages: 414424 nosy: jfine2358, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Expect IndentationError, get SyntaxError: 'break' outside loop type: behavior versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file50655/funny_break_error.py ___ Python tracker <https://bugs.python.org/issue46910> ___
[issue46802] Wrong result unpacking binary data with ctypes bitfield.
Jonathan added the comment: True, I have to admit that I forgot to search first. That really does look like the same problem, especially when looking at https://bugs.python.org/msg289212. I would say this one can be closed. -- nosy: +helo9 stage: -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue46802> ___
[issue46802] Wrong result unpacking binary data with ctypes bitfield.
Change by Jonathan : -- nosy: -helo9 ___ Python tracker <https://bugs.python.org/issue46802> ___
[issue46802] Wrong result unpacking binary data with ctypes bitfield.
New submission from Jonathan : I have issues unpacking binary data produced by C++. The attached Jupyter notebook shows the problem. It is also uploaded as a GitHub gist: https://gist.github.com/helo9/04125ae67b493e505d5dce4b254a2ccc -- components: ctypes files: ctypes_bitfield_problem.ipynb messages: 413559 nosy: helo9 priority: normal severity: normal status: open title: Wrong result unpacking binary data with ctypes bitfield. type: behavior versions: Python 3.10 Added file: https://bugs.python.org/file50633/ctypes_bitfield_problem.ipynb ___ Python tracker <https://bugs.python.org/issue46802> ___
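The notebook itself is not reproduced here. For readers without the attachment, a self-contained illustration of ctypes bitfield unpacking (a hypothetical one-byte layout, not the reporter's actual struct, and assuming a little-endian host where the first field occupies the least significant bits):

```python
import ctypes

class Flags(ctypes.Structure):
    # Two 4-bit fields packed into one byte. With the native (GCC-style)
    # layout on little-endian platforms, the first declared field takes
    # the low nibble and the second the high nibble.
    _pack_ = 1
    _fields_ = [
        ("lo", ctypes.c_uint8, 4),
        ("hi", ctypes.c_uint8, 4),
    ]

f = Flags.from_buffer_copy(bytes([0xAB]))
print(hex(f.lo), hex(f.hi))   # 0xb 0xa
```

Note that the long-standing duplicate issue mentioned in the follow-up concerns mixed-size bitfields crossing storage units; this single-byte case behaves consistently.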
[issue46667] SequenceMatcher & autojunk - false negative
Jonathan added the comment: I still don't get how UNIQUESTRING is the longest even with autojunk=True, but that's an implementation detail and I'll trust you that it's working as expected. Given this, I'd suggest the following:

* `autojunk=False` should be the default unless there's some reason to believe SequenceMatcher is mostly used for code comparisons.
* If - for whatever reason - the default can't be changed, I'd suggest a nice big docs "Warning" (at a minimum a "Note") saying something like "The default autojunk=True is not suitable for normal string comparison. See autojunk for more information".
* A human-friendly doc explanation for autojunk. The current explanation is only going to be helpful to the tiny fraction of users who understand the algorithm. Your explanation is a good start: "Autojunk was introduced as a way to greatly speed comparing files of code, viewing them as sequences of lines. But it more often backfires when comparing strings (viewed as sequences of characters)"

Put simply: the current docs aren't helpful to users who don't have text matching expertise, nor do they emphasise the huge caveat that autojunk=True raises. -- ___ Python tracker <https://bugs.python.org/issue46667> ___
[issue46667] SequenceMatcher & autojunk - false negative
Jonathan added the comment: Gah. I mean 0.008 in both directions. I'm just going to be quiet now. :-) -- ___ Python tracker <https://bugs.python.org/issue46667> ___
[issue46667] SequenceMatcher & autojunk - false negative
Jonathan added the comment: (Like the idiot I am, the example code is wrong. The `autojunk` parameter should *not* be set for either of them to get the stated wrong results.) In place of "UNIQUESTRING", any unique 3-character string triggers it (QQQ, EEE, ZQU...). And in those cases you get a ratio of 0.008! (and 0.993 in the other direction!) -- ___ Python tracker <https://bugs.python.org/issue46667> ___
[issue46667] SequenceMatcher & autojunk - false negative
New submission from Jonathan : The following two strings are identical other than the text "UNIQUESTRING". UNIQUESTRING is at the start of first and at the end of second. Running the below gives the following output:

0.99830220713073
0.99830220713073
0.023769100169779286 # ratio

0.99830220713073
0.99830220713073
0.023769100169779286 # ratio

As you can see, Ratio is basically 0. Remove either of the UNIQUESTRING pieces and it goes up to 0.98 (correct)... Remove both and you get 1.0 (correct)

```
from difflib import SequenceMatcher

first = """
UNIQUESTRING
Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum
"""

second = """
Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum
UNIQUESTRING
"""

sm = SequenceMatcher(None, first, second, autojunk=False)
print(sm.real_quick_ratio())
print(sm.quick_ratio())
print(sm.ratio())
print()

sm2 = SequenceMatcher(None, second, first, autojunk=False)
print(sm2.real_quick_ratio())
print(sm2.quick_ratio())
print(sm2.ratio())
```

If I add `autojunk=False`, then I get a correct looking ratio (0.98...), however from my reading of the autojunk docs, UNIQUESTRING shouldn't be triggering it. Furthermore, looking in the code, as far as I can see autojunk is having no effect... Autojunk considers these items to be "popular" in that string:

`{'n', 'p', 'a', 'h', 'e', 'u', 'I', 'r', 'k', 'g', 'y', 'm', 'c', 'd', 't', 'l', 'o', 's', ' ', 'i'}`

If I remove UNIQUESTRING from `first`, this is the autojunk popular set:

`{'c', 'p', 'a', 'u', 'r', 'm', 'k', 'g', 'I', 'd', ' ', 'o', 'h', 't', 'e', 'i', 'l', 's', 'y', 'n'}`

They're identical! In both scenarios, `b2j` is also identical. I don't pretend to understand what the module is doing in any detail, but this certainly seems like a false positive/negative. Python 3.8.10 -- components: Library (Lib) messages: 412673 nosy: jonathan-lp priority: normal severity: normal status: open title: SequenceMatcher & autojunk - false negative type: behavior versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue46667> ___
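For reference, the autojunk heuristic marks an element of the second sequence as "popular" junk only when the sequence has at least 200 elements and the element accounts for more than roughly 1% of them. A standalone mirror of that rule (a sketch of the private logic, not the module's public API) is:

```python
from collections import Counter

def popular_elements(b):
    # Mirror of difflib's autojunk rule: applies only when len(b) >= 200;
    # an element is "popular" if it occurs more than len(b)//100 + 1 times.
    n = len(b)
    if n < 200:
        return set()
    ntest = n // 100 + 1
    return {elt for elt, cnt in Counter(b).items() if cnt > ntest}

text = "ab" * 150   # 300 chars: 'a' and 'b' each occur 150 > 4 times
print(sorted(popular_elements(text)))   # ['a', 'b']
```

This explains the identical "popular" sets in the report: every letter of the Lorem Ipsum body occurs far more than 1% of the time with or without UNIQUESTRING, while the three characters of UNIQUESTRING itself never cross the threshold.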
[issue46181] Destroying an expanded Combobox prevents Entry focus until Alt+Tab
Jonathan Lahav added the comment: Here's a discussion about the issue. I asked about it in comp.lang.tcl: https://groups.google.com/g/comp.lang.tcl/c/C-uQIH-wP5w Someone there explains what's happening. -- ___ Python tracker <https://bugs.python.org/issue46181> ___
[issue46181] Destroying an expanded Combobox prevents Entry focus until Alt+Tab
New submission from Jonathan Lahav : Happens on Windows. Observation: When an expanded Combobox is destroyed, widgets in the window can't get focus until Alt+Tab forth and back. Buttons can still be clicked, but focus can't be obtained by widgets (entries, for example), neither by clicking nor by the Tab or arrow keys. The attached file contains a minimal reproduction example. Motivation: I develop the GUI for a complex application at work which needs to recreate its GUI layout upon a combobox selection, thus destroying the combobox as well. -- components: Tkinter files: combobug.py messages: 409196 nosy: j.lahav priority: normal severity: normal status: open title: Destroying an expanded Combobox prevents Entry focus until Alt+Tab type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file50522/combobug.py ___ Python tracker <https://bugs.python.org/issue46181> ___
[issue2628] ftplib Persistent data connection
Jonathan Bell added the comment: I should rephrase: There doesn't seem to be a practical way to verify BLOCK transmission mode against actual servers in the wild. As the Wikipedia article that Giampaolo referenced points out, BLOCK mode is a rarity that was primarily supported only by mainframe and minicomputer systems. Any compliant server not supporting BLOCK should respond with a non-200 response. The PR sends its request to enter BLOCK mode with self.voidcmd(), which handles non-200 responses by raising error_reply. When I originally wrote that patch in 2008, such a system was running on a DEC Alpha under OpenVMS. Within months of the first test suite appearing for ftplib, that same vendor replaced their systems. The new server had no BLOCK transmission support, but was capable of handling multiple consecutive passive mode STREAM data connections without fault. Even at the time, I couldn't find any other freely available FTP servers supporting BLOCK. But STREAM was and continues to be the standard. Essentially this means that any changes to the existing PR cannot be verified to work properly with actual servers. -- ___ Python tracker <https://bugs.python.org/issue2628> ___
[issue2628] ftplib Persistent data connection
Jonathan Bell added the comment: No practical method exists to verify BLOCK transmission mode, which as mentioned earlier, was rarely implemented even when this issue was opened. Given that reality, I'm inclined to close this issue. -- ___ Python tracker <https://bugs.python.org/issue2628> ___
[issue2628] ftplib Persistent data connection
Jonathan Bell added the comment: This issue is 13 years old. The original 2008 patch was used in a production environment against an OpenVMS server identifying itself as MadGoat. That use case involved downloading documents only, and no write permission was available. Therefore the patch only supports RETR. See the debug.log file attached to this issue for the server interaction. I no longer have a need for BLOCK mode, and don't know what modern servers would support it. mikecmcleod revived this issue so perhaps they can provide some ability for testing, or perspective on the current needs. The PR updates the patch to Python 3, and includes a test written against the minimal changes required for that 2.7->3.x update. -- ___ Python tracker <https://bugs.python.org/issue2628> ___
[issue2628] ftplib Persistent data connection
Change by Jonathan Bell : -- pull_requests: +27604 stage: test needed -> patch review pull_request: https://github.com/python/cpython/pull/29337 ___ Python tracker <https://bugs.python.org/issue2628> ___
[issue2628] ftplib Persistent data connection
Jonathan Bell added the comment: The CLA is signed, and I'm again able to work on this. I was able to update this locally for Python 3 with a minimal test case. What specifically were you looking for? -- ___ Python tracker <https://bugs.python.org/issue2628> ___
[issue45038] Bugs
New submission from Jonathan Isaac : Jonathan Isaac Sent with Aqua Mail for Android https://www.mobisystems.com/aqua-mail -- messages: 400479 nosy: bonesisaac1982 priority: normal severity: normal status: open title: Bugs ___ Python tracker <https://bugs.python.org/issue45038> ___
[issue45037] theme-change.py for tkinter lib
Jonathan Isaac added the comment: Bugs -- components: +Parser nosy: +lys.nikolaou, pablogsal type: -> crash versions: +Python 3.11, Python 3.6 ___ Python tracker <https://bugs.python.org/issue45037> ___
[issue45037] theme-change.py for tkinter lib
Jonathan Isaac added the comment: Get the code! -- nosy: +bonesisaac1982 ___ Python tracker <https://bugs.python.org/issue45037> ___
[issue44623] help(open('/dev/zero').writelines) gives no help
Jonathan Fine added the comment: I used my default Python, which is Python 3.6. However, with 3.7 and 3.8 I get the same as Paul. So I'm closing this as 'not a bug' (as there's no 'already fixed' option for closing). -- resolution: works for me -> not a bug status: pending -> open ___ Python tracker <https://bugs.python.org/issue44623> ___
[issue44623] help(open('/dev/zero').writelines) gives no help
New submission from Jonathan Fine : On Linux

>>> help(open('/dev/zero').writelines)

gives

However https://docs.python.org/3/library/io.html#io.IOBase.writelines gives

Write a list of lines to the stream. Line separators are not added, so it is usual for each of the lines provided to have a line separator at the end.

See also request that writelines support a line separator: https://mail.python.org/archives/list/python-id...@python.org/thread/A5FT7SVZBYAJJTIWQFTFUGNSKMVQNPVF/#A5FT7SVZBYAJJTIWQFTFUGNSKMVQNPVF -- assignee: docs@python components: Documentation messages: 397414 nosy: docs@python, jfine2358 priority: normal severity: normal status: open title: help(open('/dev/zero').writelines) gives no help type: enhancement versions: Python 3.11 ___ Python tracker <https://bugs.python.org/issue44623> ___
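On interpreters where this is fixed, help() has content to show because the C-level method carries a docstring; a quick way to check the underlying attribute (which help() renders) without even opening /dev/zero:

```python
import io

# help() displays __doc__; when this is None, help() shows nothing useful.
doc = io.IOBase.writelines.__doc__
print(doc is not None and "lines" in doc)   # True on fixed versions
```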
[issue34629] Python3 regression for urllib(2).urlopen(...).fp for chunked http responses
Jonathan Schweder added the comment: Hello @tkruse, I have done some research and found that when using the chunked transfer encoding [1], each chunk is preceded by its size in bytes, something that really does happen if you check the content of one downloaded file from the example you provided [2]. So far, I would say that this is not a bug; it is just how the transfer encoding works. [1]: https://en.wikipedia.org/wiki/Chunked_transfer_encoding [2]: https://gist.github.com/jaswdr/95b2adc519d986c00b17f6572d470f2a -- nosy: +jaswdr ___ Python tracker <https://bugs.python.org/issue34629> ___
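To make the framing concrete, the chunked format described in [1] can be decoded by hand. This is a minimal sketch (no trailers, no chunk extensions), not anything urllib uses internally:

```python
def decode_chunked(data: bytes) -> bytes:
    # Each chunk is "<hex size>\r\n<payload>\r\n"; the stream ends with
    # a zero-size chunk. The size lines are exactly the framing bytes
    # that show up when the raw body is read without dechunking.
    out = bytearray()
    i = 0
    while True:
        j = data.index(b"\r\n", i)
        size = int(data[i:j], 16)
        if size == 0:
            break
        out += data[j + 2 : j + 2 + size]
        i = j + 2 + size + 2   # skip payload and its trailing CRLF
    return bytes(out)

raw = b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"
print(decode_chunked(raw))   # b'Wikipedia'
```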
[issue38193] http.client should be "runnable" like http.server
Change by Jonathan Schweder : -- keywords: +patch nosy: +jaswdr nosy_count: 1.0 -> 2.0 pull_requests: +25361 stage: -> patch review pull_request: https://github.com/python/cpython/pull/26775 ___ Python tracker <https://bugs.python.org/issue38193> ___
[issue40938] urllib.parse.urlunsplit makes relative path to absolute (http:g -> http:///g)
Jonathan Schweder added the comment: Not exactly, in the RFC example they use a/b/c for the path, but when using http:g there is no nested path, so it should be http:///g, no? -- ___ Python tracker <https://bugs.python.org/issue40938> ___
[issue40938] urllib.parse.urlunsplit makes relative path to absolute (http:g -> http:///g)
Jonathan Schweder added the comment: @op368 I don't think that this is a bug, [1] literally uses this exact example and shows the expected behaviour. [1] https://datatracker.ietf.org/doc/html/rfc3986#section-5.4.2 -- nosy: +jaswdr ___ Python tracker <https://bugs.python.org/issue40938> ___
[issue43813] Denial of service on http.server module with large request method.
Jonathan Schweder added the comment: @demonia you are more than welcome to send a PR; send it and add a reference to this issue, so it can be reviewed. -- nosy: +jaswdr ___ Python tracker <https://bugs.python.org/issue43813> ___
[issue44104] http.cookies.CookieError: Illegal key
Jonathan Schweder added the comment: Simple example to reproduce the issue:

from http import cookies

C = cookies.SimpleCookie()
C["ys-api/mpegts/service"] = "blabla"
print(C.output())

@ra1nb0w so far as I have found [1][2], the "/" is not a valid character for the cookie name; [3] defines the list of valid characters and [4] is where the exception is raised. I also found that, even with the RFC, browsers have different rules for cookie name definitions; this could be the reason why Python has, for example, the ":" character in the list. My conclusion is that the rule for the cookie name is not well-defined, there are some ambiguities here and there, but if we consider purely this case and the RFC, the "/" still is not a valid character for the cookie name, so I guess the best option for you is to filter out any http.cookies.CookieError that happens. [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#attributes [2] https://datatracker.ietf.org/doc/html/rfc2616#section-2.2 [3] https://github.com/python/cpython/blob/main/Lib/http/cookies.py#L162 [4] https://github.com/python/cpython/blob/main/Lib/http/cookies.py#L353 -- nosy: +jaswdr ___ Python tracker <https://bugs.python.org/issue44104> ___
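A sketch of the suggested filtering (the helper name `set_cookie_safe` is hypothetical, not part of the stdlib):

```python
from http import cookies

def set_cookie_safe(jar: cookies.SimpleCookie, key: str, value: str) -> bool:
    # Swallow CookieError for names the parser rejects (such as names
    # containing "/"), reporting success or failure to the caller.
    try:
        jar[key] = value
        return True
    except cookies.CookieError:
        return False

jar = cookies.SimpleCookie()
set_cookie_safe(jar, "session", "abc")               # accepted
set_cookie_safe(jar, "ys-api/mpegts/service", "x")   # rejected, no exception
```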
[issue44107] HTTPServer can't close http client completely
Jonathan Schweder added the comment: @ueJone according to the RFC (https://datatracker.ietf.org/doc/html/rfc6455#section-1.4) the FIN/ACK is not normative; in other words, it is recommended but not required. I've checked the syscalls of the server, see them below:

```
...
1561 15143 write(2, "127.0.0.1 - - [11/May/2021 20:08"..., 60) = 60$
1562 15143 sendto(4, "HTTP/1.0 200 OK\r\nServer: SimpleH"..., 154, 0, NULL, 0) = 154$
1563 15143 sendto(4, "
```

___ Python tracker <https://bugs.python.org/issue44107> ___
[issue43742] tcp_echo_client in asyncio streams example does not work. Hangs for ever at reaser.read()
Change by Jonathan Schweder : -- keywords: +patch pull_requests: +24563 stage: -> patch review pull_request: https://github.com/python/cpython/pull/25889 ___ Python tracker <https://bugs.python.org/issue43742> ___
[issue43742] tcp_echo_client in asyncio streams example does not work. Hangs for ever at reaser.read()
Jonathan Schweder added the comment: @jcolo Awesome to hear that you were able to run the example. In fact I fell into the same trap, thinking that the example should carry both the server and the client side. I guess we can improve the documentation to avoid it; I'll send a PR to make the improvement. -- ___ Python tracker <https://bugs.python.org/issue43742> ___
[issue43742] tcp_echo_client in asyncio streams example does not work. Hangs for ever at reaser.read()
Jonathan Schweder added the comment: I was able to execute the example in Debian 10 + Python 3.10+ Did you execute the server too? You need to create two files, one for the client code and one for the server code. The server as specified by the example should be something like the code below; try to save it to a file, then execute it, and after that execute the client example that you have cited.

import asyncio

async def handle_echo(reader, writer):
    data = await reader.read(100)
    message = data.decode()
    addr = writer.get_extra_info('peername')
    print(f"Received {message!r} from {addr!r}")
    print(f"Send: {message!r}")
    writer.write(data)
    await writer.drain()
    print("Close the connection")
    writer.close()

async def main():
    server = await asyncio.start_server(
        handle_echo, '127.0.0.1', )
    addr = server.sockets[0].getsockname()
    print(f'Serving on {addr}')
    async with server:
        await server.serve_forever()

asyncio.run(main())

-- nosy: +jaswdr ___ Python tracker <https://bugs.python.org/issue43742> ___
[issue43806] asyncio.StreamReader hangs when reading from pipe and other process exits unexpectedly
Jonathan Schweder added the comment: @kormang this is expected behaviour; it is a problem even at the OS level, because it is impossible to know when the reader needs to stop waiting. The best option here is to implement some timeout mechanism. -- nosy: +jaswdr ___ Python tracker <https://bugs.python.org/issue43806> ___
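A sketch of such a timeout mechanism using `asyncio.wait_for` (the helper name and the defaults are illustrative):

```python
import asyncio

async def read_with_timeout(reader: asyncio.StreamReader,
                            nbytes: int = 100,
                            timeout: float = 5.0) -> bytes:
    # Bound the read so the coroutine cannot hang forever when the
    # process on the other end of the pipe dies without closing it.
    try:
        return await asyncio.wait_for(reader.read(nbytes), timeout)
    except asyncio.TimeoutError:
        return b""
```

Whether returning empty bytes, raising, or retrying is appropriate depends on the application; the point is that the read itself is no longer unbounded.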
[issue43991] asyncio lock does not get released after task is canceled
Jonathan Schweder added the comment: a.niederbuehl, tasks are free of context, meaning that a task does not know what was done inside it, and consequently it is impossible to know whether or not to release a lock. This is by design, and normally in these cases you need to be aware of the lock yourself, for example by checking whether the lock was released before cancelling the task. -- nosy: +jaswdr ___ Python tracker <https://bugs.python.org/issue43991> ___
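One way to "be aware of the lock" is to hold it through `async with`, which releases it even when the task is cancelled while the lock is held; a small sketch (the function names are illustrative):

```python
import asyncio

async def worker(lock: asyncio.Lock) -> None:
    # Hold the lock via the context manager: if the task is cancelled
    # during the sleep, __aexit__ still runs and releases the lock.
    async with lock:
        await asyncio.sleep(10)

async def main() -> bool:
    lock = asyncio.Lock()
    task = asyncio.create_task(worker(lock))
    await asyncio.sleep(0.05)        # let the worker acquire the lock
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return lock.locked()             # False: released on cancellation

print(asyncio.run(main()))
```

A bare `await lock.acquire()` followed by explicit `lock.release()` does not get this guarantee unless the release sits in a `finally` block.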
[issue41570] Add DearPyGui to faq/gui.rst
Jonathan Hoffstadt added the comment: I hoped someone else could complete it. Sent from my iPhone > On Apr 4, 2021, at 10:03 AM, Irit Katriel wrote: > > New submission from Irit Katriel : > > Jonathan, I see you closed the PR. Did you intend to close this issue as well? > > -- > nosy: +iritkatriel > status: open -> pending > > ___ > Python tracker > <https://bugs.python.org/issue41570> > ___ -- status: pending -> open ___ Python tracker <https://bugs.python.org/issue41570> ___
[issue38263] [Windows] multiprocessing: DupHandle.detach() race condition on DuplicateHandle(DUPLICATE_CLOSE_SOURCE)
Jesvi Jonathan added the comment:

  File "c:/Users/jesvi/Documents/GitHub/Jesvi-Bot-Telegram/scripts/main.py", line 144, in thread_test
    p.start()
  File "C:\Users\jesvi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\jesvi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\jesvi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\jesvi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\jesvi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle '_thread.lock' object

Traceback (most recent call last):
  File "", line 1, in
Traceback (most recent call last):
  File "", line 1, in
  File "C:\Users\jesvi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 107, in spawn_main
  File "C:\Users\jesvi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 107, in spawn_main
    new_handle = reduction.duplicate(pipe_handle,
  File "C:\Users\jesvi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\reduction.py", line 79, in duplicate
    new_handle = reduction.duplicate(pipe_handle,
  File "C:\Users\jesvi\AppData\Local\Programs\Python\Python38\lib\multiprocessing\reduction.py", line 79, in duplicate
    return _winapi.DuplicateHandle(
OSError: [WinError 6] The handle is invalid
    return _winapi.DuplicateHandle(
OSError: [WinError 6] The handle is invalid

-- nosy: +jesvi22j type: behavior -> compile error versions: -Python 3.10, Python 3.9 ___ Python tracker <https://bugs.python.org/issue38263> ___
[issue43461] Tottime column for cprofile output does not add up
New submission from Jonathan Frawley : I am using cprofile and PStats to try and figure out where bottlenecks are in a program. When I sum up all of the times in the "tottime" column, it only comes to 57% of the total runtime. Is this due to rounding of times or some other issue? -- messages: 388430 nosy: jonathanfrawley priority: normal severity: normal status: open title: Tottime column for cprofile output does not add up type: behavior versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue43461> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
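One likely part of the answer: tottime counts only time spent in a function's own frame, subcalls land in the callee's row (and in the caller's cumtime), and profiler overhead plus any time outside profiled code is attributed to nothing, so the column summing below wall-clock time is expected. A minimal sketch of reading the two columns (function names here are illustrative, not from the report):

```python
import cProfile
import io
import pstats

def inner():
    # heavy leaf function: its time shows up in its own tottime
    return sum(i * i for i in range(50_000))

def outer():
    # outer's tottime excludes the time spent inside inner();
    # that time appears in outer's cumtime instead
    return [inner() for _ in range(5)]

pr = cProfile.Profile()
pr.enable()
outer()
pr.disable()

buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("tottime").print_stats()
report = buf.getvalue()
print(report)
```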
[issue40897] Inheriting from class that defines __new__ causes inspect.signature to always return (*args, **kwargs) for constructor
Jonathan Slenders added the comment: The following patch to inspect.py solves the issue that inspect.signature() returns the wrong signature on classes that inherit from Generic. Not 100% sure though if this implementation is the cleanest way possible. I've been looking into attaching a __wrapped__ to Generic as well, without success. I'm not very familiar with the inspect code. To me, this fix is pretty important. ptpython, a Python REPL, has the ability to show the function signature of what the user is currently typing, and with codebases that have lots of generics, there's nothing really useful we can show.

$ diff inspect.old.py inspect.py -p
*** inspect.old.py	2021-02-17 11:35:50.787234264 +0100
--- inspect.py	2021-02-17 11:35:10.131407202 +0100
*************** import sys
*** 44,49 ****
--- 44,50 ----
  import tokenize
  import token
  import types
+ import typing
  import warnings
  import functools
  import builtins
*************** def _signature_get_user_defined_method(c
*** 1715,1720 ****
--- 1716,1725 ----
      except AttributeError:
          return
      else:
+         if meth in (typing.Generic.__new__, typing.Protocol.__new__):
+             # Exclude methods from the typing module.
+             return
+
          if not isinstance(meth, _NonUserDefinedCallables):
              # Once '__signature__' will be added to 'C'-level
              # callables, this check won't be necessary

For those interested, the following monkey-patch has the same effect:

def monkey_patch_typing() -> None:
    import inspect, typing

    def _signature_get_user_defined_method(cls, method_name):
        try:
            meth = getattr(cls, method_name)
        except AttributeError:
            return
        else:
            if meth in (typing.Generic.__new__, typing.Protocol.__new__):
                # Exclude methods from the typing module.
                return
            if not isinstance(meth, inspect._NonUserDefinedCallables):
                # Once '__signature__' will be added to 'C'-level
                # callables, this check won't be necessary
                return meth

    inspect._signature_get_user_defined_method = _signature_get_user_defined_method

monkey_patch_typing()

-- nosy: +jonathan.slenders ___ Python tracker <https://bugs.python.org/issue40897> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
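The wrong signature is easy to reproduce without any patch applied; a minimal sketch (the `Box` class and its parameters are illustrative, not from the report):

```python
import inspect
from typing import Generic, TypeVar

T = TypeVar("T")

class Box(Generic[T]):
    def __init__(self, item: T, count: int = 1) -> None:
        self.item = item
        self.count = count

# On affected versions this shows "(*args, **kwargs)" because
# Generic.__new__ shadows the user-defined __init__; on patched
# versions it shows "(item: T, count: int = 1)".
sig = inspect.signature(Box)
print(sig)
```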
[issue41632] Tkinter - Unexpected behavior after creating around 10000 widgets
Jonathan Lahav added the comment: Thank you for checking it so quickly, and answering nicely. I indeed forgot to mention that it happened to me on Windows. Sorry for that. The issue seems similar to the one you linked. I will try and take this to the TCL community since it impacts our product. Thank you for translating the code to TCL. If the python community has no interest in trying to push TCL to fix it, this issue can be closed. Thanks! -- ___ Python tracker <https://bugs.python.org/issue41632> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41632] Tkinter - Unexpected behavior after creating around 10000 widgets
New submission from Jonathan Lahav : Observation: After creating around 10000 widgets (verified with ttk.Label), no more widgets get created, and sometimes graphical artifacts appear outside the application window. No error message or exception is raised. Expected: Either the limit can be removed (having dynamically created 10000 widgets in data heavy applications is sometimes desired), or at least document and return runtime errors to prevent the weird behavior. Reproduction: This is the problematic part: for _ in range(10000): ttk.Label(root, text='problematic') A full minimal example code is attached, though a better effect can be seen when running the above two lines in the context of a more advanced Tkinter application. -- components: Tkinter files: ten_k.py messages: 375888 nosy: gpolo, j.lahav, serhiy.storchaka priority: normal severity: normal status: open title: Tkinter - Unexpected behavior after creating around 10000 widgets type: crash versions: Python 3.8 Added file: https://bugs.python.org/file49426/ten_k.py ___ Python tracker <https://bugs.python.org/issue41632> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41570] Add DearPyGui to faq/gui.rst
Change by Jonathan Hoffstadt : -- keywords: +patch pull_requests: +21026 stage: -> patch review pull_request: https://github.com/python/cpython/pull/21911 ___ Python tracker <https://bugs.python.org/issue41570> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41570] Add DearPyGui to faq/gui.rst
Change by Jonathan Hoffstadt : -- assignee: docs@python components: Documentation nosy: docs@python, jhoffstadt priority: normal severity: normal status: open title: Add DearPyGui to faq/gui.rst type: enhancement versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue41570> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40365] argparse: action "extend" with 1 parameter splits strings into characters
Jonathan Haigh added the comment: >> But I wonder, was this situation discussed in the original bug/issue? >Doesn't look like it: I was looking at the wrong PR link. This has more discussion: https://github.com/python/cpython/pull/13305. nargs is discussed but I'm not sure it was realized that the nargs=None and nargs="?" cases would act in the way seen here rather than acting like append. Having a default nargs of "+" was suggested but that suggestion was not addressed. > I suggest that the default nargs for extend should be "*" or "+" and an > exception should be raised if nargs is given as "?". I'm not convinced about that any more. Using append's behaviour is probably more reasonable for nargs=None and nargs="?". -- nosy: +Anthony Sottile, BTaskaya, berker.peksag ___ Python tracker <https://bugs.python.org/issue40365> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40365] argparse: action "extend" with 1 parameter splits strings into characters
Jonathan Haigh added the comment: The situation for type=int and unspecified nargs or nargs="?" is also surprising: Python 3.8.3 (default, May 21 2020, 12:19:36) [GCC 9.2.1 20191008] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import argparse >>> p = argparse.ArgumentParser() >>> p.add_argument("--c", action="extend", type=int) _ExtendAction(option_strings=['--c'], dest='c', nargs=None, const=None, default=None, type=<class 'int'>, choices=None, help=None, metavar=None) >>> p.parse_args("--c 1".split()) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jonathan/.pyenv/versions/3.8.3/lib/python3.8/argparse.py", line 1768, in parse_args args, argv = self.parse_known_args(args, namespace) File "/home/jonathan/.pyenv/versions/3.8.3/lib/python3.8/argparse.py", line 1800, in parse_known_args namespace, args = self._parse_known_args(args, namespace) File "/home/jonathan/.pyenv/versions/3.8.3/lib/python3.8/argparse.py", line 2006, in _parse_known_args start_index = consume_optional(start_index) File "/home/jonathan/.pyenv/versions/3.8.3/lib/python3.8/argparse.py", line 1946, in consume_optional take_action(action, args, option_string) File "/home/jonathan/.pyenv/versions/3.8.3/lib/python3.8/argparse.py", line 1874, in take_action action(self, namespace, argument_values, option_string) File "/home/jonathan/.pyenv/versions/3.8.3/lib/python3.8/argparse.py", line 1171, in __call__ items.extend(values) TypeError: 'int' object is not iterable >>> p = argparse.ArgumentParser() >>> p.add_argument("--c", action="extend", type=int, nargs="?") _ExtendAction(option_strings=['--c'], dest='c', nargs='?', const=None, default=None, type=<class 'int'>, choices=None, help=None, metavar=None) >>> p.parse_args("--c 1".split()) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jonathan/.pyenv/versions/3.8.3/lib/python3.8/argparse.py", line 1768, in parse_args args, argv = self.parse_known_args(args, namespace) File "/home/jonathan/.pyenv/versions/3.8.3/lib/python3.8/argparse.py", line 1800, in parse_known_args namespace, args = self._parse_known_args(args, namespace) File "/home/jonathan/.pyenv/versions/3.8.3/lib/python3.8/argparse.py", line 2006, in _parse_known_args start_index = consume_optional(start_index) File "/home/jonathan/.pyenv/versions/3.8.3/lib/python3.8/argparse.py", line 1946, in consume_optional take_action(action, args, option_string) File "/home/jonathan/.pyenv/versions/3.8.3/lib/python3.8/argparse.py", line 1874, in take_action action(self, namespace, argument_values, option_string) File "/home/jonathan/.pyenv/versions/3.8.3/lib/python3.8/argparse.py", line 1171, in __call__ items.extend(values) TypeError: 'int' object is not iterable >>> I suggest that the default nargs for extend should be "*" or "+" and an exception should be raised if nargs is given as "?". I don't see the current behaviour with unspecified nargs or nargs="?" being useful (and it certainly is surprising). In both cases, I think the least surprising behaviour would be for extend to act the same as append (or for an exception to be raised). > But I wonder, was this situation discussed in the original bug/issue? Doesn't look like it: https://bugs.python.org/issue23378 https://github.com/python/cpython/commit/aa32a7e1116f7aaaef9fec453db910e90ab7b101 -- nosy: +Jonathan Haigh ___ Python tracker <https://bugs.python.org/issue40365> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39243] CDLL __init__ no longer supports name being passed as None when the handle is not None
Change by Jonathan Hsu : -- nosy: +Jonathan Hsu ___ Python tracker <https://bugs.python.org/issue39243> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40438] Python 3.9 eval on list comprehension sometimes returns coroutines
Jonathan Crall added the comment: This can be closed, but for completeness, the test you ran didn't verify that the bug was fixed. This is because the hard coded compile flags I gave in my example seem to have changed in Python 3.9 (is this documented?). In python3.8 the compile flags I specified correspond to division, print_function, unicode_literals, and absolute_import. python3.8 -c "import __future__; print(__future__.print_function.compiler_flag | __future__.division.compiler_flag | __future__.unicode_literals.compiler_flag | __future__.absolute_import.compiler_flag)" Results in: 221184 In Python 3.9 the same code results in: 3538944 I can modify the MWE to accommodate these changes: ./python -c "import __future__; print(eval(compile('[i for i in range(3)]', mode='eval', filename='fo', flags=__future__.print_function.compiler_flag | __future__.division.compiler_flag | __future__.unicode_literals.compiler_flag | __future__.absolute_import.compiler_flag)))" Which does produce the correct output as expected. So, the issue can remain closed. I am curious what the bug in 3.9.0a5 was though if you have any speculations. -- ___ Python tracker <https://bugs.python.org/issue40438> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
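A version-independent way to write the MWE is to derive the flag mask from `__future__` at runtime instead of hard-coding 221184, so the same snippet works on both 3.8 and 3.9; a sketch:

```python
import __future__

# Build the same mask the original MWE hard-coded, but from the
# current interpreter's __future__ module, so it tracks any
# renumbering of the compiler flags between versions.
flags = 0
for name in ("print_function", "division", "unicode_literals", "absolute_import"):
    flags |= getattr(__future__, name).compiler_flag

code = compile('[i for i in range(3)]', filename='<mwe>', mode='eval', flags=flags)
result = eval(code)  # [0, 1, 2] on a fixed interpreter
```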
[issue40438] Python 3.9 eval on list comprehension sometimes returns coroutines
Jonathan Crall added the comment: Ah, sorry. I neglected all the important information. I tested this using: Python 3.9.0a5 (default, Apr 23 2020, 14:11:34) [GCC 8.3.0] Specifically, I ran in a docker container: DOCKER_IMAGE=circleci/python:3.9-rc docker pull $DOCKER_IMAGE docker run --rm -it $DOCKER_IMAGE bash And then in the bash shell in the docker image I ran: python -c "print(eval(compile('[i for i in range(3)]', mode='eval', filename='foo', flags=221184)))" -- ___ Python tracker <https://bugs.python.org/issue40438> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40438] Python 3.9 eval on list comprehension sometimes returns coroutines
New submission from Jonathan Crall : I first noticed this when testing xdoctest on Python 3.9, and then again when using IPython. I was finally able to generate a minimal working example in Python itself. The following code: python -c "print(eval(compile('[i for i in range(3)]', mode='eval', filename='foo', flags=221184)))" produces [0, 1, 2] in Python <= 3.8, but in 3.9 it produces: <coroutine object <listcomp> at 0x7fa336d40ec0> <string>:1: RuntimeWarning: coroutine '<listcomp>' was never awaited RuntimeWarning: Enable tracemalloc to get the object allocation traceback Is this an intended change? I can't find any notes in the CHANGELOG that seem to correspond to it. -- components: Interpreter Core messages: 367651 nosy: Jonathan Crall priority: normal severity: normal status: open title: Python 3.9 eval on list comprehension sometimes returns coroutines versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue40438> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40049] tarfile cannot extract from stdin
Jonathan Hsu added the comment: This is caused when tarfile tries to write a symlink that already exists. Any exception from os.symlink() is handled as if the platform doesn't support symlinks, so it scans the entire tar to try and find the linked files. When it resumes extraction, it needs to do a negative seek to pick up where it left off, which causes the exception. I've reproduced the error on both Windows 10 and Ubuntu running on WSL. Python 2.7 handled this situation by checking if the symlink exists, but it looks like the entire tarfile library was replaced with an alternate implementation that doesn't check if the symlink exists. I've created a pull request to address this issue. -- nosy: +Jonathan Hsu ___ Python tracker <https://bugs.python.org/issue40049> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
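The 2.7-era guard described above amounts to removing any pre-existing link before re-creating it. A standalone sketch of that idea on POSIX (the helper name is mine, not tarfile's):

```python
import os
import tempfile

def replace_symlink(src, dst):
    # Remove any existing file or link at dst first, so os.symlink()
    # cannot fail with FileExistsError and trigger the slow
    # "platform lacks symlinks" fallback described above.
    if os.path.lexists(dst):
        os.unlink(dst)
    os.symlink(src, dst)

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "target.txt")
    link = os.path.join(d, "link")
    open(target, "w").close()
    replace_symlink(target, link)
    replace_symlink(target, link)  # second call must not raise
    ok = os.path.islink(link)
```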
[issue40025] enum: _generate_next_value_ is not called if its definition occurs after calls to auto()
Jonathan Hsu added the comment: Thank you for the explanation. -- ___ Python tracker <https://bugs.python.org/issue40025> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue20899] Nested namespace imports do not work inside zip archives
Jonathan Hsu added the comment: It appears this issue has been fixed, as I am unable to reproduce it on Windows 10/Python 3.7: Python 3.7.7 (tags/v3.7.7:d7c567b08f, Mar 10 2020, 10:41:24) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.path += ['project1', 'project2.zip', 'project3', 'project4.zip'] >>> import parent.child.hello1 Hello 1 >>> import parent.child.hello2 Hello 2 >>> import parent.child.hello3 Hello 3 >>> import parent.child.hello4 Hello 4 >>> import boo boo! >>> import parent.boo boo! -- nosy: +Jonathan Hsu ___ Python tracker <https://bugs.python.org/issue20899> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36759] astimezone() fails on Windows for pre-epoch times
Jonathan Hsu added the comment: This exception is raised because astimezone() ends up calling time.localtime() to determine the appropriate time zone. If the datetime object has a pre-epoch value, it passes a negative timestamp to time.localtime(). On Windows, time.localtime() does not accept values less than 0 (more discussion in issue #35796). This is the minimal code required to reproduce the error: from datetime import datetime datetime(1969, 1, 1).astimezone() Without the ability to ascertain the time zone with localtime(), I'm not sure if the time zone can be accurately determined. It's not clear what the proper behavior is. Maybe raise a ValueError? PEP 615 proposes to include the IANA tz database, which would negate the need for a system call. Should we wait for this PEP before fixing this issue? Thoughts? -- ___ Python tracker <https://bugs.python.org/issue36759> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
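In the meantime, callers can sidestep the localtime() call by passing an explicit tzinfo to astimezone(); a workaround sketch, assuming the target offset is already known (the -5h offset is just an example):

```python
from datetime import datetime, timedelta, timezone

# astimezone(tz) on an aware datetime with an explicit target tz never
# consults time.localtime(), so pre-epoch values convert even on Windows.
est = timezone(timedelta(hours=-5), "EST")
dt = datetime(1969, 1, 1, tzinfo=timezone.utc).astimezone(est)
# 1969-01-01 00:00 UTC is 1968-12-31 19:00 at UTC-5
```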
[issue36759] astimezone() fails on Windows for pre-epoch times
Jonathan Hsu added the comment: I'd like to take on this issue if no one else is working on it. -- ___ Python tracker <https://bugs.python.org/issue36759> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36759] astimezone() fails on Windows for pre-epoch times
Change by Jonathan Hsu : -- nosy: +Jonathan Hsu ___ Python tracker <https://bugs.python.org/issue36759> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38948] os.path.ismount() returns False for current working drive
Change by Jonathan Hsu : -- nosy: +Jonathan Hsu ___ Python tracker <https://bugs.python.org/issue38948> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40025] enum: _generate_next_value_ is not called if its definition occurs after calls to auto()
Jonathan Hsu added the comment: While the current behavior may be initially unexpected, it does match the way that python normally behaves when defining class variables. For example, the following class will throw an exception because the function number_two() is called before it is defined:

class Numbers:
    one = 1
    two = number_two()  # NameError: name 'number_two' is not defined

    def number_two():
        return 2

However, this version is fine:

class Numbers:
    one = 1

    def number_two():
        return 2

    two = number_two()

-- nosy: +Jonathan Hsu ___ Python tracker <https://bugs.python.org/issue40025> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
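Applied to the enum in the issue title, the same top-down rule gives the practical fix: put the _generate_next_value_ override above the first auto() member, so every member sees it. A sketch (enum name and values are illustrative):

```python
from enum import Enum, auto

class NameValued(Enum):
    # Defined before any member, so each auto() below resolves
    # through this override instead of the inherited default.
    def _generate_next_value_(name, start, count, last_values):
        return name

    A = auto()
    B = auto()

# NameValued.A.value == "A", NameValued.B.value == "B"
```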
[issue40028] Math module method to find prime factors for non-negative int n
Jonathan Fine added the comment: A pre-computed table of primes might be better. Of course, how long should the table be? There's an infinity of primes. Consider >>> 2**32 4294967296 This number is approximately 4 * (10**9). According to https://en.wikipedia.org/wiki/Prime_number_theorem, there are 50,847,534 primes less than 10**9. So, very roughly, there are 200,000,000 primes less than 2**32. Thus, storing a list of all these prime numbers as 32 bit unsigned integers would occupy about >>> 200_000_000 / (1024**3) * 4 0.7450580596923828 or in other words 3/4 gigabytes on disk. A binary search into this list, using as starting point the expected location provided by the prime number theorem, might very well require on average less than two block reads into the file that holds the prime number list on disk. And if someone needs to find primes of this size, they've probably got a spare gigabyte or two. I'm naturally inclined to this approach because my mathematical research involves spending gigahertz days computing tables. I then use the tables to examine hypotheses. See https://arxiv.org/abs/1011.4269. This involves subsets of the vertices of the 5-dimensional cube. There are of course 2**32 such subsets. -- nosy: +jfine2358 ___ Python tracker <https://bugs.python.org/issue40028> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
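The "very roughly 200,000,000" figure follows from the first-order prime number theorem estimate, pi(x) ~ x / ln x; a quick sanity check of both the count and the resulting disk footprint:

```python
import math

def pnt_estimate(x):
    # First-order prime number theorem approximation of pi(x).
    return x / math.log(x)

est = pnt_estimate(2**32)       # about 1.9e8, i.e. "roughly 200,000,000"
disk_gib = est * 4 / 1024**3    # 32-bit entries: roughly 0.7 GiB on disk
```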
[issue31727] FTP_TLS errors when use certain subcommands
Jonathan Castro added the comment: I had the same problem, but when trying to upload files using FTPS with explicit TLS 1.2 from an AWS Lambda function. Each time I tried to upload a file, the Lambda timed out on the storbinary call, and the function ended with an error on every execution. The only solution I found to work around this issue was: 1. Use a thread to run the storbinary process. 2. Sleep for a period depending on the file size. 3. After the sleep, call ftplib.dir. -- nosy: +unixjon ___ Python tracker <https://bugs.python.org/issue31727> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22107] tempfile module misinterprets access denied error on Windows
Change by Jonathan Mills : -- nosy: +Jonathan Mills ___ Python tracker <https://bugs.python.org/issue22107> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22107] tempfile module misinterprets access denied error on Windows
Change by Jonathan Mills : -- versions: +Python 3.8 ___ Python tracker <https://bugs.python.org/issue22107> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39194] asyncio.open_connection returns a closed client when server fails to authenticate client certificate
New submission from Jonathan Martin : I'm trying to use SSL to validate clients connecting to an asyncio socket server by specifying CERT_REQUIRED and giving a `cafile` containing the client certificate to allow. Client and server code are attached. Certificates are generated with: openssl req -x509 -newkey rsa:2048 -keyout client.key -nodes -out client.cert -sha256 -days 100 openssl req -x509 -newkey rsa:2048 -keyout server.key -nodes -out server.cert -sha256 -days 100 Observed behavior with python 3.7.5 and openSSL 1.1.1d -- When the client tries to connect without specifying a certificate, the call to asyncio.open_connection succeeds, but the received socket is closed right away, or to be more exact an EOF is received. Observed behavior with python 3.7.4 and openSSL 1.0.2t -- When the client tries to connect without specifying a certificate, the call to asyncio.open_connection fails. Expected behavior - I'm not sure which behavior is to be considered the expected one, although I would prefer the connection to fail directly instead of returning a dead client. Wouldn't it be better to have only one behavior? Note that when disabling TLSv1.3, the connection does fail to open: ctx.maximum_version = ssl.TLSVersion.TLSv1_2 This can be reproduced on all latest releases of 3.6, 3.7, and 3.8 (which all have openssl 1.1.1d in my case) -- assignee: christian.heimes components: SSL, asyncio files: example_code.py messages: 359200 nosy: Jonathan Martin, asvetlov, christian.heimes, yselivanov priority: normal severity: normal status: open title: asyncio.open_connection returns a closed client when server fails to authenticate client certificate type: behavior versions: Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48824/example_code.py ___ Python tracker <https://bugs.python.org/issue39194> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
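For context, the server-side setup the report describes boils down to something like the following sketch (the load calls are commented out because the certificate paths belong to the reporter's machine):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED          # demand a client certificate
# ctx.load_cert_chain("server.cert", "server.key")
# ctx.load_verify_locations(cafile="client.cert")

# The workaround noted in the report: capping at TLS 1.2 makes the
# verification failure surface in open_connection() itself rather
# than as an immediate EOF on a "successfully" opened connection
# (with TLS 1.3, client certificates are checked after the handshake).
ctx.maximum_version = ssl.TLSVersion.TLSv1_2
```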
[issue39010] ProactorEventLoop raises unhandled ConnectionResetError
Jonathan Slenders added the comment: Even simpler, the following code will crash after so many iterations: ``` import asyncio loop = asyncio.get_event_loop() while True: loop.call_soon_threadsafe(loop.stop) loop.run_forever() ``` Adding a little sleep of 0.01s after `run_forever()` prevents the issue. So, to me it looks like the cancellation of the `_OverlappedFuture` that wraps around the `_recv()` call from the self-pipe did not complete before we start `_recv()` again in the next `run_forever()` call. No idea if that makes sense... -- ___ Python tracker <https://bugs.python.org/issue39010> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39010] ProactorEventLoop raises unhandled ConnectionResetError
Jonathan Slenders added the comment: It looks like the following code will reproduce the issue: ``` import asyncio import threading loop = asyncio.get_event_loop() while True: def test(): loop.call_soon_threadsafe(loop.stop) threading.Thread(target=test).start() loop.run_forever() ``` Leave it running on Windows, in Python 3.8 for a few seconds, then it starts spawning `ConnectionResetError`s. -- ___ Python tracker <https://bugs.python.org/issue39010> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39010] ProactorEventLoop raises unhandled ConnectionResetError
Jonathan Slenders added the comment: Thanks Victor for the reply. It looks like it's the self-socket in the BaseProactorEventLoop that gets closed. It's exactly this FD for which the exception is raised. We don't close the event loop anywhere. I also don't see `_close_self_pipe` being called anywhere. Debug logs don't provide any help. I'm looking into a reproducer. -- ___ Python tracker <https://bugs.python.org/issue39010> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39010] ProactorEventLoop raises unhandled ConnectionResetError
Jonathan Slenders added the comment: Suppressing `ConnectionResetError` in `BaseProactorEventLoop._loop_self_reading`, like we do with `CancelledError` seems to fix it. Although I'm not sure what it causing the error, and whether we need to handle it somehow. -- ___ Python tracker <https://bugs.python.org/issue39010> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39010] ProactorEventLoop raises unhandled ConnectionResetError
New submission from Jonathan Slenders : We have a snippet of code that runs perfectly fine using the `SelectorEventLoop`, but crashes *sometimes* using the `ProactorEventLoop`. The traceback is the following. The exception cannot be caught within the asyncio application itself (e.g., it is not attached to any Future or propagated in a coroutine). It probably propagates in `run_until_complete()`. File "C:\Python38\lib\asyncio\proactor_events.py", line 768, in _loop_self_reading f.result() # may raise File "C:\Python38\lib\asyncio\windows_events.py", line 808, in _poll value = callback(transferred, key, ov) File "C:\Python38\lib\asyncio\windows_events.py", line 457, in finish_recv raise ConnectionResetError(*exc.args) I can see that in `IocpProactor._poll`, `OSError` is caught and attached to the future, but not `ConnectionResetError`. I would expect that `ConnectionResetError` too will be attached to the future. In order to reproduce, run the following snippet on Python 3.8: from prompt_toolkit import prompt # pip install prompt_toolkit while 1: prompt('>') Hold down the enter key, and it'll trigger quickly. See also: https://github.com/prompt-toolkit/python-prompt-toolkit/issues/1023 -- components: asyncio messages: 358140 nosy: Jonathan Slenders, asvetlov, yselivanov priority: normal severity: normal status: open title: ProactorEventLoop raises unhandled ConnectionResetError versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue39010> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38791] readline history file is hard-coded
Jonathan Conder added the comment: I agree. Did a cursory search before posting but missed it somehow -- resolution: -> duplicate stage: patch review -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue38791> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38791] readline history file is hard-coded
Change by Jonathan Conder : -- keywords: +patch pull_requests: +16658 stage: -> patch review pull_request: https://github.com/python/cpython/pull/17149 ___ Python tracker <https://bugs.python.org/issue38791> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38791] readline history file is hard-coded
New submission from Jonathan Conder : Other tools such as bash and less allow their history file to be customised with an environment variable. Will add a patch for this in a bit. This could also be customised using PYTHONSTARTUP, but then the user has to duplicate a bunch of code which is already part of the site module. -- components: Library (Lib) messages: 356573 nosy: jconder priority: normal severity: normal status: open title: readline history file is hard-coded versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue38791> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
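The site-module duplication mentioned above looks roughly like this when done in a PYTHONSTARTUP file (PYTHON_HISTFILE is a hypothetical environment variable chosen for illustration; Python does not currently honor it):

```python
import atexit
import os
import readline

# Mirror bash's HISTFILE convention: let an environment variable
# override the hard-coded default location.
histfile = os.environ.get("PYTHON_HISTFILE",
                          os.path.expanduser("~/.python_history"))
try:
    readline.read_history_file(histfile)
except FileNotFoundError:
    pass  # first run: no history yet
atexit.register(readline.write_history_file, histfile)
```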
[issue38771] Bug in example of collections.ChainMap
Change by Jonathan Scholbach : -- keywords: +patch pull_requests: +16614 stage: -> patch review pull_request: https://github.com/python/cpython/pull/17108 ___ Python tracker <https://bugs.python.org/issue38771> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38771] Bug in example of collections.ChainMap
New submission from Jonathan Scholbach : Below "Examples and Recipes", the Documentation of collections.ChainMap has an "Example of letting user specified command-line arguments take precedence over environment variables which in turn take precedence over default values:" In there, a ChainMap is created which represents the default values, updated by the command-line arguments, if they have been set. The relevant code snippet is the following: parser = argparse.ArgumentParser() parser.add_argument('-u', '--user') parser.add_argument('-c', '--color') namespace = parser.parse_args() command_line_args = {k:v for k, v in vars(namespace).items() if v} If the user passes an empty string as the value for any of the command-line arguments, that argument would not appear in `command_line_args` (because the boolean value of the empty string is `False`). However, passing the empty string as a value for a command-line argument reflects the intent of overwriting the default value (setting it to the empty string). With the current example code, this would erroneously not be reflected in `command_line_args`. This is caused by checking for the boolean of `v` instead of checking for `v` not being `None`. So, this should be handled correctly by writing command_line_args = {k: v for k, v in vars(namespace).items() if v is not None} -- assignee: docs@python components: Documentation messages: 356398 nosy: docs@python, jonathan.scholbach priority: normal severity: normal status: open title: Bug in example of collections.ChainMap versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue38771> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
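The difference between the two filters can be seen directly; here a hypothetical namespace dict stands in for the parse_args() result:

```python
# Stand-in for vars(parser.parse_args(["--user", ""])): the user
# deliberately passed an empty string for --user, left --color unset.
namespace_vars = {"user": "", "color": None}

truthy = {k: v for k, v in namespace_vars.items() if v}
not_none = {k: v for k, v in namespace_vars.items() if v is not None}

# truthy drops the deliberate empty-string override; not_none keeps it,
# so only not_none gives the empty string precedence in the ChainMap.
```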
[issue38189] pip does not run in virtual environment in 3.8
New submission from Jonathan Gossage : Python 3.8 was installed from source on Ubuntu 19.04 desktop and a virtual environment was created with python3.8 -m venv venvrh. When attempting to use pip to install a package, the following error was encountered:

(venvrh) jgossage@jgossage-XPS-8700:~/Projects/Maintenance$ pip install sphinx
WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
Collecting sphinx
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/sphinx/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/sphinx/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/sphinx/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/sphinx/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/sphinx/
Could not fetch URL https://pypi.org/simple/sphinx/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/sphinx/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
ERROR: Could not find a version that satisfies the requirement sphinx (from versions: none)
ERROR: No matching distribution found for sphinx
WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping

-- assignee: christian.heimes components: SSL messages: 352564 nosy: Jonathan.Gossage, christian.heimes priority: normal severity: normal status: open title: pip does not run in virtual environment in 3.8 type: behavior versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue38189> ___
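A quick diagnostic for this failure mode (a sketch, not from the report): check whether the interpreter itself was built with the ssl module. When a source build cannot find the OpenSSL development headers (libssl-dev on Ubuntu), it silently skips _ssl, and every venv created from that interpreter inherits the problem.

```shell
# If this raises ModuleNotFoundError, the interpreter was built without
# SSL support, and pip in any venv made from it will show the TLS/SSL
# warnings above.
python3.8 -c "import ssl; print(ssl.OPENSSL_VERSION)"

# Typical remedy on Ubuntu: install the OpenSSL headers, then rebuild
# Python from the source tree (./configure && make && make install).
# sudo apt install libssl-dev
```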
[issue38123] Unable to find Python3.8.0b4 on Ubuntu 19.04 desktop
Jonathan Gossage added the comment: I now do not think that it is a Python problem. It only appears when Ubuntu 18.04 is upgraded to 19.04 by the upgrade process. The problem does not show up on a fresh install of Ubuntu 19.04 followed by a source install of Python 3.8.0b4; it only appears if the install is preceded by a software upgrade of Ubuntu. On Thu, Sep 12, 2019 at 5:12 AM Zachary Ware wrote: > > Zachary Ware added the comment: > > If calling /usr/local/bin/python3.8 directly works as expected, there's > nothing for us to do here so I'm going to go ahead and close the issue. > Please reopen if you can demonstrate a real bug in the installation code, > though! > > -- > nosy: +zach.ware > resolution: -> not a bug > stage: -> resolved > status: open -> closed > > ___ > Python tracker > <https://bugs.python.org/issue38123> > ___ > -- ___ Python tracker <https://bugs.python.org/issue38123> ___
[issue38123] Unable to find Python3.8.0b4 on Ubuntu 19.04 desktop
New submission from Jonathan Gossage : I installed Python 3.8.0b4 manually on Ubuntu 19.04 desktop. The installation appeared to run OK, but afterwards I was unable to find python3.8, even though it had been installed in /usr/local/bin and that directory was on the path. I got the result:

jgossage@jgossage-XPS-8700:~$ python3.8 --version
bash: /usr/bin/python3.8: No such file or directory

There was no sign of Python in /etc/alternatives, so I assume that Linux alternatives were not part of the problem. I had no problem finding other files such as pip3.8. -- components: Installation messages: 352009 nosy: Jonathan.Gossage priority: normal severity: normal status: open title: Unable to find Python3.8.0b4 on Ubuntu 19.04 desktop versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue38123> ___
[issue37563] Documentation - default for StreamHandler
Jonathan added the comment:

> What fallacy?

You appeared to be saying (to paraphrase) "no-one else has ever reported this, so it's never been a problem". That's a fallacy.

> I was responding to "does anyone else have opinions on this?"

I was asking if anyone else wanted to chime in with an opinion.

> There are numerous examples in the stdlib where None is passed in and some other value (e.g. 'utf-8' for an encoding) are used as a default

Then for clarity's purpose I'd suggest those be changed too, but that's another ticket.

-- ___ Python tracker <https://bugs.python.org/issue37563> ___
[issue37563] Documentation - default for StreamHandler
Jonathan added the comment:

> I'm not sure your tone is particularly constructive here.

Apologies, my bad.

> Which code are you looking at?

The documentation code: `class logging.StreamHandler(stream=None)`. Sorry, I don't know what you'd call that. I'm not referring to the code proper.

> As far as I can remember, you're the first person to bring this up since logging was added to Python in 2003.

This is a fallacy. Just because no-one else has reported it doesn't mean it hasn't caused a problem. I mean, I'm sure there are plenty of spelling errors/typos in the docs that no-one has reported for years; it doesn't mean they shouldn't be fixed when raised. It's also assuming you have seen and remember every single bug report related to this from the past 16 years which, nothing personal, seems incredibly unlikely given how poor humans are at remembering things in the first place.

> And are you presuming to speak for all Python users here?

I'm presuming to speak for end-users, yes, why not? I did ask for other input too, you'll note. After a few decades of practice I'm fairly decent at getting into the headspace of users (of which I am one in this case), and I know it's something many developers don't really do well. A common mistake we developers make is to assume that everyone knows what we know and thinks like us.

-- ___ Python tracker <https://bugs.python.org/issue37563> ___
[issue37563] Documentation - default for StreamHandler
Jonathan added the comment:

> The devil is in the detail. If stream=sys.stderr is specified, that takes effect at import time. If stream=None is specified and the implementation chooses to treat that as sys.stderr, that takes effect at the time of the call. The two are not equivalent.

But this isn't what the prose says at all. You're right, the prose clearly says that the default is sys.stderr; however the code doesn't show that, and many people won't read the prose (I don't a lot of the time), they'll only look at the code snippet because that's all they think they need. The code snippet claims that the default is None, which from a user perspective isn't true. Again I point out that the documentation is for users, not implementers. We users Do. Not. Care. about how wonderfully clever your implementation is, we care about how it actually works. Whatever Rube-Goldbergian implementation details there are behind the scenes are of no interest to us. Yet again: there's a standard for documenting defaults for keyword arguments, and I would ask that it please be used consistently to help us users. Fine, let's try this another way - does anyone else have opinions on this? What's the convention for documenting defaults?

-- ___ Python tracker <https://bugs.python.org/issue37563> ___
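The import-time vs. call-time distinction in the quoted comment can be sketched with plain functions (a toy illustration, not the actual logging implementation):

```python
import io
import sys

def make_eager(stream=sys.stderr):
    # The default is bound once, when the def statement runs
    # ("import time" for a module-level function).
    return stream

def make_lazy(stream=None):
    # None is resolved to sys.stderr when the function is called,
    # so later redirections of sys.stderr are picked up.
    return stream if stream is not None else sys.stderr

original = sys.stderr
sys.stderr = io.StringIO()  # redirect stderr after the defs have run

print(make_eager() is sys.stderr)  # False - still holds the old stream
print(make_lazy() is sys.stderr)   # True - follows the redirect

sys.stderr = original
```

This is why documenting the default literally as `stream=sys.stderr` would misstate the behaviour, even though `sys.stderr` is what an unredirected user observes.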
[issue37563] Documentation - default for StreamHandler
Change by Jonathan : -- status: -> open ___ Python tracker <https://bugs.python.org/issue37563> ___
[issue37563] Documentation - default for StreamHandler
New submission from Jonathan :

https://docs.python.org/2/library/logging.handlers.html
https://docs.python.org/3/library/logging.handlers.html

Both say:

"""class logging.StreamHandler(stream=None)
Returns a new instance of the StreamHandler class. If stream is specified, the instance will use it for logging output; otherwise, sys.stderr will be used."""

Surely that means, from a user perspective, that the default is actually `sys.stderr`, not `None`?

-- assignee: docs@python components: Documentation messages: 347677 nosy: docs@python, jonathan-lp priority: normal severity: normal status: open title: Documentation - default for StreamHandler versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue37563> ___
[issue37111] Logging - Inconsistent behaviour when handling unicode
Jonathan added the comment:

> Learning is not a waste of time. You're entitled to your opinion, but this is not a bug in logging.

We'll have to agree to disagree. I agree with and value learning a great deal. However, learning should happen on your own time, NOT when a program crashes randomly and tries taking you down the rabbit hole. I like learning, but not about unrelated things when I'm trying to do useful work. Fine, if you don't consider this a bug, consider it a feature request. "User would like Python logging of Unicode characters to be consistent" is not an unreasonable request.

-- status: closed -> open type: behavior -> enhancement ___ Python tracker <https://bugs.python.org/issue37111> ___
[issue37111] Logging - Inconsistent behaviour when handling unicode
Jonathan added the comment:

> I have no idea what you mean by this.

I don't see how I can be clearer. What are the reasons for NOT making logging to file unicode by default? Logging to screen is unicode by default. What are the reasons for not wanting consistency?

> A simple Internet search for "basicConfig encoding" yields for me as the second result this Stack Overflow question

Indeed, and it was from that question I got my solution, in fact. The problem was the 30-60 minutes I wasted before that trying to figure out why my program was crashing and why it was only crashing *sometimes*. I'd written the logging part of the program a year ago and not really touched it since, so the logging module being a possible culprit was not even in my mind when the program crashed.

> As my example illustrated, it's quite easy to log Unicode in log files.

Yes, things are easy when you know they're necessary. It's the process of discovery that's an unnecessary waste of people's time. That's why I raised this, and that's why I would consider this a bug in my own software. It's inconsistent, it invites problems, and it wastes people's time.

-- ___ Python tracker <https://bugs.python.org/issue37111> ___
[issue37111] Logging - Inconsistent behaviour when handling unicode
Jonathan added the comment:

> Did you look at the basicConfig documentation before raising this issue?

This seems like an attempt at victim blaming. But yes, I did. In fact, this is now the third time I've looked at that page - once before raising this issue, once before my previous reply, and now. I note that neither your example nor anything like it appears anywhere on that page. The word "encoding" doesn't appear anywhere on the page either. Sure, "stream" is on there, but then you need to know about streams and make the association with logging, which I apparently don't. You have to remember not everyone has your level of proficiency in the language. In fact, most Python users don't. Let's put this another way - is there a reason NOT to have Unicode logging as the default? Clearly Unicode was important enough for Guido et al. to decide to throw Python 2 under the bus. I've listed the advantages of changing it; what are the disadvantages?

-- ___ Python tracker <https://bugs.python.org/issue37111> ___
[issue37111] Logging - Inconsistent behaviour when handling unicode
Jonathan added the comment: Thank you for your comments, but this wasn't a question and I maintain this is a bug, or at least undesirable behaviour. I'd consider it a bug in my own software. Reasoning:

* It's an inconsistent default with logging to screen. This causes more complexity for users when their bug is intermittent.
* Despite your assertion, it's not documented anywhere in the logging docs (I did check before creating this bug while trying to figure out what was going on) - the word "utf" or "unicode" doesn't appear on the logging page, in either of the two tutorials, or on the logging.handlers page. There's something in the cookbook, but that's about BOMs.
* Many of the world's native characters won't log to ASCII. Per this page: https://docs.python.org/3/howto/unicode.html "UTF-8 is one of the most commonly used encodings, and Python often defaults to using it."

> People have been using logging, on Windows, without problems, for years, often using utf-8 to encode their log files.

I'm afraid this line of reasoning suffers from selection bias, cherry picking, confirmation bias, and probably some others too. Clearly people have had problems before, because it was from one of those folks I took the solution. Doing something as basic as logging unicode shouldn't require knowledge of "handlers" - that's failing "simple is better than complex".

-- ___ Python tracker <https://bugs.python.org/issue37111> ___
[issue37111] Logging - Inconsistent behaviour when handling unicode
Jonathan added the comment: It definitely claims to be "utf-8" in NotePad++. I've attached it if you want to double-check. (Windows 7) -- Added file: https://bugs.python.org/file48380/my_log.log ___ Python tracker <https://bugs.python.org/issue37111> ___
[issue37111] Logging - Inconsistent behaviour when handling unicode
New submission from Jonathan : Python is inconsistent in how it handles log messages that contain unicode characters: logging to screen works, but logging to file fails. This works:

```
>>> import logging
>>> logging.error('จุด1')
ERROR:root:จุด1
```

The following breaks:

```
>>> import logging
>>> logging.basicConfig(filename='c:\\my_log.log')
>>> logging.error('จุด1')
```

This raises a unicode error:

UnicodeEncodeError: 'charmap' codec can't encode characters in position 11-13: character maps to

Python 3.6.3. Given that the file created by the logger is utf-8, it's unclear why it doesn't work. I found a workaround by using a Handler, but surely the loggers should all work the same way, so that people don't get unpleasant surprises that are even more painful to debug when things only break in certain logging modes?

-- messages: 344053 nosy: jonathan-lp priority: normal severity: normal status: open title: Logging - Inconsistent behaviour when handling unicode versions: Python 3.6 ___ Python tracker <https://bugs.python.org/issue37111> ___
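The Handler workaround alluded to above can look like this on 3.6 (a sketch; note that basicConfig itself only gained an encoding parameter later, in Python 3.9):

```python
import logging

# basicConfig(filename=...) opens the file with the platform default
# encoding on Python 3.6, which is a legacy codepage on Windows.
# Attaching a FileHandler with an explicit encoding avoids the
# UnicodeEncodeError.
handler = logging.FileHandler('my_log.log', encoding='utf-8')
root = logging.getLogger()
root.addHandler(handler)
root.error('จุด1')  # written to my_log.log as UTF-8
```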
[issue36520] Email header folded incorrectly
New submission from Jonathan Horn : I encountered a problem with replacing the 'Subject' header of an email. After serializing it again, the utf8 encoding was wrong. It seems to occur when folding the internal header objects. Example:

>>> email.policy.default.fold_binary('Subject', email.policy.default.header_store_parse('Subject', 'Hello Wörld! Hello Wörld! Hello Wörld! Hello Wörld!Hello Wörld!')[1])

Expected output (or similar):

b'Subject: Hello =?utf-8?q?W=C3=B6rld!_Hello_W=C3=B6rld!_Hello_W=C3=B6rld!?=\n Hello =?utf-8?q?W=C3=B6rld!Hello_W=C3=B6rld!?=\n'

Actual output:

b'Subject: Hello =?utf-8?q?W=C3=B6rld!_Hello_W=C3=B6rld!_Hello_W=C3=B6rld!?=\n Hello =?utf-8?=?utf-8?q?q=3FW=3DC3=3DB6rld!Hello=3F=3D_W=C3=B6rld!?=\n'

I'm running Python 3.7.3 on Arch Linux using Linux 5.0. -- components: email messages: 339419 nosy: Jonathan Horn, barry, r.david.murray priority: normal severity: normal status: open title: Email header folded incorrectly type: behavior versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue36520> ___
[issue16385] evaluating literal dict with repeated keys gives no warnings/errors
Jonathan Fine added the comment: This was closed and tagged as resolved in 2012. The status has not been changed since then. Using dict(a=1, ...) provides a workaround, but only when the keys are valid as variable names. The general workaround is something like

helper([
    (1, 'a'),
    (2, 'b'),
    # etc
])

The helper is necessary:

>>> [(1, 2)] * 5
[(1, 2), (1, 2), (1, 2), (1, 2), (1, 2)]
>>> dict([(1, 2)] * 5)
{1: 2}

-- ___ Python tracker <https://bugs.python.org/issue16385> ___
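Such a helper might be sketched as follows (the name dict_no_dups is hypothetical, not from the thread):

```python
def dict_no_dups(pairs):
    """Build a dict from (key, value) pairs, raising on duplicate keys."""
    result = {}
    for key, value in pairs:
        if key in result:
            raise ValueError(f'duplicate key: {key!r}')
        result[key] = value
    return result

print(dict_no_dups([(1, 'a'), (2, 'b')]))  # {1: 'a', 2: 'b'}
try:
    dict_no_dups([(1, 2)] * 5)  # silently collapsed by plain dict()
except ValueError as e:
    print(e)  # duplicate key: 1
```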
[issue16385] evaluating literal dict with repeated keys gives no warnings/errors
Jonathan Fine added the comment: I mention this issue, and related pages, in [Python-ideas] dict literal allows duplicate keys https://mail.python.org/pipermail/python-ideas/2019-March/055717.html It arises from a discussion of PEP 584 -- Add + and - operators to the built-in dict class. Please send any follow-up to python-ideas (or this issue). -- nosy: +jfine2358 ___ Python tracker <https://bugs.python.org/issue16385> ___
[issue26910] dictionary literal should not allow duplicate keys
Jonathan Fine added the comment: I mention this issue, and related pages, in [Python-ideas] dict literal allows duplicate keys https://mail.python.org/pipermail/python-ideas/2019-March/055717.html It arises from a discussion of PEP 584 -- Add + and - operators to the built-in dict class. Please send any follow-up to python-ideas (or #16385). -- nosy: +jfine2358 ___ Python tracker <https://bugs.python.org/issue26910> ___
[issue36120] Regression - Concurrent Futures
Jonathan added the comment: The "ProcessPoolExecutor Example" on this page breaks for me: https://docs.python.org/3/library/concurrent.futures.html -- ___ Python tracker <https://bugs.python.org/issue36120> ___
[issue36120] Regression - Concurrent Futures
Jonathan added the comment: There's also this error too:

Traceback (most recent call last):
  File "c:\_libs\Python37\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "c:\_libs\Python37\lib\multiprocessing\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "c:\_libs\Python37\lib\concurrent\futures\process.py", line 226, in _process_worker
    call_item = call_queue.get(block=True)
  File "c:\_libs\Python37\lib\multiprocessing\queues.py", line 94, in get
    res = self._recv_bytes()
  File "c:\_libs\Python37\lib\multiprocessing\synchronize.py", line 98, in __exit__
    return self._semlock.__exit__(*args)
OSError: [WinError 6] The handle is invalid

-- ___ Python tracker <https://bugs.python.org/issue36120> ___
[issue36120] Regression - Concurrent Futures
New submission from Jonathan : I'm using concurrent.futures to run some work in parallel (futures.ProcessPoolExecutor) on Windows 7 x64. The code works fine in 3.6.3, and in 3.5.x before that. I've just upgraded to 3.7.2 and it's giving me these errors:

Process SpawnProcess-6:
Traceback (most recent call last):
  File "c:\_libs\Python37\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "c:\_libs\Python37\lib\multiprocessing\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "c:\_libs\Python37\lib\concurrent\futures\process.py", line 226, in _process_worker
    call_item = call_queue.get(block=True)
  File "c:\_libs\Python37\lib\multiprocessing\queues.py", line 93, in get
    with self._rlock:
  File "c:\_libs\Python37\lib\multiprocessing\synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
PermissionError: [WinError 5] Access is denied

If I switch back to the 3.6.3 venv it works fine again. -- messages: 336649 nosy: jonathan-lp priority: normal severity: normal status: open title: Regression - Concurrent Futures versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue36120> ___
[issue35857] Stacktrace shows lines from updated file on disk, not code actually running
Jonathan Fine added the comment: For information - all taken from the docs and Lib/*.py

https://docs.python.org/3.7/library/traceback.html

traceback -- Print or retrieve a stack traceback
Source code: Lib/traceback.py

===
This module provides a standard interface to extract, format and print stack traces of Python programs. It exactly mimics the behavior of the Python interpreter when it prints a stack trace. This is useful when you want to print stack traces under program control, such as in a “wrapper” around the interpreter.
===

https://github.com/python/cpython/blob/3.7/Lib/traceback.py#L344-L359

===
    for f, lineno in frame_gen:
        co = f.f_code
        filename = co.co_filename
        name = co.co_name
        fnames.add(filename)
        linecache.lazycache(filename, f.f_globals)
        # Must defer line lookups until we have called checkcache.
        if capture_locals:
            f_locals = f.f_locals
        else:
            f_locals = None
        result.append(FrameSummary(
            filename, lineno, name, lookup_line=False, locals=f_locals))
    for filename in fnames:
        linecache.checkcache(filename)
===

By the way, here fnames is a set.

https://docs.python.org/3.7/library/linecache.html#module-linecache

linecache -- Random access to text lines

===
The linecache module allows one to get any line from a Python source file, while attempting to optimize internally, using a cache, the common case where many lines are read from a single file. This is used by the traceback module to retrieve source lines for inclusion in the formatted traceback.
===

===
linecache.checkcache(filename=None)
Check the cache for validity. Use this function if files in the cache may have changed on disk, and you require the updated version. If filename is omitted, it will check all the entries in the cache.

linecache.lazycache(filename, module_globals)
Capture enough detail about a non-file-based module to permit getting its lines later via getline() even if module_globals is None in the later call. This avoids doing I/O until a line is actually needed, without having to carry the module globals around indefinitely.
===

-- ___ Python tracker <https://bugs.python.org/issue35857> ___
[issue35857] Stacktrace shows lines from updated file on disk, not code actually running
Jonathan Fine added the comment: The problem, as I understand it, is a mismatch between the code object being executed and the file on disk referred to by the code object. When a module is reloaded it is first recompiled, if the .py file is newer than the .pyc file. (I've tested this at a console.) Suppose wibble.py contains a function fn. Now do

import importlib
import wibble
fn = wibble.fn
# Modify and save wibble.py
importlib.reload(wibble)
fn()

It seems to me that
1) We have a mismatch between fn (in module __main__) and the file on disk.
2) Comparison will show that wibble.pyc is later than wibble.py.
3) There's no reliable way to discover that fn is not the current fn ...
4) ... other than comparing its bytecode with that of the current value of wibble.fn.
Regarding (4) there might be another method. But I can't think of one that's reliable. -- nosy: +jfine2358 ___ Python tracker <https://bugs.python.org/issue35857> ___
[issue35698] [statistics] Division by 2 in statistics.median
Jonathan Fine added the comment: I'm still thinking about this. I find Steve's closing of the issue premature, but I'm not going to reverse it. -- ___ Python tracker <https://bugs.python.org/issue35698> ___
[issue35698] Division by 2 in statistics.median
Jonathan Fine added the comment: It might be better in my sample code to write

isinstance(p, int)

instead of

type(p) == int

This would fix Rémi's example. (I wanted to avoid thinking about (False // True).) For median([1, 1]), I am not claiming that 1.0 is wrong and 1 is right. I'm not saying the module is broken, only that it can be improved. For median([1, 1]), I believe that 1 is a better answer, particularly for school students. In other words, making this change would improve Python. As a pure mathematician, to me 1.0 means a number that is close to 1, whereas 1 means a number that is exactly 1. -- ___ Python tracker <https://bugs.python.org/issue35698> ___
[issue35698] Division by 2 in statistics.median
Jonathan Fine added the comment: Here's the essence of a patch. Suppose the input is Python integers, and the output is a mathematical integer. In this case we can make the output a Python integer by using the helper function

>>> def wibble(p, q):
...     if type(p) == type(q) == int and p % q == 0:
...         return p // q
...     else:
...         return p / q
...
>>> wibble(4, 2)
2
>>> wibble(3, 2)
1.5

This will also work for average. -- ___ Python tracker <https://bugs.python.org/issue35698> ___
[issue35698] Division by 2 in statistics.median
Jonathan Fine added the comment: I read PEP 450 as saying that statistics.py can be used by "any secondary school student". This is not true for most Python libraries. In this context, the difference between a float and an int is important. Consider

statistics.median([2] * n)

As a secondary school student, knowing the definition of median, I might expect the value to be 2 for any n > 0. What else could it be? However, the present code gives 2 for n odd, and 2.0 for n even. I think that this issue is best approached by taking the point of view of a secondary school student. Or perhaps even a primary school student who knows fractions. (A teacher might use statistics.py to create learning materials.) By the way, 2 and 2.0 are not interchangeable. For example:

>>> [1] * 2.0
TypeError: can't multiply sequence by non-int of type 'float'

-- ___ Python tracker <https://bugs.python.org/issue35698> ___
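The odd/even difference described above is easy to check directly (a quick demonstration, not from the message; for even-length data, statistics.median averages the two middle elements with true division):

```python
import statistics

odd = statistics.median([2] * 3)   # middle element, returned as-is
even = statistics.median([2] * 4)  # (2 + 2) / 2, true division

print(odd, type(odd).__name__)    # 2 int
print(even, type(even).__name__)  # 2.0 float
```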