[issue44493] Missing terminated NUL in the length of sockaddr_un
Aaron Gallagher <_...@habnab.it> added the comment: sigh.. adding myself to nosy here too in the hope that this gets any traction -- nosy: +habnabit ___ Python tracker <https://bugs.python.org/issue44493> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue46166] Get "self" args or non-null co_varnames from frame object with C-API
Aaron Gokaslan added the comment: The frame object I am referring to was:

    PyFrameObject *frame = PyThreadState_GetFrame(PyThreadState_Get());

This frame cannot be used with PyObject_GetAttrString. Is there any way to get the PyObject* associated with a PyFrameObject*? It seems odd that some functionality is simply not accessible through the stable ABI via PyThreadState_GetFrame.

To elaborate, I was referring to the migration guide in the changelog:

f_code: removed, use PyFrame_GetCode() instead. Warning: the function returns a strong reference; you need to call Py_DECREF().
f_back: changed (see below), use PyFrame_GetBack().
f_builtins: removed, use PyObject_GetAttrString(frame, "f_builtins"). // this frame argument actually has to be a PyObject*; the old one was a PyFrameObject*. Dropping this in does not work.
f_globals: removed, use PyObject_GetAttrString(frame, "f_globals").
f_locals: removed, use PyObject_GetAttrString(frame, "f_locals").
f_lasti: removed, use PyObject_GetAttrString(frame, "f_lasti").

I tried importing sys._getframe(), but interestingly enough that gave an AttributeError. Running a full code snippet works: https://github.com/pybind/pybind11/blob/96b943be1d39958661047eadac506745ba92b2bc/include/pybind11/pybind11.h#L2429, but it is really slow and we would like to avoid having to rely on it. Not to mention that relying on a function whose name starts with an underscore seems like something that really should be avoided. -- ___ Python tracker <https://bugs.python.org/issue46166> ___
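For reference, the attributes under discussion correspond to plain attribute access on the Python side. A minimal sketch in pure Python, not the C API — here sys._getframe() plays the role of PyThreadState_GetFrame(), and ordinary attribute access is what PyObject_GetAttrString does in C:

```python
import sys

def show_frame_attrs():
    # sys._getframe() is the Python-level counterpart of
    # PyThreadState_GetFrame(); attribute access on the returned frame
    # is what PyObject_GetAttrString(frame, "...") performs in C.
    frame = sys._getframe()
    return {
        "f_lasti": frame.f_lasti,              # last bytecode offset (an int)
        "varnames": frame.f_code.co_varnames,  # local variable names of this function
    }

attrs = show_frame_attrs()
print(attrs["varnames"])  # ('frame',)
```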
[issue46166] Get "self" args or non-null co_varnames from frame object with C-API
Aaron Gokaslan added the comment: I saw the frame API changes in the latest Python 3.11 alpha 5 release notes. Do the notes mean the only officially supported way of accessing co_varnames is now through the Python interface and the inspect module, via PyObject_GetAttrString? Also, the documentation in the What's New is a bit unclear: PyObject_GetAttrString(frame, "f_locals") doesn't work for a PyFrameObject*, only a PyObject*, and it doesn't describe how to get the PyObject* version of the frame object. The same problem also comes up when trying to access the co_varnames field of the PyCodeObject*. -- ___ Python tracker <https://bugs.python.org/issue46166> ___
[issue46166] Get "self" args or non-null co_varnames from frame object with C-API
Aaron Gokaslan added the comment:

> `PyCodeObject_GetVariableName()` and `PyCodeObject_GetVariableKind()` work?

Some public getters such as these functions would be ideal.

> OOI, how do you cope with non-local self?

We only care about checking self to prevent an infinite recursion in our method dispatch code, so I am not sure a non-local self would be applicable in this case? Correct me if I am wrong. -- ___ Python tracker <https://bugs.python.org/issue46166> ___
[issue46166] Get "self" args or non-null co_varnames from frame object with C-API
Aaron Gokaslan added the comment: We didn't want to read colocalsplus directly because we were worried about the stability of that approach and about code complexity / readability. Also, I wasn't aware that colocalsplus would work, or whether it was lazily populated as well. The functions CPython uses to extract the args from colocalsplus do not seem to be public and would need to be reimplemented by PyBind11, right? That seems very brittle as we try to support future Python versions, and may break in the future. Having a somewhat stable C-API to query this information seems like it would be the best solution, but I am open to suggestions on how best to proceed. How would you all recommend PyBind11 proceed with supporting 3.11 if not via a C-API addition? The PyBind11 authors want to resolve this before the API becomes too locked down for 3.11. -- ___ Python tracker <https://bugs.python.org/issue46166> ___
[issue46223] asyncio cause infinite loop during debug
aaron added the comment:

> add the '@reprlib.recursive_repr' decorator to 'events.Handle.__repr__()'

Could you tell me which file I should change, and why? -- ___ Python tracker <https://bugs.python.org/issue46223> ___
[issue46223] asyncio cause infinite loop during debug
aaron added the comment: "When running code in debug mode" means we're debugging the code. We have used both VS Code and PyCharm, with the same result. -- ___ Python tracker <https://bugs.python.org/issue46223> ___
[issue46223] asyncio cause infinite loop during debug
New submission from aaron : When running code in debug mode, asyncio sometimes enters an infinite loop, which shows up as the following:

```
Current thread 0x7f1c15fc5180 (most recent call first):
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/events.py", line 58 in __repr__
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 139 in repr_instance
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 62 in repr1
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 52 in repr
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 40 in
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 40 in _format_args_and_kwargs
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 56 in _format_callback
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 47 in _format_callback
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 23 in _format_callback_source
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/base_futures.py", line 32 in format_cb
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/base_futures.py", line 37 in _format_callbacks
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/base_futures.py", line 76 in _future_repr_info
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 139 in repr_instance
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 62 in repr1
  File "/root/miniconda3/envs/omicron/lib/python3.9/reprlib.py", line 52 in repr
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 38 in
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 38 in _format_args_and_kwargs
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 56 in _format_callback
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/format_helpers.py", line 23 in _format_callback_source
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/events.py", line 51 in _repr_info
  File "/root/miniconda3/envs/omicron/lib/python3.9/asyncio/events.py", line 61 in __repr__
  [the same cycle of frames repeats from here until the dump was cut off]
  File "/root/minic
```
[issue46166] Get "self" args or non-null co_varnames from frame object with C-API
New submission from Aaron Gokaslan : Hello, I am a maintainer with the PyBind11 project. We have been following the 3.11 development branch and have noticed an issue we are encountering with changes to the C-API. In particular, we have an edge case in our overloading dispatch mechanism that we used to solve by inspecting the "self" argument in the co_varnames member of the Python frame object: (https://github.com/pybind/pybind11/blob/a224d0cca5f1752acfcdad8e37369e4cda42259e/include/pybind11/pybind11.h#L2380). However, in the new struct, the co_varnames object can now be null, and there doesn't appear to be any public API to populate it on the C-API side. Accessing it via the "inspect" module still works, but that requires us to run a Python code snippet in a potentially very hot code path: (https://github.com/pybind/pybind11/blob/a224d0cca5f1752acfcdad8e37369e4cda42259e/include/pybind11/pybind11.h#L2408). As such, we were hoping that either there is some new API change we have missed, or that there is some other modern (and hopefully somewhat stable) way to emulate the old behavior with the C-API. -- components: C API messages: 409100 nosy: Skylion007 priority: normal severity: normal status: open title: Get "self" args or non-null co_varnames from frame object with C-API type: enhancement versions: Python 3.11 ___ Python tracker <https://bugs.python.org/issue46166> ___
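For readers following along, the trick being described — checking whether the first argument of a frame's code object is named "self" and fetching its value — can be sketched in pure Python with the inspect module. This is only an illustrative sketch; pybind11 does the equivalent at the C level:

```python
import inspect

class Greeter:
    def hello(self):
        # co_varnames[0] names the first positional argument of the
        # currently running function; f_locals maps it to its value.
        frame = inspect.currentframe()
        first_arg = frame.f_code.co_varnames[0]
        return first_arg, frame.f_locals[first_arg]

g = Greeter()
name, value = g.hello()
print(name)        # self
print(value is g)  # True
```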
[issue14965] super() and property inheritance behavior
Aaron Gallagher <_...@habnab.it> added the comment: I will note, Raymond, that I’ve wanted this for years before discovering this bpo issue, and I found it because you linked it on Twitter. ;) On Wed, Dec 8, 2021 at 19:08 Raymond Hettinger wrote: > > Raymond Hettinger added the comment: > > Another thought: Given that this tracker issue has been open for a decade > without resolution, we have evidence that this isn't an important problem > in practice. > > Arguably, people have been better off being nudged in another direction > toward better design or having been forced to be explicit about what method > is called and when. > > -- > > ___ > Python tracker > <https://bugs.python.org/issue14965> > ___ > -- ___ Python tracker <https://bugs.python.org/issue14965> ___
[issue42109] Use hypothesis for testing the standard library, falling back to stubs
Change by Aaron Meurer : -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue42109> ___
[issue45473] Enum add "from_name" and "from_value" class methods
Aaron Koch added the comment: Are there any other names that you would contemplate besides `from_name` and `from_value`? My reading of your response is that you are fundamentally opposed to the addition of class methods, since they would limit the space of possible instance methods/members. Is that a fair reading? If it is not, would you be open to different method names? Do you agree with the fundamental issue that is identified: that the parentheses/square-brackets construction is difficult to read and makes implementation mistakes more likely? And that it would be good to have some way to make it more explicit, at both read and write time, whether the enum is being constructed from the name or the value? One alternative to the class methods I might propose is to use a keyword argument in the __init__ function:

    SomeEnum(name="foo")
    SomeEnum(value="bar")

This would also solve the stated problem, but I suspect that messing with the init function introduces more limitations on the class than the classmethod solution. -- ___ Python tracker <https://bugs.python.org/issue45473> ___
[issue45473] Enum add "from_name" and "from_value" class methods
New submission from Aaron Koch : Documentation: https://docs.python.org/3/library/enum.html#creating-an-enum

Current behavior:
SomeEnum[name] is used to construct an enum by name
SomeEnum(value) is used to construct an enum by value

Problem: As a user of enums, it is difficult to remember the mapping between parentheses/square brackets and construct-from-name/construct-from-value.

Suggestion: Add two class methods to Enum:

```
@classmethod
def from_name(cls, name):
    return cls[name]

@classmethod
def from_value(cls, value):
    return cls(value)
```

Benefits: This is an additive change only; it doesn't change any behavior of the Enum class, so there are no backwards-compatibility issues. Adding these aliases to the Enum class would allow readers and writers of enums to interact with them more fluently and with fewer trips to the documentation. Using these aliases would make it easier to write the code you intended and to spot bugs that might arise from the incorrect use of from_name or from_value. -- messages: 403936 nosy: aekoch priority: normal severity: normal status: open title: Enum add "from_name" and "from_value" class methods type: enhancement ___ Python tracker <https://bugs.python.org/issue45473> ___
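The proposal above can be tried today by defining the suggested methods on a subclass; a runnable sketch (Color and its members are made-up names for illustration):

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

    # The proposed aliases, defined here explicitly since Enum
    # itself does not provide them.
    @classmethod
    def from_name(cls, name):
        return cls[name]

    @classmethod
    def from_value(cls, value):
        return cls(value)

print(Color.from_name("RED"))  # Color.RED
print(Color.from_value(2))     # Color.GREEN
```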
[issue17792] Unhelpful UnboundLocalError due to del'ing of exception target
Aaron Smith added the comment: I encountered similar behavior unexpectedly when dealing with LEGB scoping of names. Take the following example, run under Python 3.9.2:

```
def doSomething():
    x = 10
    del x
    print(x)

x = 5
doSomething()
```

This produces an UnboundLocalError at print(x) even though "x" can still be found in the global scope. Indeed, if you add print(globals()) before the print(x) line, you can see "x" listed. By contrast, LEGB scope behavior works as expected in this example:

```
def doSomething():
    print(x)

x = 5
doSomething()
```

The former example yielding the UnboundLocalError when dealing with name scope feels like a bug that lines up with the original behavior described in this enhancement request, as I believe "x" is still a bound name in the global scope, but was explicitly deleted from the local scope. -- nosy: +aaronwsmith ___ Python tracker <https://bugs.python.org/issue17792> ___
[issue43306] Error in multiprocessing.Pool's initializer doesn't stop execution
Aaron added the comment: What should the behavior be if an exception is raised in a pool worker during bootstrapping / initialization-function execution? I think an exception should be raised in the process owning the Pool, and in the fix I'm tinkering with I currently just raise a RuntimeError. I can also see an argument for raising different exceptions (or having different behavior) for a bootstrapping error vs. the init function, but the implementation is more complicated. My current implementation simply creates a lock in _repopulate_pool_static, acquires it, and waits for the worker function to release it. By polling every 100ms I also detect whether the process exited before releasing the lock, in which case I raise a RuntimeError. I just started testing this implementation, but I'll provide it for anyone else who wants to test / comment. -- Added file: https://bugs.python.org/file50230/pool.py ___ Python tracker <https://bugs.python.org/issue43306> ___
[issue43306] Error in multiprocessing.Pool's initializer doesn't stop execution
Aaron added the comment: I ran into this bug answering this question on Stack Overflow: https://stackoverflow.com/questions/68890437/cannot-use-result-from-multiprocess-pool-directly I have minimized the code required to replicate the behavior, but it boils down to: when using "spawn" to create a multiprocessing pool, if an exception occurs during the bootstrapping phase of the new child, or during the initialization function with any start method, the worker is just cleaned up and another takes its place (which will also fail). This creates an infinite loop of creating child workers, workers exiting due to an exception, and re-populating the pool with new workers.

```
import multiprocessing
multiprocessing.set_start_method("spawn")  # bootstrapping only a problem with "spawn"

def task():
    print("task")

if __name__ == "__main__":
    with multiprocessing.Pool() as p:
        p.apply(task)
else:
    raise Exception("raise in child during bootstrapping phase")

# or

import multiprocessing
multiprocessing.set_start_method("fork")  # fork or spawn doesn't matter

def task():
    print("task")

def init():
    raise Exception("raise in child during initialization function")

if __name__ == "__main__":
    with multiprocessing.Pool(initializer=init) as p:
        p.apply(task)
```

If Pool._join_exited_workers could determine whether a worker exited before bootstrapping or before the initialization function completed, it would indicate a likely significant problem. I'm fine with exceptions in the worker target function not being re-raised in the parent; however, it seems the Pool should stop trying if it's failing to create new workers. -- nosy: +athompson6735 versions: +Python 3.9 -Python 3.8 ___ Python tracker <https://bugs.python.org/issue43306> ___
[issue44603] REPL: exit when the user types exit instead of asking them to explicitly type exit()
Aaron Meurer added the comment: When talking about making exit only work when typed at the interpreter, something to consider is the confusion it can cause when there is a mismatch between the interactive interpreter and noninteractive execution, especially for novice users. I've seen beginner users add exit() to the bottom of Python scripts, presumably because the interpreter "taught" them that you have to end with that. Now imagine someone trying to use exit as part of control flow:

```
if input("exit now? ") == "yes":
    exit
```

Unless exit is a full-blown keyword, that won't work. And the result is yet another instance in the language where users become confused if they run across it, because it isn't actually consistent with the language model. There are already pseudo-keywords in the language, in particular super(), but that's used to implement something which would be impossible otherwise. Exiting is not impossible otherwise; it just requires typing (). But that's how everything in the language works. I would argue it's a good thing to reinforce the idea that typing a variable by itself with no other surrounding syntax does nothing. This helps new users build the correct model of the language in their heads. -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue44603> ___
[issue16959] rlcompleter doesn't work if __main__ can't be imported
Aaron Meurer added the comment: A quick glance at the source shows that it still imports __main__ at the top-level. I have no idea how legitimate it is that the App Engine (used to?) makes it so that __main__ can't be imported. -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue16959> ___
[issue44455] compileall should exit nonzero for nonexistent directories
New submission from Aaron Meurer :

```
$ ./python.exe -m compileall doesntexist
Listing 'doesntexist'...
Can't list 'doesntexist'
$ echo $?
0
```

It's standard for a command line tool that processes files to exit nonzero when given a directory that doesn't exist. -- messages: 396087 nosy: asmeurer priority: normal severity: normal status: open title: compileall should exit nonzero for nonexistent directories ___ Python tracker <https://bugs.python.org/issue44455> ___
[issue40199] Invalid escape sequence DeprecationWarnings don't trigger by default
Aaron Gallagher <_...@habnab.it> added the comment: This is definitely not Windows-specific. On macOS:

```
$ python3.9
Python 3.9.4 (default, Apr 5 2021, 01:47:16)
[Clang 11.0.0 (clang-1100.0.33.17)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> '\s'
'\\s'
```

-- nosy: +habnabit ___ Python tracker <https://bugs.python.org/issue40199> ___
[issue14965] super() and property inheritance behavior
Aaron Gallagher <_...@habnab.it> added the comment: @daniel.urban would you kindly resubmit your patch as a PR to the cpython repo? I've learned out-of-band from someone else that putting patches on bpo is considered obsolete. you can use the PR I've submitted (https://github.com/python/cpython/pull/26194) and reset the author. I'd be happy to do it myself (giving you a branch that's all set up, so all you need to do is click the 'new PR' button) if you tell me what to set the author to. -- ___ Python tracker <https://bugs.python.org/issue14965> ___
[issue14965] super() and property inheritance behavior
Aaron Gallagher <_...@habnab.it> added the comment: @daniel.urban I'm attempting to move this patch along, but since the contributing process has changed in the years since your patch, you'll need to sign the CLA. Are you interested in picking this back up at all? I haven't been given any indication of how to proceed if I'm doing this on your behalf, but hopefully the core team will enlighten us. -- nosy: +habnabit ___ Python tracker <https://bugs.python.org/issue14965> ___
[issue14965] super() and property inheritance behavior
Change by Aaron Gallagher : -- nosy: +Aaron Gallagher nosy_count: 20.0 -> 21.0 pull_requests: +24811 stage: needs patch -> patch review pull_request: https://github.com/python/cpython/pull/26194 ___ Python tracker <https://bugs.python.org/issue14965> ___
[issue43420] Optimize rational arithmetics
Aaron Meurer added the comment: I'm surprised to hear that the "typical use-case" of Fraction is fractions converted from floats. Do you have evidence in the wild to support that? I would expect any application that uses fractions "generically" to run into the same sorts of problems SymPy does. The issue is that the sum or product of two unrelated fractions has a denominator that is ~ the product of the denominators of each term. So denominators tend to grow large, unless there is some structure in the terms that results in lots of cancellation. That's why real-world numerics typically don't use exact arithmetic, but there are legitimate use-cases for it (computer algebra being one). This actually applies even if the denominators are powers of 2. That's why arbitrary-precision floating-point numbers like Decimal or mpmath.mpf limit the precision, or effectively the power of 2 in the denominator. By the way, the "algorithm" here really isn't that complicated. I didn't even realize it had a name. The idea is that for a/b * c/d, if a/b and c/d are already in lowest terms, then the only cancellation that can happen is from a/d or from c/b. So instead of computing gcd(a*c, b*d), we only compute gcd(a, d) and gcd(c, b) and cancel them off the corresponding terms. It turns out to be faster to take two gcds of smaller numbers than one gcd of big ones. The algorithm for addition is a bit more complicated, at least to see that it is correct, but is still not that bad (the paper linked in the OP explains it clearly in one paragraph). It's far less complicated than, for example, Lehmer's gcd algorithm (which is implemented in math.gcd). -- ___ Python tracker <https://bugs.python.org/issue43420> ___
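The multiplication case described above fits in a few lines; a sketch (the helper name is made up, and it assumes positive inputs already in lowest terms):

```python
from fractions import Fraction
from math import gcd

def mul(a, b, c, d):
    # Multiply a/b * c/d where both are in lowest terms: the only
    # possible cancellation is between a and d, and between c and b,
    # so two small gcds replace one gcd of the full products.
    g1 = gcd(a, d)
    g2 = gcd(c, b)
    return (a // g1) * (c // g2), (b // g2) * (d // g1)

n, den = mul(6, 35, 10, 9)  # (6/35) * (10/9)
print((n, den))                                               # (4, 21)
print(Fraction(n, den) == Fraction(6, 35) * Fraction(10, 9))  # True
```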
[issue43420] Optimize rational arithmetics
Change by Aaron Meurer : -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue43420> ___
[issue39820] Bracketed paste mode for REPL: don't execute pasted command before ENTER is pressed explicitly
Aaron Meurer added the comment: To reiterate some points I made in the closed issues https://bugs.python.org/issue42819 and https://bugs.python.org/issue32019. A simple "fix" would be to emulate the non-bracketed paste buffering. That is, accept the input using bracketed paste, but split it line by line and send that to the REPL. That would achieve some of the benefits of bracketed paste (faster pasting) without having to change how the REPL works. For actually allowing multiline input in the REPL, one issue I see is that the so-called "single" compile mode is fundamentally designed around single-line evaluation. To support proper multiline evaluation, it would need to break from this model (which in my opinion is over-engineered). In one of my personal projects, I use a function along the lines of:

```
import ast

def eval_exec(code, g=None, l=None, *, filename="", noresult=None):
    if g is None:
        g = globals()
    if l is None:
        l = g
    p = ast.parse(code)
    expr = None
    res = noresult
    if p.body and isinstance(p.body[-1], ast.Expr):
        expr = p.body.pop()
    code = compile(p, filename, 'exec')
    exec(code, g, l)
    if expr:
        code = compile(ast.Expression(expr.value), filename, 'eval')
        res = eval(code, g, l)
    return res
```

This function automatically execs the code, but if the last part of it is an expression, it returns it (note that this is much more useful than simply printing it). Otherwise it returns a noresult marker (None by default). I think this sort of functionality in general would be useful in the standard library (much more useful than compile('single')), but even ignoring whether it should be a public function, this is the sort of thing that is needed for "proper" multiline execution in a REPL. Terry mentioned that IDLE supports multiline already. But I tried pasting

```
a = 1
a
```

into IDLE (Python 3.9), and I get the same "SyntaxError: multiple statements found while compiling a single statement" error, suggesting it still has the same fundamental limitation.
Also, if it wasn't clear, I should note that this is independent of pasting. You can already write

```
def func():
    return 1
func()
```

manually in the interpreter or IDLE and it will give a syntax error. -- ___ Python tracker <https://bugs.python.org/issue39820> ___
[issue42819] readline 8.1 enables the bracketed paste mode by default
Aaron Meurer added the comment: Instead of enabling it by default, why not just keep it but emulate the old behavior by splitting and buffering the input lines? That way you still get some of the benefits of bracketed paste, i.e., faster pasting, but without the hard work of fixing the REPL to actually support native multiline editing + execing multiline statements (the broken "simple" design). -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue42819> ___
[issue21821] The function cygwinccompiler.is_cygwingcc leads to FileNotFoundError under Windows 7
Aaron Meurer added the comment: Is find_executable() going to be extracted from distutils to somewhere else? It's one of those functions that is useful outside of packaging, and indeed, I've seen it imported in quite a few codebases that aren't related to packaging. If so, the patch I mentioned could still be relevant for it (if it hasn't been fixed already). -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue21821> ___
[issue30384] traceback.TracebackException.format shouldn't format_exc_only() when __traceback__ is None
Aaron Meurer added the comment: Neither of those things preclude the possibility of the traceback module doing a better job of printing tracebacks for exceptions where __traceback__ = None. -- ___ Python tracker <https://bugs.python.org/issue30384> ___
[issue30384] traceback.TracebackException.format shouldn't format_exc_only() when __traceback__ is None
Aaron Meurer added the comment: I don't think it's helpful to make such a literalistic interpretation. Just because the variable is called "traceback" doesn't mean it should apply only to the things that are *technically* a traceback (and I don't agree anyway that the line containing the exception isn't part of the "traceback").

> I guess you're saying that the __context__ exception of the TypeError in your
> example has an empty traceback, which means it was never raised

Does it mean that? Again, __traceback__ isn't documented anywhere, so I don't know what it being None really means. All I know is that it apparently disables the printing of tracebacks in the traceback module, but fails to omit the exception line itself, leading to an unreadable traceback in my example. -- ___ Python tracker <https://bugs.python.org/issue30384> ___
[issue30384] traceback.TracebackException.format shouldn't format_exc_only() when __traceback__ is None
Aaron Meurer added the comment: I think I found another way to achieve what I was trying to do, which is why I never pursued this. But I still think it's a bug. __traceback__ = None isn't documented anywhere that I could find, so I was only able to deduce how it should work from reading the source code. If it is documented somewhere let me know. I admit my initial report is a bit unclear. If you play with the test.py you can see what is going on:

    import traceback

    try:
        raise ValueError
    except Exception as e:
        e.__traceback__ = None
        try:
            raise TypeError
        except:
            traceback.print_exc()

produces this output:

    ValueError

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "test.py", line 8, in <module>
        raise TypeError
    TypeError

My goal is to completely hide the caught exception in the traceback printed from the traceback module. It seems odd that it hides everything except for the actual ValueError. -- status: pending -> open ___ Python tracker <https://bugs.python.org/issue30384> ___
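[Editor's note: for the stated goal of completely hiding the caught exception, one supported mechanism — a sketch, not necessarily what the reporter ended up using — is `raise ... from None`, which sets __suppress_context__ so the context exception is omitted from the printed traceback:]

```python
import traceback

try:
    try:
        raise ValueError("hidden")
    except ValueError:
        # 'from None' suppresses the implicit exception context, so the
        # ValueError never appears in the formatted traceback at all.
        raise TypeError("shown") from None
except TypeError:
    text = traceback.format_exc()

print(text)
```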
[issue30384] traceback.TracebackException.format shouldn't format_exc_only() when __traceback__ is None
Aaron Meurer added the comment:

> It's not entirely clear to me what you are trying to do (what is the output
> you are hoping to get?) but this looks more like a question than a bug
> report, so I am closing this issue. If this is still relevant, I'd suggest
> you ask on the python users list or StackOverflow - you are more likely to
> receive a prompt response there.

If you don't understand an issue, the correct response isn't to close it because you don't understand it. If an issue is unclear, you should ask for clarification, not insult the person who opened it. What you described *is* the bug report. If you read even the title you would see that the report is that setting __traceback__ to None doesn't affect the printing of the exception. -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue30384> ___
[issue35212] Expressions with format specifiers in f-strings give wrong code position in AST
Aaron Meurer added the comment: The same thing occurs with specifiers like {a!r}. -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue35212> ___
[issue41506] Inclusion or documentation of extended with syntax in 3.9
New submission from Aaron Meurer : This discussion started at https://github.com/python/cpython/pull/19503 (actually on Twitter https://twitter.com/asmeurer/status/1289304407696261120), but Guido asked me to move it to bpo. Alongside the implementation of Python 3.9's new PEG parser, a new syntax feature has been added, which is the ability to use parentheses in with statements, like

    with (open('a') as f1, open('b') as f2):
        ...

This is an error in lower versions of Python (or an error about tuple not having __enter__ if the "as" parts are omitted). This new feature is not documented in the "What's New in Python 3.9" document https://docs.python.org/3.9/whatsnew/3.9.html. It also apparently goes against PEP 617 https://www.python.org/dev/peps/pep-0617/, which says (in bold), "no new Python Grammar addition will be added that requires the PEG parser". Note that this feature does indeed rely on the PEG parser, and it stops working if you use python -X oldparser or PYTHONOLDPARSER=1. I think this feature should either

1. be removed from 3.9 and held until 3.10, or
2. be documented properly, including in the document for the "with" statement and the "What's New" document. Also the PEP should be updated if this option is chosen.

Others have stated opinions about this on the issue or on Twitter, but I'll let them repeat them here rather than trying to summarize. -- messages: 375029 nosy: asmeurer priority: normal pull_requests: 20921 severity: normal status: open title: Inclusion or documentation of extended with syntax in 3.9 versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue41506> ___
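[Editor's note: the syntax in question can be demonstrated without creating files by using contextlib.nullcontext; this runs on Python 3.10+, where parenthesized context managers became officially supported, and on 3.9 only under the PEG parser:]

```python
from contextlib import nullcontext

# Parenthesized context managers: a SyntaxError on the old parser,
# accepted by the PEG parser in 3.9, and official syntax in 3.10+.
with (nullcontext(1) as a, nullcontext(2) as b):
    print(a, b)
```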
[issue41502] Option for Colored Logging in http.server Module
New submission from Aaron Lichtman : It would be useful if the http.server module had an option for colored logging to help users visually parse the HTTP traffic logs. Something along the lines of

    $ python3 -m http.server 80 --color

is what I'm thinking. -- components: Library (Lib) messages: 374997 nosy: alichtman priority: normal severity: normal status: open title: Option for Colored Logging in http.server Module type: enhancement versions: Python 3.10 ___ Python tracker <https://bugs.python.org/issue41502> ___
[issue32958] socket module calls with long host names can fail with idna codec error
Aaron Black added the comment: joseph.hackman: I don't think that the 63 character limit on a label is the problem specifically, merely its application. The crux of my issue was that credentials passed with the url in a basic-auth fashion (as some services require) count against the label length. For example, this would trigger the error:

    h = "https://ablack:very_long_api_key_0123456789012345678901234567890123456789012345678901234567890...@www.example.com"

since the first label would be treated as:

    "ablack:very_long_api_key_0123456789012345678901234567890123456789012345678901234567890123@www"

My specific issue goes away if any text up to and including an "@" in the first label section is not included in the label validation. I don't know off hand if that information is supposed to be included per the label in the DNS spec though. -- ___ Python tracker <https://bugs.python.org/issue32958> ___
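[Editor's note: the 63-character label limit itself is easy to observe directly with the idna codec — a small illustration of my own; the bug under discussion is that the userinfo ("user:key@") portion of the URL is being counted into the first label before validation:]

```python
# The idna codec enforces DNS's 63-character limit per label:
ok = ("a" * 63 + ".example.com").encode("idna")  # 63 chars: accepted

try:
    ("a" * 64 + ".example.com").encode("idna")   # 64 chars: rejected
    too_long_rejected = False
except UnicodeError:
    too_long_rejected = True

print(too_long_rejected)
```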
[issue39820] Bracketed paste mode for REPL
Aaron Meurer added the comment: Related issue https://bugs.python.org/issue32019 -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue39820> ___
[issue39854] f-strings with format specifiers have wrong col_offset
New submission from Aaron Meurer : This is tested in CPython master. The issue also occurs in older versions of Python.

    >>> ast.dump(ast.parse('f"{x}"'))
    "Module(body=[Expr(value=JoinedStr(values=[FormattedValue(value=Name(id='x', ctx=Load()), conversion=-1, format_spec=None)]))], type_ignores=[])"
    >>> ast.dump(ast.parse('f"{x!r}"'))
    "Module(body=[Expr(value=JoinedStr(values=[FormattedValue(value=Name(id='x', ctx=Load()), conversion=114, format_spec=None)]))], type_ignores=[])"
    >>> ast.parse('f"{x}"').body[0].value.values[0].value.col_offset
    3
    >>> ast.parse('f"{x!r}"').body[0].value.values[0].value.col_offset
    1

The col_offset for the variable x should be 3 in both instances. -- messages: 363375 nosy: asmeurer priority: normal severity: normal status: open title: f-strings with format specifiers have wrong col_offset versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue39854> ___
[issue36144] Dictionary addition. (PEP 584)
Aaron Hall added the comment: Another obvious way to do it, but I'm +1 on it. A small side point however - PEP 584 reads:

> To create a new dict containing the merged items of two (or more) dicts, one
> can currently write:
>
>     {**d1, **d2}
>
> but this is neither obvious nor easily discoverable. It is only guaranteed to
> work if the keys are all strings. If the keys are not strings, it currently
> works in CPython, but it may not work with other implementations, or future
> versions of CPython[2].
> ...
> [2] Non-string keys: https://bugs.python.org/issue35105 and
> https://mail.python.org/pipermail/python-dev/2018-October/155435.html

The references cited do not back this assertion up. Perhaps the intent is to reference the "cool/weird hack" dict(d1, **d2) (see https://mail.python.org/pipermail/python-dev/2010-April/099485.html and https://mail.python.org/pipermail/python-dev/2010-April/099459.html), which allowed any hashable keys in Python 2 but only strings in Python 3. If I see {**d1, **d2}, my expectations are that this is the new generalized unpacking, and I currently expect any keys to be allowed; the PEP should be updated to accurately reflect this to prevent future misunderstandings. -- nosy: +Aaron Hall ___ Python tracker <https://bugs.python.org/issue36144> ___
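[Editor's note: a quick check of the distinction drawn in this message, in current CPython — my own illustration, not text from the PEP:]

```python
# {**d1, **d2} in a dict display accepts any hashable keys:
d1 = {1: "int key", (2, 3): "tuple key"}
d2 = {1: "overridden", "s": "str key"}
merged = {**d1, **d2}
print(merged)

# ...whereas the older dict(d1, **d2) hack really is limited to string
# keys in Python 3, because ** in a *call* requires string keywords:
try:
    dict(d1, **d2)
    call_hack_raised = False
except TypeError:
    call_hack_raised = True
```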
[issue38719] Surprising and possibly incorrect passing of InitVar to __post_init__ method of data classes
New submission from Aaron Ecay : I have discovered that InitVars are passed in a surprising way to the __post_init__ method of python dataclasses. The following program illustrates the problem:

    from dataclasses import InitVar, dataclass

    @dataclass
    class Foo:
        bar: InitVar[str]
        quux: InitVar[str]

        def __post_init__(self, quux: str, bar: str) -> None:
            print(f"bar is {bar}; quux is {quux}")

    Foo(bar="a", quux="b")

The output (on python 3.7.3 and 3.8.0a3) is (incorrectly):

    bar is b; quux is a

This behavior seems like a bug to me, do you agree? I have not looked into the reason why it behaves this way, but I suspect that the InitVar args are passed positionally, rather than as key words, to __post_init__. This requires the order of arguments in the definition of __post_init__ to be identical to the order in which they are specified in the class. I would expect the arguments to be passed as keywords instead, which would remove the ordering dependency. If there is agreement that the current behavior is undesirable, I can look into creating a patch to change it. -- components: Library (Lib) messages: 356125 nosy: Aaron Ecay priority: normal severity: normal status: open title: Surprising and possibly incorrect passing of InitVar to __post_init__ method of data classes versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue38719> ___
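[Editor's note: the reporter's suspicion matches documented behavior — InitVars are passed to __post_init__ positionally, in field-declaration order. A sketch of the workaround, declaring the parameters in the same order as the fields:]

```python
from dataclasses import InitVar, dataclass, field

@dataclass
class Foo:
    bar: InitVar[str]
    quux: InitVar[str]
    combined: str = field(init=False)

    # InitVars arrive positionally in declaration order, so the
    # parameter names must be listed in the same order as the fields:
    def __post_init__(self, bar: str, quux: str) -> None:
        self.combined = f"bar={bar} quux={quux}"

print(Foo(bar="a", quux="b").combined)
```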
[issue32912] Raise non-silent warning for invalid escape sequences
Aaron Meurer added the comment: Are there issues tracking the things I mentioned, which should IMO happen before this becomes a hard error (making the warnings reproduce even if the file has already been compiled, and making the warning message point to the correct line in multiline strings)? And is it too late to potentially get some of those things in 3.8? -- ___ Python tracker <https://bugs.python.org/issue32912> ___
[issue37433] syntax error in multiline f-string produces ~40k spaces output
Aaron Meurer added the comment: This seems related. It's also possible I'm misunderstanding what is supposed to happen here. If you create test.py with just the 2 lines:

    """
    a

and run python test.py from CPython master, you get

    $ ./python.exe test.py
      File "/Users/aaronmeurer/Documents/cpython/test.py", line 4
        a
         ^
    SyntaxError: EOF while scanning triple-quoted string literal

Notice that it reports line 4 even though the file only has 2 lines. The offset in the syntax error is 6 columns (line numbers and column offsets in SyntaxErrors count from 1):

    >>> try:
    ...     compile('"""\na', '', 'exec')
    ... except SyntaxError as e:
    ...     print(repr(e))
    ...
    SyntaxError('EOF while scanning triple-quoted string literal', ('', 2, 6, '"""\na\n'))

-- ___ Python tracker <https://bugs.python.org/issue37433> ___
[issue32912] Raise non-silent warning for invalid escape sequences
Aaron Meurer added the comment: Well paradoxically, the bugs that this prevents are the ones it doesn't warn about. If someone writes '\tan(x)' thinking it is a string representing a LaTeX formula for the tangent of x, they won't realize that they actually created a string with a tab plus "an(x)". So actually I would argue that the end goal *is* to make people aware of which escape characters exist, or at the very least, always make strings raw if there's even the remotest chance they will contain a backslash character.

Is it the best way to go about this? I don't know. The whole thing sort of makes me think raw strings should have been the default, but it's obviously too late to change that.

I personally don't feel strongly about the warnings being enabled by default or not. My big gripe is that if you actually want the warnings they are difficult to get in a reproducible way. I'm actually surprised they are so annoying for you. Once a py file is compiled into a pyc file the warnings completely disappear, even if you want them!

The fact that you can't use a real escape sequence in a raw string is annoying but not the end of the world given that it's trivial to concatenate strings. -- ___ Python tracker <https://bugs.python.org/issue32912> ___
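[Editor's note: to make the '\tan(x)' pitfall and the concatenation workaround concrete — a small illustration of my own:]

```python
s = '\tan(x)'    # '\t' is a recognized escape: TAB + 'an(x)', 6 chars
r = r'\tan(x)'   # raw string: backslash + 'tan(x)', 7 chars
print(repr(s), len(s))
print(repr(r), len(r))

# A real escape in an otherwise-raw string, via concatenation:
formula = r'\tan(x)' + '\n'
```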
[issue32912] Raise non-silent warning for invalid escape sequences
Aaron Meurer added the comment: Raymond, are you in agreement that these warnings should at some point eventually become syntax errors? -- ___ Python tracker <https://bugs.python.org/issue32912> ___
[issue37433] syntax error in f-string in multiline string produces ~40k spaces of output
Aaron Meurer added the comment: This looks like the same issue I mentioned here https://bugs.python.org/msg344764 -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue37433> ___
[issue32912] Raise non-silent warning for invalid escape sequences
Aaron Meurer added the comment: I agree. Please someone else do that. I don't know what already has issues and I unfortunately don't have time right now to help out with any of this. I simply mentioned all these things as arguments why Python should not (yet) make these warnings errors, which is the point of this issue.

Also, for the pyparsing example, you would have gotten lucky because it also contained \w, which is not a valid escape. If it didn't, you wouldn't be warned. So clearly this will help things, but it will also be good to have linting tools that, for example, warn about any escape sequences inside docstrings. -- ___ Python tracker <https://bugs.python.org/issue32912> ___
[issue35586] Open pyexpat compilation, Make shows error(missing separator)
Aaron Hurst added the comment: I believe this bug can be closed now that the following have landed:

New changeset 408a2ef1aceff1f4270c44552fa39ef93d9283e3 by Benjamin Peterson (aaronpaulhurst) in branch 'master': closes bpo-35184: Fix XML_POOR_ENTROPY option that breaks makesetup parsing of pyexpat line in Setup. (GH-13064) https://github.com/python/cpython/commit/408a2ef1aceff1f4270c44552fa39ef93d9283e3

New changeset 5b94b857f590db80aab69c31f88dd2a4978f8329 by Miss Islington (bot) in branch '3.8': closes bpo-35184: Fix XML_POOR_ENTROPY option that breaks makesetup parsing of pyexpat line in Setup. (GH-13064) https://github.com/python/cpython/commit/5b94b857f590db80aab69c31f88dd2a4978f8329

New changeset 30fd7a476bbd6bb8096c1349698463fa8a3bca18 by Miss Islington (bot) in branch '3.7': closes bpo-35184: Fix XML_POOR_ENTROPY option that breaks makesetup parsing of pyexpat line in Setup. (GH-13064) https://github.com/python/cpython/commit/30fd7a476bbd6bb8096c1349698463fa8a3bca18

-- ___ Python tracker <https://bugs.python.org/issue35586> ___
[issue32912] Raise non-silent warning for invalid escape sequences
Aaron Meurer added the comment: I agree with Raymond that third party libraries are not ready for this. My biggest issue is that the way Python warns about this makes it very difficult for library authors to fix this. Most won't even notice. The problem is the warnings are only shown once, when the file is compiled. So if you run the code in any way that causes Python to compile it, a further run of 'python -Wd' will show nothing. I don't know if it's reasonable, but it would be nice if Python recompiled when given -Wd, or somehow saved the warning so it could be shown if that flag is given later.

As an anecdote, for SymPy's CI, we went through five (if I am counting correctly) iterations of trying to test this. Each of the first four were subtly incorrect, until we finally managed to find the correct one (for reference, 'python -We:invalid -m compileall -f -q module/'). So most library authors who will attempt to add tests against this will get it wrong. Simply adding -Wd as you would expect is wrong. If the code is already compiled (which it probably is, e.g., if you ran setup.py), it won't show the warnings. At the very least the "correct" way to test this should be documented. Things would probably be improved if the warnings were always shown, as at least then devs will see the error once (although most will probably be confused when the warning doesn't repeat).

Another problem is the information in the warnings. It seems the line number of the string is now shown, which is an improvement (https://bugs.python.org/issue28128). It would be nice if it showed the actual line and column number in the file of the invalid escape. This is especially annoying when an escape appears in a docstring. It just shows """ as the offending line. We have a lot of LaTeX in our docstrings in SymPy, so we had quite a few of these to fix. SymPy doesn't have invalid escapes anymore because I was proactive about it, but from what I've seen, most library authors haven't been.
By the way, this looks like a bug (python 3.8b1):

    $ cat test.py
    """
    a \p
    """
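[Editor's note: the pyc-caching problem described above can be sidestepped by compiling source text directly with compile(), which re-raises the warning every time; a minimal sketch of my own, using the warnings module to emulate what -We does on the command line:]

```python
import warnings

src = 's = "\\p"'  # contains the invalid escape sequence \p

with warnings.catch_warnings():
    warnings.simplefilter("error")  # promote warnings, like python -We
    try:
        compile(src, "<demo>", "exec")
        raised = False
    except (SyntaxError, DeprecationWarning, SyntaxWarning):
        # recent CPython surfaces the promoted warning as a SyntaxError
        raised = True

print(raised)
```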
[issue36927] traceback docstrings should explicitly state return values instead of referring to other functions
New submission from Aaron Hall : I've written three (or more) answers on Stack Overflow about how to use the functions in the traceback module, and I code Python all day long. Embarrassing confession: I just recommended the wrong traceback function in email to fix the incorrect usage of another of these functions, even after pulling up the docs. I corrected myself before anyone else could correct me, but I find these docstrings incredibly frustrating and problematic. May I please give them a little more verbiage about their return values? e.g.:

    def format_tb(tb, limit=None):
        """A shorthand for 'format_list(extract_tb(tb, limit))'."""
        return extract_tb(tb, limit=limit).format()

should be:

    def format_tb(tb, limit=None):
        """A shorthand for 'format_list(extract_tb(tb, limit))',
        which returns a list of strings ready for printing.
        """
        return extract_tb(tb, limit=limit).format()

In fact, perhaps the "shorthand" part is an implementation detail that may not even be correct (it doesn't immediately seem to be) and should be removed. -- assignee: docs@python components: Documentation messages: 342588 nosy: Aaron Hall, docs@python priority: normal severity: normal status: open title: traceback docstrings should explicitly state return values instead of referring to other functions versions: Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue36927> ___
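[Editor's note: the return value the proposed docstring describes can be checked directly — a small illustration of my own:]

```python
import sys
import traceback

try:
    1 / 0
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    lines = traceback.format_tb(tb)

# format_tb returns a list of strings ready for printing; note that
# each entry may itself contain internal newlines.
print("".join(lines))
```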
[issue34648] Confirm the types of parameters of traceback.format_list and traceback.StackSummary.from_list post-3.5
Change by Aaron Hall : -- nosy: +Aaron Hall ___ Python tracker <https://bugs.python.org/issue34648> ___
[issue35184] Makefile is not correctly generated when compiling pyextat with DXML_POOR_ENTROPY=1
Aaron Hurst added the comment: Hi Ned, Thanks for testing this. I also observe that macOS compiles "without error"... but it's still broken... and silently. This is because the pyexpat line isn't being turned into the expected set of source compilation rules, but it is instead being dumped into the variable definition section. Why is it being interpreted by makesetup as a variable definition? With the equals character, it matches this pattern:

    # Lines can also have the form
    #
    # <name> = <value>
    #
    # which defines a Make variable definition inserted into Makefile.in

But it is intended to match this pattern:

    # Lines have the following structure:
    #
    # <module> ... [<objectfile> ...] [<cpparg> ...] [<library> ...]

For reference, here is the corresponding rule in makesetup:

    *=*) DEFS="$line$NL$DEFS"; continue;;

I fully support tweaking this pattern to better differentiate when "=" means a variable definition and when "=" is part of a compilation flag, but given that pyexpat is the only such case, my one-line fix makes things consistent again. For now. -- ___ Python tracker <https://bugs.python.org/issue35184> ___
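[Editor's note: a sketch, in Python for illustration and not makesetup's actual shell code, of the stricter classification being suggested — treat '=' as introducing a variable definition only when no whitespace precedes it:]

```python
def classify(line: str) -> str:
    """Classify a Setup line the way a stricter makesetup might."""
    before, eq, _after = line.partition("=")
    # NAME=VALUE with no whitespace before '=' looks like a variable;
    # an '=' inside a compile flag always has whitespace before it.
    if eq and not any(ch.isspace() for ch in before):
        return "variable definition"
    return "module rule"

print(classify("PYTHONPATH=$(COREPYTHONPATH)"))
print(classify("pyexpat pyexpat.c -DHAVE_EXPAT_CONFIG_H -DXML_POOR_ENTROPY=1"))
```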
[issue35184] Makefile is not correctly generated when compiling pyextat with DXML_POOR_ENTROPY=1
Aaron Hurst added the comment: Hi Ned, From a fresh checkout of master on Ubuntu 18.04, I uncomment the pyexpat line in Modules/Setup and run:

    cpython$ ./configure
    ...
    cpython$ make
    Makefile:273: *** missing separator.  Stop.

Here is the offending section of the resulting Makefile:

    269 # === Definitions added by makesetup ===
    270
    271 LOCALMODLIBS=
    272 BASEMODLIBS=
    273 pyexpat expat/xmlparse.c expat/xmlrole.c expat/xmltok.c pyexpat.c -I$(srcdir)/Modules/expat -DHAVE_EXPAT_CONFIG_H -DXML_$
    274 PYTHONPATH=$(COREPYTHONPATH)
    275 COREPYTHONPATH=$(DESTPATH)$(SITEPATH)$(TESTPATH)

-- ___ Python tracker <https://bugs.python.org/issue35184> ___
[issue35184] Makefile is not correctly generated when compiling pyextat with DXML_POOR_ENTROPY=1
Aaron Hurst added the comment: Sorry for my misunderstanding of the process, and thanks for explaining. I resubmitted the PR against the master branch. -- ___ Python tracker <https://bugs.python.org/issue35184> ___
[issue35184] Makefile is not correctly generated when compiling pyextat with DXML_POOR_ENTROPY=1
Change by Aaron Hurst : -- pull_requests: +12981 ___ Python tracker <https://bugs.python.org/issue35184> ___
[issue35586] Open pyexpat compilation, Make shows error(missing separator)
Aaron Hurst added the comment: This is the same issue as https://bugs.python.org/issue35184. I can reproduce this issue by uncommenting the pyexpat line in Setup.dist and compiling. The issue is with -DXML_POOR_ENTROPY=1. The equals character causes the line to be incorrectly interpreted as a macro definition by makesetup. This results in an invalid Makefile output. I've submitted a PR, but a quick work-around is to remove the "=1". It is not necessary. -- keywords: +patch nosy: +ahurst pull_requests: +12830 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue35586> ___
[issue35184] Makefile is not correctly generated when compiling pyextat with DXML_POOR_ENTROPY=1
Aaron Hurst added the comment: I can reproduce this issue by uncommenting the pyexpat line in Setup.dist and compiling. The issue is with -DXML_POOR_ENTROPY=1. The equals character causes the line to be incorrectly interpreted as a macro definition by makesetup. This results in an invalid Makefile output. I've submitted a PR, but a quick work-around is to remove the "=1". It is not necessary. -- nosy: +ahurst ___ Python tracker <https://bugs.python.org/issue35184> ___
[issue35184] Makefile is not correctly generated when compiling pyextat with DXML_POOR_ENTROPY=1
Change by Aaron Hurst : -- keywords: +patch pull_requests: +12827 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue35184> ___
[issue36551] Optimize list comprehensions with preallocate size and protect against overflow
Change by Aaron Hall : -- nosy: +Aaron Hall ___ Python tracker <https://bugs.python.org/issue36551> ___
[issue35625] Comprehension doc doesn't mention buggy class scope behavior
Change by Aaron Hall : -- nosy: +Aaron Hall ___ Python tracker <https://bugs.python.org/issue35625> ___
[issue31753] Unnecessary closure in ast.literal_eval
Aaron Hall added the comment: No need to keep this open, I agree with the core developers this shouldn't be changed. -- status: open -> closed ___ Python tracker <https://bugs.python.org/issue31753> ___
[issue26103] Contradiction in definition of "data descriptor" between (dotted lookup behavior/datamodel documentation) and (inspect lib/descriptor how-to)
Change by Aaron Hall : -- stage: patch review -> resolved status: open -> closed ___ Python tracker <https://bugs.python.org/issue26103> ___
[issue36370] "Fatal Python error: Cannot recover from stack overflow" from SymPy tests
New submission from Aaron Meurer : I am getting a "Fatal Python error: Cannot recover from stack overflow." running the SymPy tests on a branch of mine where the tests fail. I have reproduced this in Python 3.6.7, as well as CPython master (fc96e5474a7bda1c5dec66420e4467fc9f7ca968). Here are the repro steps:

1. Check out my git branch https://github.com/asmeurer/sympy/tree/python-crash
2. Install or point PYTHONPATH to mpmath
3. Run python and type

       from sympy import test
       test('sets', subprocess=False)

The tests will run (with failures) until they reach a point where Python crashes with:

    Fatal Python error: Cannot recover from stack overflow.

    Current thread 0x7fffa8e623c0 (most recent call first):
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/core/relational.py", line 385 in __new__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1594 in _contains
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 286 in contains
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1257 in
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/core/logic.py", line 139 in fuzzy_and
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1257 in _handle_finite_sets
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1900 in simplify_intersection
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1200 in __new__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 109 in intersect
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 309 in is_subset
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1348 in reduce
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1338 in __new__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 551 in __sub__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1269 in _handle_finite_sets
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1900 in simplify_intersection
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1200 in __new__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 109 in intersect
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 309 in is_subset
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1348 in reduce
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1338 in __new__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 551 in __sub__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1269 in _handle_finite_sets
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1900 in simplify_intersection
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1200 in __new__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 109 in intersect
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 309 in is_subset
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1348 in reduce
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1338 in __new__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 551 in __sub__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1269 in _handle_finite_sets
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1900 in simplify_intersection
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1200 in __new__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 109 in intersect
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 309 in is_subset
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1348 in reduce
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1338 in __new__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 551 in __sub__
      File "/Users/aaronmeurer/Documents/Python/sympy/sympy/sympy/sets/sets.py", line 1269 in _handle_finite_sets
      File "/
[issue16482] pdb.set_trace() clobbering traceback on error
Aaron Meurer added the comment: You can download the branch for a pull request even if the repo is deleted using this https://stackoverflow.com/a/28622034/161801. That will let you keep the original commits intact. -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue16482> ___
[issue33899] Tokenize module does not mirror "end-of-input" is newline behavior
Aaron Meurer added the comment: Is it expected behavior that comments produce NEWLINE if they don't have a newline and don't produce NEWLINE if they do (that is, '# comment' produces NEWLINE but '# comment\n' does not)? -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue33899> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
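A quick way to check the behavior being asked about is to compare the two token streams directly. This is a minimal probe, not part of any patch; the exact streams differ between Python versions, so none are hard-coded here:

```python
# Tokenize a comment with and without a trailing newline and list the
# token type names, to compare the NEWLINE/NL tokens emitted in each case.
import io
import tokenize

def token_names(src):
    return [tokenize.tok_name[tok.type]
            for tok in tokenize.generate_tokens(io.StringIO(src).readline)]

print(token_names('# comment'))
print(token_names('# comment\n'))
```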
[issue35059] Convert Py_INCREF() and PyObject_INIT() to inlined functions
Change by Aaron Hall : -- nosy: +Aaron Hall ___ Python tracker <https://bugs.python.org/issue35059> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14102] argparse: add ability to create a man page
Aaron Meurer added the comment: I see. I haven't dug much into the argparse source, so I don't have a good feel for how feasible such a tool would be to write. Such a refactoring would also be useful for generating HTML or RST versions of the help. I've previously used help2man and man2html to generate HTML help, but both tools are very limited in what they can do. -- ___ Python tracker <https://bugs.python.org/issue14102> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14102] argparse: add ability to create a man page
Aaron Meurer added the comment: Couldn't such a tool exist outside the standard library? I'm thinking of a function that you would import and wrap around the parser object, similar to how argcomplete works (https://argcomplete.readthedocs.io/en/latest/index.html). The downside is that developers would have to opt in for it to work (much like they currently have to opt in to bash completion with things like argcomplete). But it would allow much more flexibility being outside the standard library. I completely agree that it should be done in Python either way. help2man is very limited in its flexibility (it doesn't help that it's written in Perl), and there are fundamental limits to what you can do by parsing the --help output, vs. just generating correct troff from the source. Installing the manpage is a separate concern. That would need to go in setuptools or distutils, if anywhere. But before you can worry about how to install it you need to be able to generate it in the first place. -- nosy: +asmeurer ___ Python tracker <https://bugs.python.org/issue14102> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue24622] tokenize.py: missing EXACT_TOKEN_TYPES
Aaron Meurer added the comment: I would suggest adding this to the what's new document https://docs.python.org/3.7/whatsnew/3.7.html. The change affects user-facing code (the exact_type attribute of TokenInfo is OP for ... and -> tokens prior to this patch). I would also point out that this directly contradicts the documentation (from https://docs.python.org/3/library/tokenize.html, "To simplify token stream handling, all operator and delimiter tokens and Ellipsis are returned using the generic OP token type. The exact type can be determined by checking the exact_type property on the named tuple returned from tokenize.tokenize()."), so I don't see why it can't be backported. -- nosy: +Aaron.Meurer ___ Python tracker <https://bugs.python.org/issue24622> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
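The OP/exact_type distinction described in the quoted documentation can be seen in a couple of lines (a small illustrative snippet, not part of the patch under discussion):

```python
# An operator token's .type is the generic OP; .exact_type carries the
# specific operator kind (PLUS for "+").
import io
import tokenize

plus = next(tok for tok in tokenize.generate_tokens(io.StringIO('x + 1').readline)
            if tok.string == '+')
print(tokenize.tok_name[plus.type], tokenize.tok_name[plus.exact_type])  # OP PLUS
```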
[issue12154] PyDoc Partial Functions
Aaron Hall added the comment: Should pydoc treat a partial object like a function? Should a partial be an instance of a function? Should we be able to add all the nice things that functions have to it? If we want that, should we simply instantiate a function the normal way, with a new function definition? That is, instead of this:

>>> from functools import partial
>>> basetwo = partial(int, base=2)
>>> basetwo.__doc__ = 'convert base 2 string to int'

do this:

def basetwo(string: str) -> int:
    'convert base 2 string to int'
    return int(string, base=2)

Otherwise, either the partial definition or pydoc needs some work. (Cheers and bump!) -- ___ Python tracker <https://bugs.python.org/issue12154> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30129] functools.partialmethod should look more like what it's impersonating.
Change by Aaron Hall : -- nosy: +Aaron Hall ___ Python tracker <https://bugs.python.org/issue30129> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32400] inspect.isdatadescriptor false negative
Change by Aaron Hall <aaronch...@yahoo.com>: -- nosy: +Aaron Hall ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32400> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue25457] json dump fails for mixed-type keys when sort_keys is specified
Aaron Hall <aaronch...@yahoo.com> added the comment: From a design standpoint, I'm fairly certain the sort_keys argument was created due to Python's dicts being arbitrarily ordered. Coercing to strings before sorting is unsatisfactory because, e.g., numbers sort lexicographically instead of by numeric value once they are strings:

>>> import json
>>> json.dumps({i: i**2 for i in range(15)}, sort_keys=True)
'{"0": 0, "1": 1, "2": 4, "3": 9, "4": 16, "5": 25, "6": 36, "7": 49, "8": 64, "9": 81, "10": 100, "11": 121, "12": 144, "13": 169, "14": 196}'
>>> json.dumps({str(i): i**2 for i in range(15)}, sort_keys=True)
'{"0": 0, "1": 1, "10": 100, "11": 121, "12": 144, "13": 169, "14": 196, "2": 4, "3": 9, "4": 16, "5": 25, "6": 36, "7": 49, "8": 64, "9": 81}'

Changing the order of operations is just going to create more issues, IMHO. Now that users can sort their dicts prior to providing them to the function, e.g.:

>>> json.dumps({str(i): i**2 for i in range(15)})
'{"0": 0, "1": 1, "2": 4, "3": 9, "4": 16, "5": 25, "6": 36, "7": 49, "8": 64, "9": 81, "10": 100, "11": 121, "12": 144, "13": 169, "14": 196}'

we could deprecate the argument, or just keep it as-is for hysterical raisins. Regardless, I'd close this as "won't fix". -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue25457> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue25457] json dump fails for mixed-type keys when sort_keys is specified
Aaron Hall <aaronch...@yahoo.com> added the comment: Now that dicts are sortable, does that make the sort_keys argument redundant? Should this bug be changed to "won't fix"? ------ nosy: +Aaron Hall ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue25457> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue33498] pathlib.Path wants an rmtree method
Aaron Hall <aaronch...@yahoo.com> added the comment:

> What is wrong with just using shutil.rmtree()?

0. It's awkward to import just for demonstrations.
1. It's harder for new pythonists to discover.
2. A method provides discoverability in an object's namespace.
3. rmtree is a method of paths (typical usage is rmtree(path)).
4. rmtree is clearly functionality that is missing from the Path object (which has effectively rm, rm -d, but not rm -r).

> You can't deal with files with only using pathlib. You can't read ZIP archives, open temporary files, import modules, download files from Internet and open them in the webbrowser, run subprocesses, send files by e-mail, determine the MIME type of files, read WAV files with only using pathlib.

I wasn't suggesting those things. After some thought, I would probably not support those things being in pathlib either. Maybe they are "file" methods, but to me, they are not semantically "path" methods. That functionality is in much more specialized domain-oriented modules, and easy to discover. We need a recursive rmdir so that users aren't tempted to roll their own - and wind up deleting symlinked things. I *would* support some of those other shutil functions becoming Path methods, perhaps move, copy2, and copytree, as they *are* path methods (you just need to supply another destination path), but I'm not finding it to be a pain point yet. -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue33498> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue33498] pathlib.Path wants an rmtree method
New submission from Aaron Hall <aaronch...@yahoo.com>: pathlib.Path wants the rmtree method from shutil. I think we need this method for a couple of reasons.

1. In shell, rm has the -r flag - in Python, we use shutil.rmtree as a best practice for this.
2. I prefer to teach my students about pathlib.Path as opposed to other ways of dealing with files. It's a great abstraction. But it's somewhat leaky, especially when it comes to recursively deleting a directory with its contents, as I now need to import rmtree from shutil.

Perhaps we need this as a method in the abstract base class that recursively uses the methods provided by the concrete implementations. I can look at shutil's rmtree for a reference implementation. Perhaps we should just give Path.rmdir a recursive argument? It would default to False, of course, to retain current behavior. -- components: Library (Lib) messages: 316511 nosy: Aaron Hall priority: normal severity: normal status: open title: pathlib.Path wants an rmtree method type: enhancement versions: Python 3.8 ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue33498> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
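For reference, a recursive delete along the lines proposed can be sketched using only pathlib's existing methods. This is a hypothetical illustration, not the patch; the name rmtree is assumed from the proposal:

```python
from pathlib import Path

def rmtree(path: Path) -> None:
    """Recursively delete *path*. Symlinks are unlinked, never followed."""
    for child in path.iterdir():
        if child.is_dir() and not child.is_symlink():
            rmtree(child)       # recurse into real subdirectories
        else:
            child.unlink()      # files and symlinks alike
    path.rmdir()                # directory is now empty
```

Note the is_symlink() check: following a symlinked directory here is exactly the "deleting symlinked things" hazard the comment above warns about.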
[issue11594] 2to3 does not preserve line endings
Aaron Ang <awz@gmail.com> added the comment: @Jason R. Coombs You are right. I managed to reproduce the problem with a test. It only occurs when a fix is applied. Also, I figured out that the refactoring reads in the file using `open(file, 'r')`, which transforms all line endings to LF regardless of the line endings used. I think I fixed the problem; looking forward to receiving feedback. -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue11594> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
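The `open(file, 'r')` behavior described above is Python's universal-newlines translation. A small round-trip shows how newline='' avoids it; this is a sketch of the general mechanism, not the actual 2to3 patch:

```python
# Opening with newline='' leaves line endings untranslated in both
# directions, so CRLF survives a write/read round-trip instead of being
# collapsed to LF.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w', newline='') as f:
    f.write('print("hi")\r\n')
with open(path, 'r', newline='') as f:
    data = f.read()
print(repr(data))  # 'print("hi")\r\n' -- CRLF preserved
os.remove(path)
```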
[issue11594] 2to3 does not preserve line endings
Change by Aaron Ang <awz@gmail.com>: -- keywords: +patch pull_requests: +6181 stage: test needed -> patch review ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue11594> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11594] 2to3 does not preserve line endings
Aaron Ang <awz@gmail.com> added the comment: I couldn't reproduce this issue. I tried reproducing this problem by extending the TestRefactoringTool class and creating two files: one file with LF line-endings and one file with CRLF line-endings. The changes that I made can be found here: https://github.com/aaronang/cpython/commit/55e8bd317f37923e6e23780e6ae41858493e98d8. The output of the tests:

Before: b'print("hi")\n\nprint("Like bad Windows newlines?")\n'
After:  b'print("hi")\n\nprint("Like bad Windows newlines?")\n'
Before: b'print("hi")\r\n\r\nprint("Like bad Windows newlines?")\r\n'
After:  b'print("hi")\r\n\r\nprint("Like bad Windows newlines?")\r\n'

Maybe this problem has been resolved? -- nosy: +Aaron Ang ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue11594> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue31201] make test: module test that failed doesn't exist
Change by Aaron Ang <awz@gmail.com>: -- keywords: +patch pull_requests: +6119 stage: needs patch -> patch review ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31201> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue28677] difficult to parse sentence structure in "When an instance attribute is referenced that isn't a data attribute"
Change by Aaron Ang <awz@gmail.com>: -- keywords: +patch pull_requests: +5952 stage: needs patch -> patch review ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue28677> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32958] socket module calls with long host names can fail with idna codec error
Aaron Black <aaron.bl...@jpl.nasa.gov> added the comment: Just to be clear, I don't know if the socket needs to support 64 character long host name sections, so here's an example url that is at the root of my problem that I'm pretty sure it should support:

>>> import socket
>>> h = "username:long_api_key0123456789012345678901234567890123456...@www.example.com"
>>> socket.gethostbyname(h)
Traceback (most recent call last):
  File "/Users/ablack/miniconda3/lib/python3.6/encodings/idna.py", line 165, in encode
    raise UnicodeError("label empty or too long")
UnicodeError: label empty or too long

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeError: encoding with 'idna' codec failed (UnicodeError: label empty or too long)

-- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32958> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32977] added acts_like decorator to dataclasses module
New submission from Aaron Christianson <ninjaa...@gmail.com>: I'm always writing these wrapper classes where I want to selectively expose the interface of some of the methods of certain attributes on the containing object. This can mean I spend a lot of time implementing wrapper methods. That's no good. I wrote a class decorator to make this easy, and I realized it's a perfect complement to the new dataclasses module, though it can also be used with normal classes. I figured I'd check if you're interested in that. The interface looks like this:

>>> from dataclasses import dataclass, acts_like
>>> @acts_like('weight', ['__add__'])
... @acts_like('still_fresh', ['__bool__'])
... @dataclass
... class Spam:
...     weight: int
...     still_fresh: bool
>>> s = Spam(42, False)
>>> s + 3
45
>>> if not s:
...     print('the spam is bad')
the spam is bad

It's a handy way to build objects with composition, but still get some of the benefits of inheritance in a *selective* and *explicit* way. Here's the code: https://github.com/ninjaaron/cpython/blob/acts_like/Lib/dataclasses.py#L978 May require some additional twiddling to make it work with frozen dataclasses, but I don't think it should be a problem. -- components: Library (Lib) messages: 313096 nosy: eric.smith, ninjaaron priority: normal severity: normal status: open title: added acts_like decorator to dataclasses module type: enhancement versions: Python 3.7 ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32977> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
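A delegating decorator along these lines can be sketched in a few lines. This is a hypothetical illustration of the proposed interface, not the linked implementation; acts_like is the name from this proposal, not an existing dataclasses API:

```python
from dataclasses import dataclass

def acts_like(attr, methods):
    """Class decorator: forward the named special methods to self.<attr>."""
    def decorate(cls):
        for name in methods:
            # _name=name captures the current name per loop iteration
            def forwarder(self, *args, _name=name, **kwargs):
                return getattr(getattr(self, attr), _name)(*args, **kwargs)
            setattr(cls, name, forwarder)
        return cls
    return decorate

@acts_like('weight', ['__add__'])
@acts_like('still_fresh', ['__bool__'])
@dataclass
class Spam:
    weight: int
    still_fresh: bool
```

Because special methods are looked up on the type, the forwarders must be set on the class, not on instances - which is why a class decorator fits here.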
[issue32958] Urllib proxy_bypass crashes for urls containing long basic auth strings
New submission from Aaron Black <aaron.bl...@jpl.nasa.gov>: While working on a custom conda channel with authentication, I ran into the following UnicodeError:

Traceback (most recent call last):
  File "/Users/ablack/miniconda3/lib/python3.6/site-packages/conda/core/repodata.py", line 402, in fetch_repodata_remote_request
    timeout=timeout)
  File "/Users/ablack/miniconda3/lib/python3.6/site-packages/requests/sessions.py", line 521, in get
    return self.request('GET', url, **kwargs)
  File "/Users/ablack/miniconda3/lib/python3.6/site-packages/requests/sessions.py", line 499, in request
    prep.url, proxies, stream, verify, cert
  File "/Users/ablack/miniconda3/lib/python3.6/site-packages/requests/sessions.py", line 672, in merge_environment_settings
    env_proxies = get_environ_proxies(url, no_proxy=no_proxy)
  File "/Users/ablack/miniconda3/lib/python3.6/site-packages/requests/utils.py", line 692, in get_environ_proxies
    if should_bypass_proxies(url, no_proxy=no_proxy):
  File "/Users/ablack/miniconda3/lib/python3.6/site-packages/requests/utils.py", line 676, in should_bypass_proxies
    bypass = proxy_bypass(netloc)
  File "/Users/ablack/miniconda3/lib/python3.6/urllib/request.py", line 2612, in proxy_bypass
    return proxy_bypass_macosx_sysconf(host)
  File "/Users/ablack/miniconda3/lib/python3.6/urllib/request.py", line 2589, in proxy_bypass_macosx_sysconf
    return _proxy_bypass_macosx_sysconf(host, proxy_settings)
  File "/Users/ablack/miniconda3/lib/python3.6/urllib/request.py", line 2562, in _proxy_bypass_macosx_sysconf
    hostIP = socket.gethostbyname(hostonly)
UnicodeError: encoding with 'idna' codec failed (UnicodeError: label empty or too long)

The error can be consistently reproduced when the first substring of the url hostname is greater than 64 characters long, as in "0123456789012345678901234567890123456789012345678901234567890123.example.com".
This wouldn't be a problem, except that it doesn't seem to separate out credentials from the first substring of the hostname so the entire "[user]:[secret]@XXX" section must be less than 65 characters long. This is problematic for services that use longer API keys and expect their submission over basic auth. -- components: Library (Lib) messages: 312947 nosy: ablack priority: normal severity: normal status: open title: Urllib proxy_bypass crashes for urls containing long basic auth strings type: crash versions: Python 3.6 ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32958> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
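The mechanism described above can be demonstrated in isolation: the whole "user:secret@host" netloc is what reaches the IDNA codec, and its first dot-separated "label" (credentials included) exceeds the 63-character limit on DNS labels, while the hostname alone encodes fine. A sketch with a made-up key:

```python
# urlsplit's .hostname strips the userinfo, so it yields an encodable
# name even when the raw netloc does not.
from urllib.parse import urlsplit

netloc = 'username:' + 'k' * 70 + '@www.example.com'
try:
    netloc.encode('idna')          # first "label" is the 80-char credential blob
except UnicodeError as exc:
    print('idna failed:', exc)
print(urlsplit('//' + netloc).hostname)  # www.example.com
```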
[issue32300] print(os.environ.keys()) should only print the keys
Aaron Meurer <asmeu...@gmail.com> added the comment: Can't third party code write their own proxies? Why do we have to do that? -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32300> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32300] print(os.environ.keys()) should only print the keys
Aaron Meurer <asmeu...@gmail.com> added the comment: Serhiy, isn't there an option 4?

4. Make KeysView.__repr__ show list(self). Add a custom wrapper for Shelf's KeysView so that it doesn't do this.

This seems to be what Victor is suggesting. It makes the most sense to me for the common (i.e., default) case to be to show the keys (and just the keys), and for use cases that want otherwise to subclass and modify. -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32300> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32313] Wrong inspect.getsource for datetime
New submission from Aaron Meurer <asmeu...@gmail.com>: inspect.getsource(datetime) shows the Python source code for datetime, even when it is the C extension. This is very confusing. I believe it's because _datetime is used to override everything in datetime at the end of the file (here https://github.com/python/cpython/blob/11a247df88f15b51feff8a3c46005676bb29b96e/Lib/datetime.py#L2285), but __file__ is not imported. -- messages: 308255 nosy: Aaron.Meurer priority: normal severity: normal status: open title: Wrong inspect.getsource for datetime ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32313> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32300] print(os.environ.keys()) should only print the keys
Aaron Meurer <asmeu...@gmail.com> added the comment: So the best fix is to just override keys() in the _Environ class, so that it returns an EnvironKeysView class that overrides __repr__? -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32300> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
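The fix being discussed can be sketched as follows. The class name EnvironKeysView comes from the comment above; the code is illustrative, not the actual patch:

```python
# A KeysView subclass whose repr lists only the keys, so printing it
# never leaks the mapping's values.
from collections.abc import KeysView

class EnvironKeysView(KeysView):
    def __repr__(self):
        return 'EnvironKeysView({!r})'.format(list(self))

fake_environ = {'CI': 'true', 'SECRET': 'hunter2'}
print(EnvironKeysView(fake_environ))  # EnvironKeysView(['CI', 'SECRET'])
```

os.environ._Environ.keys() would then return this view instead of the generic one.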
[issue32300] print(os.environ.keys()) should only print the keys
New submission from Aaron Meurer <asmeu...@gmail.com>: Take the following scenario which happened to me recently. I am trying to debug an issue on Travis CI involving environment variables. Basically, I am not sure if an environment variable is being set correctly. So in my code, I put print(os.environ.keys()) The reason I put keys() was 1, I didn't care about the values, and 2, I have secure environment variables set on Travis. To my surprise, in the Travis logs, I found something like this KeysView(environ({'TRAVIS_STACK_FEATURES': 'basic cassandra chromium couchdb disabled-ipv6 docker docker-compose elasticsearch firefox go-toolchain google-chrome jdk memcached mongodb mysql neo4j nodejs_interpreter perl_interpreter perlbrew phantomjs postgresql python_interpreter rabbitmq redis riak ruby_interpreter sqlite xserver', 'CI': 'true', ..., 'MANPATH': '/home/travis/.nvm/versions/node/v7.4.0/share/man:/home/travis/.kiex/elixirs/elixir-1.4.5/man:/home/travis/.rvm/rubies/ruby-2.4.1/share/man:/usr/local/man:/usr/local/clang-3.9.0/share/man:/usr/local/share/man:/usr/share/man:/home/travis/.rvm/man'})) So instead of just printing the keys like I asked for, it printed the whole environment, plus "KeysView(environ(". Included here was my secure environment variable. Now, fortunately, Travis hides the contents of secure environment variables in the logs, but it didn't used to (https://blog.travis-ci.com/2017-05-08-security-advisory). Aside from being a potential security issue, it's just annoying that it prints the whole environment. The values are much larger than the keys. 
With a normal dictionary, print(d.keys()) just prints the keys:

>>> print(dict(a=1, b=2).keys())
dict_keys(['a', 'b'])

-- messages: 308190 nosy: Aaron.Meurer priority: normal severity: normal status: open title: print(os.environ.keys()) should only print the keys versions: Python 3.7 ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32300> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue24294] DeprecationWarnings should be visible by default in the interactive REPL
Aaron Meurer <asmeu...@gmail.com> added the comment: If it's of any interest to this discussion, for SymPy (for some time) we have used a custom subclass of DeprecationWarning that we enable by default: https://github.com/sympy/sympy/blob/master/sympy/utilities/exceptions.py. I don't know if there are major libraries that do something similar. Our reasoning is that we really do want everybody to see the warnings. Obviously direct users of SymPy (both interactive users and library developers) need to see them so they can fix their code. But also if library X uses a deprecated behavior and a user of library X sees a deprecation warning for SymPy inside of library X, that incentivises them to bug the library X developers to fix the behavior (or PR it). The whole point of warnings as we see it is to be as loud as possible while still keeping things working, to avoid the situation where things stop working (when the deprecated behavior is removed). And +1 to Nathaniel's point that DeprecationWarnings are about more than just the standard library. Tons of libraries use the built-in warnings, and the default warnings behavior makes no distinction between warnings coming from the standard library and warnings coming from other places. -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue24294> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
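The SymPy pattern described above - a DeprecationWarning subclass that the library switches on itself - looks roughly like this. A minimal sketch with assumed names (MyLibDeprecationWarning, old_api), not SymPy's actual code:

```python
import warnings

class MyLibDeprecationWarning(DeprecationWarning):
    """Library-specific deprecation warning, shown by default."""

# Opt this subclass in at import time; plain DeprecationWarning keeps
# Python's default (mostly hidden) behavior.
warnings.simplefilter('always', MyLibDeprecationWarning)

def old_api():
    warnings.warn('old_api() is deprecated; use the replacement API',
                  MyLibDeprecationWarning, stacklevel=2)
```

stacklevel=2 points the warning at the caller's line, which is what makes it actionable for users of library X in the scenario above.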
[issue32019] Interactive shell doesn't work with readline bracketed paste
New submission from Aaron Meurer <asmeu...@gmail.com>: Here are the steps to reproduce this:

- Compile and link Python against readline version 7.0 or higher.
- Add set enable-bracketed-paste on to your ~/.inputrc
- Start python and paste the following two lines. Make sure to use a terminal emulator that supports bracketed paste (most modern ones do). You'll need to type enter after pasting the lines.

a = 1
a

You get something like

>>> a = 1
a
  File "<stdin>", line 1
    a
    ^
SyntaxError: multiple statements found while compiling a single statement

It does work, however, if you paste something that has a newline but is a single statement, like (1, 2). Fixing this in the right way might not be so easy, due to the way that compile('single') is over-engineered. A simple fix would be to disable bracketed paste in the Python shell. I tested this with Python 3.6.3. I was not able to get the git master to compile, so I couldn't test it there. -- messages: 306176 nosy: Aaron.Meurer priority: normal severity: normal status: open title: Interactive shell doesn't work with readline bracketed paste ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue32019> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30670] pprint for dict in sorted order or insert order?
Aaron Hall <aaronch...@yahoo.com> added the comment: If/when order is guaranteed (3.7?) we should have a pprint that respects current order, -or- we should get an improved pprint (maybe named pp or print?) that understands mappings and other abstract data types. I had a conversation about pprint at the Python meetup last night. It kinda went like this: https://www.youtube.com/watch?v=NpYEJx7PkWE Maybe now's the time for improved behavior? -- nosy: +Aaron Hall ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue30670> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue31753] Unnecessary closure in ast.literal_eval
Aaron Hall <aaronch...@yahoo.com> added the comment: New information: I think I have pinpointed at least a contributor to the difference - closure lookups seem to be currently slightly slower (by a few percent) than global lookups (see https://stackoverflow.com/a/46798876/541136). And as we can see, an inner function that references itself is a closure on itself (see LOAD_DEREF):

>>> def foo():
...     def bar():
...         return bar
...     return bar
...
>>> bar = foo()
>>> import dis
>>> dis.dis(bar)
  3           0 LOAD_DEREF               0 (bar)
              2 RETURN_VALUE

This, at least to me, explains why the performance difference doesn't completely amortize away. -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31753> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
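The lookup difference can be measured directly; a small benchmark in the same spirit as the comment above (numbers vary by machine and interpreter version, so none are asserted here):

```python
import timeit

def make_closure():
    def inner():        # references itself -> closure cell, LOAD_DEREF
        return inner
    return inner

def global_func():      # references itself via globals -> LOAD_GLOBAL
    return global_func

closure_func = make_closure()
print('closure:', min(timeit.repeat(closure_func, number=10**5, repeat=3)))
print('global :', min(timeit.repeat(global_func, number=10**5, repeat=3)))
```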
[issue31753] Unnecessary closure in ast.literal_eval
Aaron Hall <aaronch...@yahoo.com> added the comment: Static analysis: My mental model currently says that rebuilding the inner function on every outer call is an expense with no offsetting benefit. It seems that a function shouldn't build a closure on every call if the closure doesn't close over anything immediately used by the functionality. But I can't explain why the cost doesn't amortize toward zero in my testing. Usage analysis: On the other hand, this doesn't seem used very much at all in the std lib. I'm not sure what the entire global benefit is to moving the closure to be a global instead - but there are about 88000 potential uses of the code on github: https://github.com/search?p=3=literal_eval=Code=%E2%9C%93 One use seems to be scanning Python code - so potentially it gets a lot of use? Alternatively: - to echo Serhiy ("Maybe it is worth to spend some time for optimizing closure creation."), perhaps the matter could be made irrelevant by looking at how we handle closures. I'm not sure why the difference didn't amortize to nearly nothing in my testing - I used Anaconda's Python 3.6.1 distribution on Linux - if that matters. Potential improvement: So to be clear, the suggested change would probably be to move _convert to a global, maybe named _literal_eval_convert (this is less half-baked than my first code post, which I somewhat regret. Note that the recursive calls would need to be edited as well as the move and dedent.):

def literal_eval(node_or_string):
    """
    Safely evaluate an expression node or a string containing a Python
    expression. The string or node provided may only consist of the
    following Python literal structures: strings, bytes, numbers, tuples,
    lists, dicts, sets, booleans, and None.
    """
    if isinstance(node_or_string, str):
        node_or_string = parse(node_or_string, mode='eval')
    if isinstance(node_or_string, Expression):
        node_or_string = node_or_string.body
    return _literal_eval_convert(node_or_string)

def _literal_eval_convert(node):
    if isinstance(node, Constant):
        return node.value
    elif isinstance(node, (Str, Bytes)):
        return node.s
    elif isinstance(node, Num):
        return node.n
    elif isinstance(node, Tuple):
        return tuple(map(_literal_eval_convert, node.elts))
    elif isinstance(node, List):
        return list(map(_literal_eval_convert, node.elts))
    elif isinstance(node, Set):
        return set(map(_literal_eval_convert, node.elts))
    elif isinstance(node, Dict):
        return dict((_literal_eval_convert(k), _literal_eval_convert(v))
                    for k, v in zip(node.keys, node.values))
    elif isinstance(node, NameConstant):
        return node.value
    elif isinstance(node, UnaryOp) and isinstance(node.op, (UAdd, USub)):
        operand = _literal_eval_convert(node.operand)
        if isinstance(operand, _NUM_TYPES):
            if isinstance(node.op, UAdd):
                return + operand
            else:
                return - operand
    elif isinstance(node, BinOp) and isinstance(node.op, (Add, Sub)):
        left = _literal_eval_convert(node.left)
        right = _literal_eval_convert(node.right)
        if isinstance(left, _NUM_TYPES) and isinstance(right, _NUM_TYPES):
            if isinstance(node.op, Add):
                return left + right
            else:
                return left - right
    raise ValueError('malformed node or string: ' + repr(node))

Note that I am not strongly committed to this issue, and won't feel badly if it is closed. It just seemed to be some low-hanging fruit in the standard library that I happened across. -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31753> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue31742] Default to emitting FutureWarning for provisional APIs
Aaron Gallagher <_...@habnab.it> added the comment: >Storing the marker attribute in __main__ [...] Can I request please not using __main__ for this? setuptools console_scripts are very common, which is a case where __main__ will be a generated (i.e. not user-controllable) file. Making application code import __main__ to set the flag would be brittle. -- nosy: +habnabit ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31742> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue31753] Unnecessary closure in ast.literal_eval
Aaron Hall <aaronch...@yahoo.com> added the comment: So... moving the closure (which may be called recursively) to the global scope actually does improve performance (for small cases, about 10% - larger cases amortize the cost of the closure being built, but in a 100 item dictionary, still about 4% faster to extricate the closure). So I'm reopening. Also suggesting we consider doing this with other functions if they build unnecessary closures in the module. `fix_missing_locations` appears to be another such function with an unnecessary closure. The closure in `dump` cannot be removed without some rewriting of the signature, as it uses variables it closes over. Not sure this would be worth it. -- resolution: rejected -> status: closed -> open ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31753> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue31753] Unnecessary closure in ast.literal_eval
Aaron Hall <aaronch...@yahoo.com> added the comment: Rejecting and withdrawing with apologies. -- resolution: -> rejected stage: -> resolved status: open -> closed ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31753> ___
[issue31753] Unnecessary closure in ast.literal_eval
New submission from Aaron Hall <aaronch...@yahoo.com>: Removing the closure seems to make the function about 10% faster. Original source code at: https://github.com/python/cpython/blob/3.6/Lib/ast.py#L40

Empirical evidence: astle.py

import timeit
from ast import literal_eval as orig_literal_eval
from ast import *

def new_literal_eval(node_or_string):
    """
    Safely evaluate an expression node or a string containing a Python
    expression.  The string or node provided may only consist of the
    following Python literal structures: strings, bytes, numbers, tuples,
    lists, dicts, sets, booleans, and None.
    """
    if isinstance(node_or_string, str):
        node_or_string = parse(node_or_string, mode='eval')
    if isinstance(node_or_string, Expression):
        node_or_string = node_or_string.body
    node = node_or_string
    if isinstance(node, Constant):
        return node.value
    elif isinstance(node, (Str, Bytes)):
        return node.s
    elif isinstance(node, Num):
        return node.n
    elif isinstance(node, Tuple):
        return tuple(map(_convert, node.elts))
    elif isinstance(node, List):
        return list(map(_convert, node.elts))
    elif isinstance(node, Set):
        return set(map(_convert, node.elts))
    elif isinstance(node, Dict):
        return dict((_convert(k), _convert(v)) for k, v
                    in zip(node.keys, node.values))
    elif isinstance(node, NameConstant):
        return node.value
    elif isinstance(node, UnaryOp) and isinstance(node.op, (UAdd, USub)):
        operand = _convert(node.operand)
        if isinstance(operand, _NUM_TYPES):
            if isinstance(node.op, UAdd):
                return + operand
            else:
                return - operand
    elif isinstance(node, BinOp) and isinstance(node.op, (Add, Sub)):
        left = _convert(node.left)
        right = _convert(node.right)
        if isinstance(left, _NUM_TYPES) and isinstance(right, _NUM_TYPES):
            if isinstance(node.op, Add):
                return left + right
            else:
                return left - right
    raise ValueError('malformed node or string: ' + repr(node))

def main():
    print('orig first, then new')
    print("'1.01'")
    print(min(timeit.repeat(lambda: orig_literal_eval('1.01'))))
    print(min(timeit.repeat(lambda: new_literal_eval('1.01'))))
    print("""'"1.01"'""")
    print(min(timeit.repeat(lambda: orig_literal_eval('"1.01"'))))
    print(min(timeit.repeat(lambda: new_literal_eval('"1.01"'))))
    print("'1'")
    print(min(timeit.repeat(lambda: orig_literal_eval('1'))))
    print(min(timeit.repeat(lambda: new_literal_eval('1'))))

if __name__ == '__main__':
    main()

Shell:

$ python -m astle
orig first, then new
'1.01'
3.518230145502848
3.274753015923377
'"1.01"'
3.189016693752965
2.906869704238048
'1'
3.40557457956146
3.157061471625788

-- components: Library (Lib) messages: 304089 nosy: Aaron Hall priority: normal severity: normal status: open title: Unnecessary closure in ast.literal_eval type: performance ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31753> ___
[issue31684] Scientific formatting of decimal 0 different from float 0
Aaron Meurer <asmeu...@gmail.com> added the comment: I meant that format() destroys information in a decimal in general. Obviously if you have n digits of precision and format with m < n, then you lose information. I also can't help but feel that we're mixing up "trailing zeros" (i.e., precision) and "exponent" (magnitude), which should be orthogonal. I'm assuming that a decimal is represented internally as coefficient*10**exponent, and that Decimal(0) sets both the coefficient and the exponent to 0. It doesn't make sense to me that a string formatting operation that requests a certain number of digits of precision should change the exponent. I get that Decimal('0.0') is different from Decimal('0.00'), but in that case, they should print differently: as '0.0' and '0.00'. It seems sly to try to maintain that through a format operation via the exponent, especially when format *in general* loses precision information for a decimal anyway (by "format" I mean format with a set number of digits requested). Especially since that "trick" only works for exactly one number, zero. If you do '{:+.30e}'.format(Decimal('1.000')) or '{:+.10e}'.format(Decimal('1.000')), no such trick is used, because no such trick can be used. You just lose information. I'm sure my mental model is off here. I'm used to sympy.Float/mpmath.mpf, where values like 0*2**i are normalized to i = 0 (e.g. mpmath.mpf((0, 0, 20, 0))._mpf_ gives (0, 0, 0, 0)), so this problem never comes up in the code that I'm used to. -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31684> ___
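[Editor's note] The internal triple under discussion can be inspected directly with Decimal.as_tuple(), and normalize() gives the zero-collapsing behaviour described for mpmath (a quick stdlib illustration, not from the thread):

```python
from decimal import Decimal

# A Decimal is stored as (sign, coefficient digits, exponent).
print(Decimal('0.00').as_tuple())   # DecimalTuple(sign=0, digits=(0,), exponent=-2)
print(Decimal('0E+19').as_tuple())  # DecimalTuple(sign=0, digits=(0,), exponent=19)

# The two zeros are numerically equal but carry different exponents,
# which is exactly the information str() and format() try to preserve.
print(Decimal('0.00') == Decimal('0E+19'))  # True
print(Decimal('0.00'), Decimal('0E+19'))    # 0.00 0E+19

# normalize() sets the exponent of a zero result to 0,
# analogous to the mpmath normalization mentioned above.
print(Decimal('0E+19').normalize())  # 0
```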
[issue31684] Scientific formatting of decimal 0 different from float 0
Aaron Meurer <asmeu...@gmail.com> added the comment: I guess I would expect that to be captured by the number of zeros printed (and obviously doing a string format operation with a set number of digits destroys that information). -- ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31684> ___
[issue31684] Scientific formatting of decimal 0 different from float 0
New submission from Aaron Meurer <asmeu...@gmail.com>:

>>> '{:+.19e}'.format(0.)
'+0.0000000000000000000e+00'
>>> import decimal
>>> '{:+.19e}'.format(decimal.Decimal(0))
'+0.0000000000000000000e+19'

Note the decimal uses e+19 instead of e+00. Obviously it's still mathematically correct, but it's annoying to have anything other than e+00 for a 0 value. -- messages: 303653 nosy: Aaron.Meurer priority: normal severity: normal status: open title: Scientific formatting of decimal 0 different from float 0 ___ Python tracker <rep...@bugs.python.org> <https://bugs.python.org/issue31684> ___